From: Alistair Francis <alistair.francis@wdc.com>

The following changes since commit c5ea91da443b458352c1b629b490ee6631775cb4:

  Merge tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu into staging (2023-09-08 10:06:25 -0400)

are available in the Git repository at:

  https://github.com/alistair23/qemu.git tags/pull-riscv-to-apply-20230911

for you to fetch changes up to e7a03409f29e2da59297d55afbaec98c96e43e3a:

  target/riscv: don't read CSR in riscv_csrrw_do64 (2023-09-11 11:45:55 +1000)

----------------------------------------------------------------
First RISC-V PR for 8.2

* Remove 'host' CPU from TCG
* riscv_htif Fixup printing on big endian hosts
* Add zmmul isa string
* Add smepmp isa string
* Fix page_check_range use in fault-only-first
* Use existing lookup tables for MixColumns
* Add RISC-V vector cryptographic instruction set support
* Implement WARL behaviour for mcountinhibit/mcounteren
* Add Zihintntl extension ISA string to DTS
* Fix zfa fleq.d and fltq.d
* Fix upper/lower mtime write calculation
* Make rtc variable names consistent
* Use abi type for linux-user target_ucontext
* Add RISC-V KVM AIA Support
* Fix riscv,pmu DT node path in the virt machine
* Update CSR bits name for svadu extension
* Mark zicond non-experimental
* Fix satp_mode_finalize() when satp_mode.supported = 0
* Fix non-KVM --enable-debug build
* Add new extensions to hwprobe
* Use accelerated helper for AES64KS1I
* Allocate itrigger timers only once
* Respect mseccfg.RLB for pmpaddrX changes
* Align the AIA model to v1.0 ratified spec
* Don't read the CSR in riscv_csrrw_do64

----------------------------------------------------------------
Akihiko Odaki (1):
      target/riscv: Allocate itrigger timers only once

Ard Biesheuvel (2):
      target/riscv: Use existing lookup tables for MixColumns
      target/riscv: Use accelerated helper for AES64KS1I

Conor Dooley (1):
      hw/riscv: virt: Fix riscv,pmu DT node path

Daniel Henrique Barboza (6):
      target/riscv/cpu.c: do not run 'host' CPU with TCG
      target/riscv/cpu.c: add zmmul isa string
      target/riscv/cpu.c: add smepmp isa string
      target/riscv: fix satp_mode_finalize() when satp_mode.supported = 0
      hw/riscv/virt.c: fix non-KVM --enable-debug build
      hw/intc/riscv_aplic.c fix non-KVM --enable-debug build

Dickon Hood (2):
      target/riscv: Refactor translation of vector-widening instruction
      target/riscv: Add Zvbb ISA extension support

Jason Chien (3):
      target/riscv: Add Zihintntl extension ISA string to DTS
      hw/intc: Fix upper/lower mtime write calculation
      hw/intc: Make rtc variable names consistent

Kiran Ostrolenk (4):
      target/riscv: Refactor some of the generic vector functionality
      target/riscv: Refactor vector-vector translation macro
      target/riscv: Refactor some of the generic vector functionality
      target/riscv: Add Zvknh ISA extension support

LIU Zhiwei (3):
      target/riscv: Fix page_check_range use in fault-only-first
      target/riscv: Fix zfa fleq.d and fltq.d
      linux-user/riscv: Use abi type for target_ucontext

Lawrence Hunter (2):
      target/riscv: Add Zvbc ISA extension support
      target/riscv: Add Zvksh ISA extension support

Leon Schuermann (1):
      target/riscv/pmp.c: respect mseccfg.RLB for pmpaddrX changes

Max Chou (3):
      crypto: Create sm4_subword
      crypto: Add SM4 constant parameter CK
      target/riscv: Add Zvksed ISA extension support

Nazar Kazakov (4):
      target/riscv: Remove redundant "cpu_vl == 0" checks
      target/riscv: Move vector translation checks
      target/riscv: Add Zvkned ISA extension support
      target/riscv: Add Zvkg ISA extension support

Nikita Shubin (1):
      target/riscv: don't read CSR in riscv_csrrw_do64

Rob Bradford (1):
      target/riscv: Implement WARL behaviour for mcountinhibit/mcounteren

Robbin Ehn (1):
      linux-user/riscv: Add new extensions to hwprobe

Thomas Huth (2):
      hw/char/riscv_htif: Fix printing of console characters on big endian hosts
      hw/char/riscv_htif: Fix the console syscall on big endian hosts

Tommy Wu (1):
      target/riscv: Align the AIA model to v1.0 ratified spec

Vineet Gupta (1):
      riscv: zicond: make non-experimental

Weiwei Li (1):
      target/riscv: Update CSR bits name for svadu extension

Yong-Xuan Wang (5):
      target/riscv: support the AIA device emulation with KVM enabled
      target/riscv: check the in-kernel irqchip support
      target/riscv: Create an KVM AIA irqchip
      target/riscv: update APLIC and IMSIC to support KVM AIA
      target/riscv: select KVM AIA in riscv virt machine

 include/crypto/aes.h | 7 +
 include/crypto/sm4.h | 9 +
 target/riscv/cpu_bits.h | 8 +-
 target/riscv/cpu_cfg.h | 9 +
 target/riscv/debug.h | 3 +-
 target/riscv/helper.h | 98 +++
 target/riscv/kvm_riscv.h | 5 +
 target/riscv/vector_internals.h | 228 +++++++
 target/riscv/insn32.decode | 58 ++
 crypto/aes.c | 4 +-
 crypto/sm4.c | 10 +
 hw/char/riscv_htif.c | 12 +-
 hw/intc/riscv_aclint.c | 11 +-
 hw/intc/riscv_aplic.c | 52 +-
 hw/intc/riscv_imsic.c | 25 +-
 hw/riscv/virt.c | 374 ++++++------
 linux-user/riscv/signal.c | 4 +-
 linux-user/syscall.c | 14 +-
 target/arm/tcg/crypto_helper.c | 10 +-
 target/riscv/cpu.c | 83 ++-
 target/riscv/cpu_helper.c | 6 +-
 target/riscv/crypto_helper.c | 51 +-
 target/riscv/csr.c | 54 +-
 target/riscv/debug.c | 15 +-
 target/riscv/kvm.c | 201 ++++++-
 target/riscv/pmp.c | 4 +
 target/riscv/translate.c | 1 +
 target/riscv/vcrypto_helper.c | 970 ++++++++++++++++++++++++++++++
 target/riscv/vector_helper.c | 245 +-------
 target/riscv/vector_internals.c | 81 +++
 target/riscv/insn_trans/trans_rvv.c.inc | 171 +++---
 target/riscv/insn_trans/trans_rvvk.c.inc | 606 +++++++++++++++++++
 target/riscv/insn_trans/trans_rvzfa.c.inc | 4 +-
 target/riscv/meson.build | 4 +-
 34 files changed, 2785 insertions(+), 652 deletions(-)
 create mode 100644 target/riscv/vector_internals.h
 create mode 100644 target/riscv/vcrypto_helper.c
 create mode 100644 target/riscv/vector_internals.c
 create mode 100644 target/riscv/insn_trans/trans_rvvk.c.inc
diff view generated by jsdifflib
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

The 'host' CPU is available in a CONFIG_KVM build and it's currently
available for all accels, but is a KVM only CPU. This means that in a
RISC-V KVM capable host we can do things like this:

$ ./build/qemu-system-riscv64 -M virt,accel=tcg -cpu host --nographic
qemu-system-riscv64: H extension requires priv spec 1.12.0

This CPU does not have a priv spec because we don't filter its extensions
via priv spec. We shouldn't be reaching riscv_cpu_realize_tcg() at all
with the 'host' CPU.

We don't have a way to filter the 'host' CPU out of the available CPU
options (-cpu help) if the build includes both KVM and TCG. What we can
do is to error out during riscv_cpu_realize_tcg() if the user chooses
the 'host' CPU with accel=tcg:

$ ./build/qemu-system-riscv64 -M virt,accel=tcg -cpu host --nographic
qemu-system-riscv64: 'host' CPU is not compatible with TCG acceleration

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230721133411.474105-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_realize_tcg(DeviceState *dev, Error **errp)
     CPURISCVState *env = &cpu->env;
     Error *local_err = NULL;
 
+    if (object_dynamic_cast(OBJECT(dev), TYPE_RISCV_CPU_HOST)) {
+        error_setg(errp, "'host' CPU is not compatible with TCG acceleration");
+        return;
+    }
+
     riscv_cpu_validate_misa_mxl(cpu, &local_err);
     if (local_err != NULL) {
         error_propagate(errp, local_err);
--
2.41.0
From: Thomas Huth <thuth@redhat.com>

The character that should be printed is stored in the 64 bit "payload"
variable. The code currently tries to print it by taking the address
of the variable and passing this pointer to qemu_chr_fe_write(). However,
this only works on little endian hosts where the least significant bits
are stored on the lowest address. To do this in a portable way, we have
to store the value in an uint8_t variable instead.

Fixes: 5033606780 ("RISC-V HTIF Console")
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230721094720.902454-2-thuth@redhat.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/char/riscv_htif.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/char/riscv_htif.c b/hw/char/riscv_htif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/riscv_htif.c
+++ b/hw/char/riscv_htif.c
@@ -XXX,XX +XXX,XX @@ static void htif_handle_tohost_write(HTIFState *s, uint64_t val_written)
             s->tohost = 0; /* clear to indicate we read */
             return;
         } else if (cmd == HTIF_CONSOLE_CMD_PUTC) {
-            qemu_chr_fe_write(&s->chr, (uint8_t *)&payload, 1);
+            uint8_t ch = (uint8_t)payload;
+            qemu_chr_fe_write(&s->chr, &ch, 1);
             resp = 0x100 | (uint8_t)payload;
         } else {
             qemu_log("HTIF device %d: unknown command\n", device);
--
2.41.0
From: Thomas Huth <thuth@redhat.com>

Values that have been read via cpu_physical_memory_read() from the
guest's memory have to be swapped in case the host endianess differs
from the guest.

Fixes: a6e13e31d5 ("riscv_htif: Support console output via proxy syscall")
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-Id: <20230721094720.902454-3-thuth@redhat.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/char/riscv_htif.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/hw/char/riscv_htif.c b/hw/char/riscv_htif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/riscv_htif.c
+++ b/hw/char/riscv_htif.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/timer.h"
 #include "qemu/error-report.h"
 #include "exec/address-spaces.h"
+#include "exec/tswap.h"
 #include "sysemu/dma.h"
 
 #define RISCV_DEBUG_HTIF 0
@@ -XXX,XX +XXX,XX @@ static void htif_handle_tohost_write(HTIFState *s, uint64_t val_written)
         } else {
             uint64_t syscall[8];
             cpu_physical_memory_read(payload, syscall, sizeof(syscall));
-            if (syscall[0] == PK_SYS_WRITE &&
-                syscall[1] == HTIF_DEV_CONSOLE &&
-                syscall[3] == HTIF_CONSOLE_CMD_PUTC) {
+            if (tswap64(syscall[0]) == PK_SYS_WRITE &&
+                tswap64(syscall[1]) == HTIF_DEV_CONSOLE &&
+                tswap64(syscall[3]) == HTIF_CONSOLE_CMD_PUTC) {
                 uint8_t ch;
-                cpu_physical_memory_read(syscall[2], &ch, 1);
+                cpu_physical_memory_read(tswap64(syscall[2]), &ch, 1);
                 qemu_chr_fe_write(&s->chr, &ch, 1);
                 resp = 0x100 | (uint8_t)payload;
             } else {
--
2.41.0
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

zmmul was promoted from experimental to ratified in commit 6d00ffad4e95.
Add a riscv,isa string for it.

Fixes: 6d00ffad4e95 ("target/riscv: move zmmul out of the experimental properties")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230720132424.371132-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
     ISA_EXT_DATA_ENTRY(zicsr, PRIV_VERSION_1_10_0, ext_icsr),
     ISA_EXT_DATA_ENTRY(zifencei, PRIV_VERSION_1_10_0, ext_ifencei),
     ISA_EXT_DATA_ENTRY(zihintpause, PRIV_VERSION_1_10_0, ext_zihintpause),
+    ISA_EXT_DATA_ENTRY(zmmul, PRIV_VERSION_1_12_0, ext_zmmul),
     ISA_EXT_DATA_ENTRY(zawrs, PRIV_VERSION_1_12_0, ext_zawrs),
     ISA_EXT_DATA_ENTRY(zfa, PRIV_VERSION_1_12_0, ext_zfa),
     ISA_EXT_DATA_ENTRY(zfbfmin, PRIV_VERSION_1_12_0, ext_zfbfmin),
--
2.41.0
New patch
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

The cpu->cfg.epmp extension is still experimental, but it already has a
'smepmp' riscv,isa string. Add it.

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230720132424.371132-3-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
     ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
     ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
     ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
+    ISA_EXT_DATA_ENTRY(smepmp, PRIV_VERSION_1_12_0, epmp),
     ISA_EXT_DATA_ENTRY(smstateen, PRIV_VERSION_1_12_0, ext_smstateen),
     ISA_EXT_DATA_ENTRY(ssaia, PRIV_VERSION_1_12_0, ext_ssaia),
     ISA_EXT_DATA_ENTRY(sscofpmf, PRIV_VERSION_1_12_0, ext_sscofpmf),
--
2.41.0
From: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>

Commit bef6f008b98 ("accel/tcg: Return bool from page_check_range")
converted the integer return value of the API to a bool. However, it
wrongly converted the use of the API in riscv fault-only-first, where
the old test page_check_range <= 0 should have become
!page_check_range.

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20230729031618.821-1-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/vector_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -XXX,XX +XXX,XX @@ vext_ldff(void *vd, void *v0, target_ulong base,
                      cpu_mmu_index(env, false));
         if (host) {
 #ifdef CONFIG_USER_ONLY
-            if (page_check_range(addr, offset, PAGE_READ)) {
+            if (!page_check_range(addr, offset, PAGE_READ)) {
                 vl = i;
                 goto ProbeSuccess;
             }
--
2.41.0
From: Ard Biesheuvel <ardb@kernel.org>

The AES MixColumns and InvMixColumns operations are relatively
expensive 4x4 matrix multiplications in GF(2^8), which is why C
implementations usually rely on precomputed lookup tables rather than
performing the calculations on demand.

Given that we already carry those tables in QEMU, we can just grab the
right value in the implementation of the RISC-V AES32 instructions. Note
that the tables in question are permuted according to the respective
Sbox, so we can omit the Sbox lookup as well in this case.

Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Philippe Mathieu-Daudé <philmd@linaro.org>
Cc: Zewen Ye <lustrew@foxmail.com>
Cc: Weiwei Li <liweiwei@iscas.ac.cn>
Cc: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20230731084043.1791984-1-ardb@kernel.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 include/crypto/aes.h         |  7 +++++++
 crypto/aes.c                 |  4 ++--
 target/riscv/crypto_helper.c | 34 ++++------------------------------
 3 files changed, 13 insertions(+), 32 deletions(-)

diff --git a/include/crypto/aes.h b/include/crypto/aes.h
index XXXXXXX..XXXXXXX 100644
--- a/include/crypto/aes.h
+++ b/include/crypto/aes.h
@@ -XXX,XX +XXX,XX @@ void AES_decrypt(const unsigned char *in, unsigned char *out,
 extern const uint8_t AES_sbox[256];
 extern const uint8_t AES_isbox[256];

+/*
+AES_Te0[x] = S [x].[02, 01, 01, 03];
+AES_Td0[x] = Si[x].[0e, 09, 0d, 0b];
+*/
+
+extern const uint32_t AES_Te0[256], AES_Td0[256];
+
 #endif
diff --git a/crypto/aes.c b/crypto/aes.c
index XXXXXXX..XXXXXXX 100644
--- a/crypto/aes.c
+++ b/crypto/aes.c
@@ -XXX,XX +XXX,XX @@ AES_Td3[x] = Si[x].[09, 0d, 0b, 0e];
 AES_Td4[x] = Si[x].[01, 01, 01, 01];
 */

-static const uint32_t AES_Te0[256] = {
+const uint32_t AES_Te0[256] = {
     0xc66363a5U, 0xf87c7c84U, 0xee777799U, 0xf67b7b8dU,
     0xfff2f20dU, 0xd66b6bbdU, 0xde6f6fb1U, 0x91c5c554U,
     0x60303050U, 0x02010103U, 0xce6767a9U, 0x562b2b7dU,
@@ -XXX,XX +XXX,XX @@ static const uint32_t AES_Te4[256] = {
     0xb0b0b0b0U, 0x54545454U, 0xbbbbbbbbU, 0x16161616U,
 };

-static const uint32_t AES_Td0[256] = {
+const uint32_t AES_Td0[256] = {
     0x51f4a750U, 0x7e416553U, 0x1a17a4c3U, 0x3a275e96U,
     0x3bab6bcbU, 0x1f9d45f1U, 0xacfa58abU, 0x4be30393U,
     0x2030fa55U, 0xad766df6U, 0x88cc7691U, 0xf5024c25U,
diff --git a/target/riscv/crypto_helper.c b/target/riscv/crypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/crypto_helper.c
+++ b/target/riscv/crypto_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "crypto/aes-round.h"
 #include "crypto/sm4.h"

-#define AES_XTIME(a) \
-    ((a << 1) ^ ((a & 0x80) ? 0x1b : 0))
-
-#define AES_GFMUL(a, b) (( \
-    (((b) & 0x1) ? (a) : 0) ^ \
-    (((b) & 0x2) ? AES_XTIME(a) : 0) ^ \
-    (((b) & 0x4) ? AES_XTIME(AES_XTIME(a)) : 0) ^ \
-    (((b) & 0x8) ? AES_XTIME(AES_XTIME(AES_XTIME(a))) : 0)) & 0xFF)
-
-static inline uint32_t aes_mixcolumn_byte(uint8_t x, bool fwd)
-{
-    uint32_t u;
-
-    if (fwd) {
-        u = (AES_GFMUL(x, 3) << 24) | (x << 16) | (x << 8) |
-            (AES_GFMUL(x, 2) << 0);
-    } else {
-        u = (AES_GFMUL(x, 0xb) << 24) | (AES_GFMUL(x, 0xd) << 16) |
-            (AES_GFMUL(x, 0x9) << 8) | (AES_GFMUL(x, 0xe) << 0);
-    }
-    return u;
-}
-
 #define sext32_xlen(x) (target_ulong)(int32_t)(x)

 static inline target_ulong aes32_operation(target_ulong shamt,
@@ -XXX,XX +XXX,XX @@ static inline target_ulong aes32_operation(target_ulong shamt,
                                            bool enc, bool mix)
 {
     uint8_t si = rs2 >> shamt;
-    uint8_t so;
     uint32_t mixed;
     target_ulong res;

     if (enc) {
-        so = AES_sbox[si];
         if (mix) {
-            mixed = aes_mixcolumn_byte(so, true);
+            mixed = be32_to_cpu(AES_Te0[si]);
         } else {
-            mixed = so;
+            mixed = AES_sbox[si];
         }
     } else {
-        so = AES_isbox[si];
         if (mix) {
-            mixed = aes_mixcolumn_byte(so, false);
+            mixed = be32_to_cpu(AES_Td0[si]);
         } else {
-            mixed = so;
+            mixed = AES_isbox[si];
         }
     }
     mixed = rol32(mixed, shamt);
--
2.41.0
1
From: Anup Patel <anup.patel@wdc.com>
1
From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
2
2
3
The RISC-V AIA (Advanced Interrupt Architecture) defines a new
3
Take some functions/macros out of `vector_helper` and put them in a new
4
interrupt controller for wired interrupts called APLIC (Advanced
4
module called `vector_internals`. This ensures they can be used by both
5
Platform Level Interrupt Controller). The APLIC is capabable of
5
vector and vector-crypto helpers (latter implemented in proceeding
6
forwarding wired interupts to RISC-V HARTs directly or as MSIs
6
commits).
7
(Message Signaled Interupts).
8
7
9
This patch adds device emulation for RISC-V AIA APLIC.
8
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
10
9
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
11
Signed-off-by: Anup Patel <anup.patel@wdc.com>
10
Signed-off-by: Max Chou <max.chou@sifive.com>
12
Signed-off-by: Anup Patel <anup@brainfault.org>
11
Acked-by: Alistair Francis <alistair.francis@wdc.com>
13
Reviewed-by: Frank Chang <frank.chang@sifive.com>
12
Message-ID: <20230711165917.2629866-2-max.chou@sifive.com>
14
Message-id: 20220204174700.534953-19-anup@brainfault.org
15
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
13
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
16
---
14
---
17
include/hw/intc/riscv_aplic.h | 79 +++
15
target/riscv/vector_internals.h | 182 +++++++++++++++++++++++++++++
18
hw/intc/riscv_aplic.c | 978 ++++++++++++++++++++++++++++++++++
16
target/riscv/vector_helper.c | 201 +-------------------------------
19
hw/intc/Kconfig | 3 +
17
target/riscv/vector_internals.c | 81 +++++++++++++
20
hw/intc/meson.build | 1 +
18
target/riscv/meson.build | 1 +
21
4 files changed, 1061 insertions(+)
19
4 files changed, 265 insertions(+), 200 deletions(-)
22
create mode 100644 include/hw/intc/riscv_aplic.h
20
create mode 100644 target/riscv/vector_internals.h
23
create mode 100644 hw/intc/riscv_aplic.c
21
create mode 100644 target/riscv/vector_internals.c
24
22
25
diff --git a/include/hw/intc/riscv_aplic.h b/include/hw/intc/riscv_aplic.h
23
diff --git a/target/riscv/vector_internals.h b/target/riscv/vector_internals.h
26
new file mode 100644
24
new file mode 100644
27
index XXXXXXX..XXXXXXX
25
index XXXXXXX..XXXXXXX
28
--- /dev/null
26
--- /dev/null
29
+++ b/include/hw/intc/riscv_aplic.h
27
+++ b/target/riscv/vector_internals.h
30
@@ -XXX,XX +XXX,XX @@
28
@@ -XXX,XX +XXX,XX @@
31
+/*
29
+/*
32
+ * RISC-V APLIC (Advanced Platform Level Interrupt Controller) interface
30
+ * RISC-V Vector Extension Internals
33
+ *
31
+ *
34
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
32
+ * Copyright (c) 2020 T-Head Semiconductor Co., Ltd. All rights reserved.
35
+ *
33
+ *
36
+ * This program is free software; you can redistribute it and/or modify it
34
+ * This program is free software; you can redistribute it and/or modify it
37
+ * under the terms and conditions of the GNU General Public License,
35
+ * under the terms and conditions of the GNU General Public License,
38
+ * version 2 or later, as published by the Free Software Foundation.
36
+ * version 2 or later, as published by the Free Software Foundation.
39
+ *
37
+ *
...
...
44
+ *
42
+ *
45
+ * You should have received a copy of the GNU General Public License along with
43
+ * You should have received a copy of the GNU General Public License along with
46
+ * this program. If not, see <http://www.gnu.org/licenses/>.
44
+ * this program. If not, see <http://www.gnu.org/licenses/>.
47
+ */
45
+ */
48
+
46
+
49
+#ifndef HW_RISCV_APLIC_H
47
+#ifndef TARGET_RISCV_VECTOR_INTERNALS_H
50
+#define HW_RISCV_APLIC_H
48
+#define TARGET_RISCV_VECTOR_INTERNALS_H
51
+
49
+
52
+#include "hw/sysbus.h"
50
+#include "qemu/osdep.h"
53
+#include "qom/object.h"
51
+#include "qemu/bitops.h"
54
+
52
+#include "cpu.h"
55
+#define TYPE_RISCV_APLIC "riscv.aplic"
53
+#include "tcg/tcg-gvec-desc.h"
56
+
54
+#include "internals.h"
57
+typedef struct RISCVAPLICState RISCVAPLICState;
55
+
58
+DECLARE_INSTANCE_CHECKER(RISCVAPLICState, RISCV_APLIC, TYPE_RISCV_APLIC)
56
+static inline uint32_t vext_nf(uint32_t desc)
59
+
57
+{
60
+#define APLIC_MIN_SIZE 0x4000
58
+ return FIELD_EX32(simd_data(desc), VDATA, NF);
61
+#define APLIC_SIZE_ALIGN(__x) (((__x) + (APLIC_MIN_SIZE - 1)) & \
59
+}
62
+ ~(APLIC_MIN_SIZE - 1))
60
+
63
+#define APLIC_SIZE(__num_harts) (APLIC_MIN_SIZE + \
61
+/*
64
+ APLIC_SIZE_ALIGN(32 * (__num_harts)))
62
+ * Note that vector data is stored in host-endian 64-bit chunks,
65
+
63
+ * so addressing units smaller than that needs a host-endian fixup.
66
+struct RISCVAPLICState {
64
+ */
67
+ /*< private >*/
65
+#if HOST_BIG_ENDIAN
68
+ SysBusDevice parent_obj;
66
+#define H1(x) ((x) ^ 7)
69
+ qemu_irq *external_irqs;
67
+#define H1_2(x) ((x) ^ 6)
70
+
68
+#define H1_4(x) ((x) ^ 4)
71
+ /*< public >*/
69
+#define H2(x) ((x) ^ 3)
72
+ MemoryRegion mmio;
70
+#define H4(x) ((x) ^ 1)
73
+ uint32_t bitfield_words;
71
+#define H8(x) ((x))
74
+ uint32_t domaincfg;
72
+#else
75
+ uint32_t mmsicfgaddr;
73
+#define H1(x) (x)
76
+ uint32_t mmsicfgaddrH;
74
+#define H1_2(x) (x)
77
+ uint32_t smsicfgaddr;
75
+#define H1_4(x) (x)
78
+ uint32_t smsicfgaddrH;
76
+#define H2(x) (x)
79
+ uint32_t genmsi;
77
+#define H4(x) (x)
80
+ uint32_t *sourcecfg;
78
+#define H8(x) (x)
81
+ uint32_t *state;
82
+ uint32_t *target;
83
+ uint32_t *idelivery;
84
+ uint32_t *iforce;
85
+ uint32_t *ithreshold;
86
+
87
+ /* topology */
88
+#define QEMU_APLIC_MAX_CHILDREN 16
89
+ struct RISCVAPLICState *parent;
90
+ struct RISCVAPLICState *children[QEMU_APLIC_MAX_CHILDREN];
91
+ uint16_t num_children;
92
+
93
+ /* config */
94
+ uint32_t aperture_size;
95
+ uint32_t hartid_base;
96
+ uint32_t num_harts;
97
+ uint32_t iprio_mask;
98
+ uint32_t num_irqs;
99
+ bool msimode;
100
+ bool mmode;
101
+};
102
+
103
+void riscv_aplic_add_child(DeviceState *parent, DeviceState *child);
104
+
105
+DeviceState *riscv_aplic_create(hwaddr addr, hwaddr size,
106
+ uint32_t hartid_base, uint32_t num_harts, uint32_t num_sources,
107
+ uint32_t iprio_bits, bool msimode, bool mmode, DeviceState *parent);
108
+
109
+#endif
79
+#endif
110
diff --git a/hw/intc/riscv_aplic.c b/hw/intc/riscv_aplic.c
80
+
81
+/*
82
+ * Encode LMUL to lmul as following:
83
+ * LMUL vlmul lmul
84
+ * 1 000 0
85
+ * 2 001 1
86
+ * 4 010 2
87
+ * 8 011 3
88
+ * - 100 -
89
+ * 1/8 101 -3
90
+ * 1/4 110 -2
91
+ * 1/2 111 -1
92
+ */
93
+static inline int32_t vext_lmul(uint32_t desc)
94
+{
95
+ return sextract32(FIELD_EX32(simd_data(desc), VDATA, LMUL), 0, 3);
96
+}
97
+
98
+static inline uint32_t vext_vm(uint32_t desc)
99
+{
100
+ return FIELD_EX32(simd_data(desc), VDATA, VM);
101
+}
102
+
103
+static inline uint32_t vext_vma(uint32_t desc)
104
+{
105
+ return FIELD_EX32(simd_data(desc), VDATA, VMA);
106
+}
107
+
108
+static inline uint32_t vext_vta(uint32_t desc)
109
+{
110
+ return FIELD_EX32(simd_data(desc), VDATA, VTA);
111
+}
112
+
113
+static inline uint32_t vext_vta_all_1s(uint32_t desc)
114
+{
115
+ return FIELD_EX32(simd_data(desc), VDATA, VTA_ALL_1S);
116
+}
117
+
118
+/*
119
+ * Earlier designs (pre-0.9) had a varying number of bits
120
+ * per mask value (MLEN). In the 0.9 design, MLEN=1.
121
+ * (Section 4.5)
122
+ */
123
+static inline int vext_elem_mask(void *v0, int index)
124
+{
125
+ int idx = index / 64;
126
+ int pos = index % 64;
127
+ return (((uint64_t *)v0)[idx] >> pos) & 1;
128
+}
129
+
130
+/*
131
+ * Get number of total elements, including prestart, body and tail elements.
132
+ * Note that when LMUL < 1, the tail includes the elements past VLMAX that
133
+ * are held in the same vector register.
134
+ */
135
+static inline uint32_t vext_get_total_elems(CPURISCVState *env, uint32_t desc,
136
+ uint32_t esz)
137
+{
138
+ uint32_t vlenb = simd_maxsz(desc);
139
+ uint32_t sew = 1 << FIELD_EX64(env->vtype, VTYPE, VSEW);
140
+ int8_t emul = ctzl(esz) - ctzl(sew) + vext_lmul(desc) < 0 ? 0 :
141
+ ctzl(esz) - ctzl(sew) + vext_lmul(desc);
142
+ return (vlenb << emul) / esz;
143
+}
144
+
145
+/* set agnostic elements to 1s */
146
+void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
147
+ uint32_t tot);
148
+
149
+/* expand macro args before macro */
150
+#define RVVCALL(macro, ...) macro(__VA_ARGS__)
151
+
152
+/* (TD, T1, T2, TX1, TX2) */
153
+#define OP_UUU_B uint8_t, uint8_t, uint8_t, uint8_t, uint8_t
154
+#define OP_UUU_H uint16_t, uint16_t, uint16_t, uint16_t, uint16_t
155
+#define OP_UUU_W uint32_t, uint32_t, uint32_t, uint32_t, uint32_t
156
+#define OP_UUU_D uint64_t, uint64_t, uint64_t, uint64_t, uint64_t
157
+
158
+/* operation of two vector elements */
159
+typedef void opivv2_fn(void *vd, void *vs1, void *vs2, int i);
160
+
161
+#define OPIVV2(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \
162
+static void do_##NAME(void *vd, void *vs1, void *vs2, int i) \
163
+{ \
164
+ TX1 s1 = *((T1 *)vs1 + HS1(i)); \
165
+ TX2 s2 = *((T2 *)vs2 + HS2(i)); \
166
+ *((TD *)vd + HD(i)) = OP(s2, s1); \
167
+}
168
+
169
+void do_vext_vv(void *vd, void *v0, void *vs1, void *vs2,
170
+ CPURISCVState *env, uint32_t desc,
171
+ opivv2_fn *fn, uint32_t esz);
172
+
173
+/* generate the helpers for OPIVV */
174
+#define GEN_VEXT_VV(NAME, ESZ) \
175
+void HELPER(NAME)(void *vd, void *v0, void *vs1, \
176
+ void *vs2, CPURISCVState *env, \
177
+ uint32_t desc) \
178
+{ \
179
+ do_vext_vv(vd, v0, vs1, vs2, env, desc, \
180
+ do_##NAME, ESZ); \
181
+}
182
+
183
+typedef void opivx2_fn(void *vd, target_long s1, void *vs2, int i);
184
+
185
+/*
186
+ * (T1)s1 gives the real operator type.
187
+ * (TX1)(T1)s1 expands the operator type of widen or narrow operations.
188
+ */
189
+#define OPIVX2(NAME, TD, T1, T2, TX1, TX2, HD, HS2, OP) \
190
+static void do_##NAME(void *vd, target_long s1, void *vs2, int i) \
191
+{ \
192
+ TX2 s2 = *((T2 *)vs2 + HS2(i)); \
193
+ *((TD *)vd + HD(i)) = OP(s2, (TX1)(T1)s1); \
194
+}
195
+
196
+void do_vext_vx(void *vd, void *v0, target_long s1, void *vs2,
197
+ CPURISCVState *env, uint32_t desc,
198
+ opivx2_fn fn, uint32_t esz);
199
+
200
+/* generate the helpers for OPIVX */
201
+#define GEN_VEXT_VX(NAME, ESZ) \
202
+void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \
203
+ void *vs2, CPURISCVState *env, \
204
+ uint32_t desc) \
205
+{ \
206
+ do_vext_vx(vd, v0, s1, vs2, env, desc, \
207
+ do_##NAME, ESZ); \
208
+}
209
+
210
+#endif /* TARGET_RISCV_VECTOR_INTERNALS_H */
211
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
212
index XXXXXXX..XXXXXXX 100644
213
--- a/target/riscv/vector_helper.c
214
+++ b/target/riscv/vector_helper.c
215
@@ -XXX,XX +XXX,XX @@
216
#include "fpu/softfloat.h"
217
#include "tcg/tcg-gvec-desc.h"
218
#include "internals.h"
219
+#include "vector_internals.h"
220
#include <math.h>
221
222
target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
223
@@ -XXX,XX +XXX,XX @@ target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
224
return vl;
225
}
226
227
-/*
228
- * Note that vector data is stored in host-endian 64-bit chunks,
229
- * so addressing units smaller than that needs a host-endian fixup.
230
- */
231
-#if HOST_BIG_ENDIAN
232
-#define H1(x) ((x) ^ 7)
233
-#define H1_2(x) ((x) ^ 6)
234
-#define H1_4(x) ((x) ^ 4)
235
-#define H2(x) ((x) ^ 3)
236
-#define H4(x) ((x) ^ 1)
237
-#define H8(x) ((x))
238
-#else
239
-#define H1(x) (x)
240
-#define H1_2(x) (x)
241
-#define H1_4(x) (x)
242
-#define H2(x) (x)
243
-#define H4(x) (x)
244
-#define H8(x) (x)
245
-#endif
246
-
247
-static inline uint32_t vext_nf(uint32_t desc)
248
-{
249
- return FIELD_EX32(simd_data(desc), VDATA, NF);
250
-}
251
-
252
-static inline uint32_t vext_vm(uint32_t desc)
253
-{
254
- return FIELD_EX32(simd_data(desc), VDATA, VM);
255
-}
256
-
257
-/*
258
- * Encode LMUL to lmul as following:
259
- * LMUL vlmul lmul
260
- * 1 000 0
261
- * 2 001 1
262
- * 4 010 2
263
- * 8 011 3
264
- * - 100 -
265
- * 1/8 101 -3
266
- * 1/4 110 -2
267
- * 1/2 111 -1
268
- */
269
-static inline int32_t vext_lmul(uint32_t desc)
270
-{
271
- return sextract32(FIELD_EX32(simd_data(desc), VDATA, LMUL), 0, 3);
272
-}
273
-
274
-static inline uint32_t vext_vta(uint32_t desc)
275
-{
276
- return FIELD_EX32(simd_data(desc), VDATA, VTA);
277
-}
278
-
279
-static inline uint32_t vext_vma(uint32_t desc)
280
-{
281
- return FIELD_EX32(simd_data(desc), VDATA, VMA);
282
-}
283
-
284
-static inline uint32_t vext_vta_all_1s(uint32_t desc)
285
-{
286
- return FIELD_EX32(simd_data(desc), VDATA, VTA_ALL_1S);
287
-}
288
-
289
/*
290
* Get the maximum number of elements can be operated.
291
*
292
@@ -XXX,XX +XXX,XX @@ static inline uint32_t vext_max_elems(uint32_t desc, uint32_t log2_esz)
293
return scale < 0 ? vlenb >> -scale : vlenb << scale;
294
}
295
296
-/*
297
- * Get number of total elements, including prestart, body and tail elements.
298
- * Note that when LMUL < 1, the tail includes the elements past VLMAX that
299
- * are held in the same vector register.
300
- */
301
-static inline uint32_t vext_get_total_elems(CPURISCVState *env, uint32_t desc,
302
- uint32_t esz)
303
-{
304
- uint32_t vlenb = simd_maxsz(desc);
305
- uint32_t sew = 1 << FIELD_EX64(env->vtype, VTYPE, VSEW);
306
- int8_t emul = ctzl(esz) - ctzl(sew) + vext_lmul(desc) < 0 ? 0 :
307
- ctzl(esz) - ctzl(sew) + vext_lmul(desc);
308
- return (vlenb << emul) / esz;
309
-}
310
-
311
static inline target_ulong adjust_addr(CPURISCVState *env, target_ulong addr)
312
{
313
return (addr & ~env->cur_pmmask) | env->cur_pmbase;
314
@@ -XXX,XX +XXX,XX @@ static void probe_pages(CPURISCVState *env, target_ulong addr,
315
}
316
}
317
318
-/* set agnostic elements to 1s */
319
-static void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
320
- uint32_t tot)
321
-{
322
- if (is_agnostic == 0) {
323
- /* policy undisturbed */
324
- return;
325
- }
326
- if (tot - cnt == 0) {
327
- return;
328
- }
329
- memset(base + cnt, -1, tot - cnt);
330
-}
331
-
332
static inline void vext_set_elem_mask(void *v0, int index,
333
uint8_t value)
334
{
335
@@ -XXX,XX +XXX,XX @@ static inline void vext_set_elem_mask(void *v0, int index,
336
((uint64_t *)v0)[idx] = deposit64(old, pos, 1, value);
337
}
338
339
-/*
340
- * Earlier designs (pre-0.9) had a varying number of bits
341
- * per mask value (MLEN). In the 0.9 design, MLEN=1.
342
- * (Section 4.5)
343
- */
344
-static inline int vext_elem_mask(void *v0, int index)
345
-{
346
- int idx = index / 64;
347
- int pos = index % 64;
348
- return (((uint64_t *)v0)[idx] >> pos) & 1;
349
-}
350
-
351
/* elements operations for load and store */
352
typedef void vext_ldst_elem_fn(CPURISCVState *env, abi_ptr addr,
353
uint32_t idx, void *vd, uintptr_t retaddr);
354
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b)
355
* Vector Integer Arithmetic Instructions
356
*/
357
358
-/* expand macro args before macro */
359
-#define RVVCALL(macro, ...) macro(__VA_ARGS__)
360
-
361
/* (TD, T1, T2, TX1, TX2) */
362
#define OP_SSS_B int8_t, int8_t, int8_t, int8_t, int8_t
363
#define OP_SSS_H int16_t, int16_t, int16_t, int16_t, int16_t
364
#define OP_SSS_W int32_t, int32_t, int32_t, int32_t, int32_t
365
#define OP_SSS_D int64_t, int64_t, int64_t, int64_t, int64_t
366
-#define OP_UUU_B uint8_t, uint8_t, uint8_t, uint8_t, uint8_t
367
-#define OP_UUU_H uint16_t, uint16_t, uint16_t, uint16_t, uint16_t
368
-#define OP_UUU_W uint32_t, uint32_t, uint32_t, uint32_t, uint32_t
369
-#define OP_UUU_D uint64_t, uint64_t, uint64_t, uint64_t, uint64_t
370
#define OP_SUS_B int8_t, uint8_t, int8_t, uint8_t, int8_t
371
#define OP_SUS_H int16_t, uint16_t, int16_t, uint16_t, int16_t
372
#define OP_SUS_W int32_t, uint32_t, int32_t, uint32_t, int32_t
373
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b)
374
#define NOP_UUU_H uint16_t, uint16_t, uint32_t, uint16_t, uint32_t
375
#define NOP_UUU_W uint32_t, uint32_t, uint64_t, uint32_t, uint64_t
376
377
-/* operation of two vector elements */
378
-typedef void opivv2_fn(void *vd, void *vs1, void *vs2, int i);
379
-
380
-#define OPIVV2(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \
381
-static void do_##NAME(void *vd, void *vs1, void *vs2, int i) \
382
-{ \
383
- TX1 s1 = *((T1 *)vs1 + HS1(i)); \
384
- TX2 s2 = *((T2 *)vs2 + HS2(i)); \
385
- *((TD *)vd + HD(i)) = OP(s2, s1); \
386
-}
387
#define DO_SUB(N, M) (N - M)
388
#define DO_RSUB(N, M) (M - N)
389
390
@@ -XXX,XX +XXX,XX @@ RVVCALL(OPIVV2, vsub_vv_h, OP_SSS_H, H2, H2, H2, DO_SUB)
391
RVVCALL(OPIVV2, vsub_vv_w, OP_SSS_W, H4, H4, H4, DO_SUB)
392
RVVCALL(OPIVV2, vsub_vv_d, OP_SSS_D, H8, H8, H8, DO_SUB)
393
394
-static void do_vext_vv(void *vd, void *v0, void *vs1, void *vs2,
- CPURISCVState *env, uint32_t desc,
- opivv2_fn *fn, uint32_t esz)
-{
- uint32_t vm = vext_vm(desc);
- uint32_t vl = env->vl;
- uint32_t total_elems = vext_get_total_elems(env, desc, esz);
- uint32_t vta = vext_vta(desc);
- uint32_t vma = vext_vma(desc);
- uint32_t i;
-
- for (i = env->vstart; i < vl; i++) {
- if (!vm && !vext_elem_mask(v0, i)) {
- /* set masked-off elements to 1s */
- vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
- continue;
- }
- fn(vd, vs1, vs2, i);
- }
- env->vstart = 0;
- /* set tail elements to 1s */
- vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
-}
-
-/* generate the helpers for OPIVV */
-#define GEN_VEXT_VV(NAME, ESZ) \
-void HELPER(NAME)(void *vd, void *v0, void *vs1, \
- void *vs2, CPURISCVState *env, \
- uint32_t desc) \
-{ \
- do_vext_vv(vd, v0, vs1, vs2, env, desc, \
- do_##NAME, ESZ); \
-}
-
GEN_VEXT_VV(vadd_vv_b, 1)
GEN_VEXT_VV(vadd_vv_h, 2)
GEN_VEXT_VV(vadd_vv_w, 4)
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_VV(vsub_vv_h, 2)
GEN_VEXT_VV(vsub_vv_w, 4)
GEN_VEXT_VV(vsub_vv_d, 8)

-typedef void opivx2_fn(void *vd, target_long s1, void *vs2, int i);
-
-/*
- * (T1)s1 gives the real operator type.
- * (TX1)(T1)s1 expands the operator type of widen or narrow operations.
- */
-#define OPIVX2(NAME, TD, T1, T2, TX1, TX2, HD, HS2, OP) \
-static void do_##NAME(void *vd, target_long s1, void *vs2, int i) \
-{ \
- TX2 s2 = *((T2 *)vs2 + HS2(i)); \
- *((TD *)vd + HD(i)) = OP(s2, (TX1)(T1)s1); \
-}

RVVCALL(OPIVX2, vadd_vx_b, OP_SSS_B, H1, H1, DO_ADD)
RVVCALL(OPIVX2, vadd_vx_h, OP_SSS_H, H2, H2, DO_ADD)
@@ -XXX,XX +XXX,XX @@ RVVCALL(OPIVX2, vrsub_vx_h, OP_SSS_H, H2, H2, DO_RSUB)
RVVCALL(OPIVX2, vrsub_vx_w, OP_SSS_W, H4, H4, DO_RSUB)
RVVCALL(OPIVX2, vrsub_vx_d, OP_SSS_D, H8, H8, DO_RSUB)

-static void do_vext_vx(void *vd, void *v0, target_long s1, void *vs2,
- CPURISCVState *env, uint32_t desc,
- opivx2_fn fn, uint32_t esz)
-{
- uint32_t vm = vext_vm(desc);
- uint32_t vl = env->vl;
- uint32_t total_elems = vext_get_total_elems(env, desc, esz);
- uint32_t vta = vext_vta(desc);
- uint32_t vma = vext_vma(desc);
- uint32_t i;
-
- for (i = env->vstart; i < vl; i++) {
- if (!vm && !vext_elem_mask(v0, i)) {
- /* set masked-off elements to 1s */
- vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
- continue;
- }
- fn(vd, s1, vs2, i);
- }
- env->vstart = 0;
- /* set tail elements to 1s */
- vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
-}
-
-/* generate the helpers for OPIVX */
-#define GEN_VEXT_VX(NAME, ESZ) \
-void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \
- void *vs2, CPURISCVState *env, \
- uint32_t desc) \
-{ \
- do_vext_vx(vd, v0, s1, vs2, env, desc, \
- do_##NAME, ESZ); \
-}
-
GEN_VEXT_VX(vadd_vx_b, 1)
GEN_VEXT_VX(vadd_vx_h, 2)
GEN_VEXT_VX(vadd_vx_w, 4)
diff --git a/target/riscv/vector_internals.c b/target/riscv/vector_internals.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/riscv/vector_internals.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * RISC-V Vector Extension Internals
+ *
+ * Copyright (c) 2020 T-Head Semiconductor Co., Ltd. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
...
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "vector_internals.h"
+
+/* set agnostic elements to 1s */
+void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
+ uint32_t tot)
+{
+ if (is_agnostic == 0) {
+ /* policy undisturbed */
+ return;
+ }
+ if (tot - cnt == 0) {
+ return ;
+ }
+ memset(base + cnt, -1, tot - cnt);
+}
+
+void do_vext_vv(void *vd, void *v0, void *vs1, void *vs2,
+ CPURISCVState *env, uint32_t desc,
+ opivv2_fn *fn, uint32_t esz)
+{
+ uint32_t vm = vext_vm(desc);
+ uint32_t vl = env->vl;
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
+ uint32_t vta = vext_vta(desc);
+ uint32_t vma = vext_vma(desc);
+ uint32_t i;
+
+ for (i = env->vstart; i < vl; i++) {
+ if (!vm && !vext_elem_mask(v0, i)) {
+ /* set masked-off elements to 1s */
+ vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
+ continue;
+ }
+ fn(vd, vs1, vs2, i);
+ }
+ env->vstart = 0;
+ /* set tail elements to 1s */
+ vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
+}
+
+void do_vext_vx(void *vd, void *v0, target_long s1, void *vs2,
+ CPURISCVState *env, uint32_t desc,
+ opivx2_fn fn, uint32_t esz)
+{
+ uint32_t vm = vext_vm(desc);
+ uint32_t vl = env->vl;
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
+ uint32_t vta = vext_vta(desc);
+ uint32_t vma = vext_vma(desc);
+ uint32_t i;
+
+ for (i = env->vstart; i < vl; i++) {
+ if (!vm && !vext_elem_mask(v0, i)) {
+ /* set masked-off elements to 1s */
+ vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
+ continue;
+ }
+ fn(vd, s1, vs2, i);
+ }
+ env->vstart = 0;
+ /* set tail elements to 1s */
+ vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
+}
diff --git a/target/riscv/meson.build b/target/riscv/meson.build
diff --git a/hw/intc/riscv_aplic.c b/hw/intc/riscv_aplic.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/intc/riscv_aplic.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * RISC-V APLIC (Advanced Platform Level Interrupt Controller)
+ *
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
...
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "qemu/log.h"
+#include "qemu/module.h"
+#include "qemu/error-report.h"
+#include "qemu/bswap.h"
+#include "exec/address-spaces.h"
+#include "hw/sysbus.h"
+#include "hw/pci/msi.h"
+#include "hw/boards.h"
+#include "hw/qdev-properties.h"
+#include "hw/intc/riscv_aplic.h"
+#include "hw/irq.h"
+#include "target/riscv/cpu.h"
+#include "sysemu/sysemu.h"
+#include "migration/vmstate.h"
+
+#define APLIC_MAX_IDC (1UL << 14)
+#define APLIC_MAX_SOURCE 1024
+#define APLIC_MIN_IPRIO_BITS 1
+#define APLIC_MAX_IPRIO_BITS 8
+#define APLIC_MAX_CHILDREN 1024
+
+#define APLIC_DOMAINCFG 0x0000
+#define APLIC_DOMAINCFG_RDONLY 0x80000000
+#define APLIC_DOMAINCFG_IE (1 << 8)
+#define APLIC_DOMAINCFG_DM (1 << 2)
+#define APLIC_DOMAINCFG_BE (1 << 0)
+
+#define APLIC_SOURCECFG_BASE 0x0004
+#define APLIC_SOURCECFG_D (1 << 10)
+#define APLIC_SOURCECFG_CHILDIDX_MASK 0x000003ff
+#define APLIC_SOURCECFG_SM_MASK 0x00000007
+#define APLIC_SOURCECFG_SM_INACTIVE 0x0
+#define APLIC_SOURCECFG_SM_DETACH 0x1
+#define APLIC_SOURCECFG_SM_EDGE_RISE 0x4
+#define APLIC_SOURCECFG_SM_EDGE_FALL 0x5
+#define APLIC_SOURCECFG_SM_LEVEL_HIGH 0x6
+#define APLIC_SOURCECFG_SM_LEVEL_LOW 0x7
+
+#define APLIC_MMSICFGADDR 0x1bc0
+#define APLIC_MMSICFGADDRH 0x1bc4
+#define APLIC_SMSICFGADDR 0x1bc8
+#define APLIC_SMSICFGADDRH 0x1bcc
+
+#define APLIC_xMSICFGADDRH_L (1UL << 31)
+#define APLIC_xMSICFGADDRH_HHXS_MASK 0x1f
+#define APLIC_xMSICFGADDRH_HHXS_SHIFT 24
+#define APLIC_xMSICFGADDRH_LHXS_MASK 0x7
+#define APLIC_xMSICFGADDRH_LHXS_SHIFT 20
+#define APLIC_xMSICFGADDRH_HHXW_MASK 0x7
+#define APLIC_xMSICFGADDRH_HHXW_SHIFT 16
+#define APLIC_xMSICFGADDRH_LHXW_MASK 0xf
+#define APLIC_xMSICFGADDRH_LHXW_SHIFT 12
+#define APLIC_xMSICFGADDRH_BAPPN_MASK 0xfff
+
+#define APLIC_xMSICFGADDR_PPN_SHIFT 12
+
+#define APLIC_xMSICFGADDR_PPN_HART(__lhxs) \
+ ((1UL << (__lhxs)) - 1)
+
+#define APLIC_xMSICFGADDR_PPN_LHX_MASK(__lhxw) \
+ ((1UL << (__lhxw)) - 1)
+#define APLIC_xMSICFGADDR_PPN_LHX_SHIFT(__lhxs) \
+ ((__lhxs))
+#define APLIC_xMSICFGADDR_PPN_LHX(__lhxw, __lhxs) \
+ (APLIC_xMSICFGADDR_PPN_LHX_MASK(__lhxw) << \
+ APLIC_xMSICFGADDR_PPN_LHX_SHIFT(__lhxs))
+
+#define APLIC_xMSICFGADDR_PPN_HHX_MASK(__hhxw) \
+ ((1UL << (__hhxw)) - 1)
+#define APLIC_xMSICFGADDR_PPN_HHX_SHIFT(__hhxs) \
+ ((__hhxs) + APLIC_xMSICFGADDR_PPN_SHIFT)
+#define APLIC_xMSICFGADDR_PPN_HHX(__hhxw, __hhxs) \
+ (APLIC_xMSICFGADDR_PPN_HHX_MASK(__hhxw) << \
+ APLIC_xMSICFGADDR_PPN_HHX_SHIFT(__hhxs))
+
+#define APLIC_xMSICFGADDRH_VALID_MASK \
+ (APLIC_xMSICFGADDRH_L | \
+ (APLIC_xMSICFGADDRH_HHXS_MASK << APLIC_xMSICFGADDRH_HHXS_SHIFT) | \
+ (APLIC_xMSICFGADDRH_LHXS_MASK << APLIC_xMSICFGADDRH_LHXS_SHIFT) | \
+ (APLIC_xMSICFGADDRH_HHXW_MASK << APLIC_xMSICFGADDRH_HHXW_SHIFT) | \
+ (APLIC_xMSICFGADDRH_LHXW_MASK << APLIC_xMSICFGADDRH_LHXW_SHIFT) | \
+ APLIC_xMSICFGADDRH_BAPPN_MASK)
+
+#define APLIC_SETIP_BASE 0x1c00
+#define APLIC_SETIPNUM 0x1cdc
+
+#define APLIC_CLRIP_BASE 0x1d00
+#define APLIC_CLRIPNUM 0x1ddc
+
+#define APLIC_SETIE_BASE 0x1e00
+#define APLIC_SETIENUM 0x1edc
+
+#define APLIC_CLRIE_BASE 0x1f00
+#define APLIC_CLRIENUM 0x1fdc
+
+#define APLIC_SETIPNUM_LE 0x2000
+#define APLIC_SETIPNUM_BE 0x2004
+
+#define APLIC_ISTATE_PENDING (1U << 0)
+#define APLIC_ISTATE_ENABLED (1U << 1)
+#define APLIC_ISTATE_ENPEND (APLIC_ISTATE_ENABLED | \
+ APLIC_ISTATE_PENDING)
+#define APLIC_ISTATE_INPUT (1U << 8)
+
+#define APLIC_GENMSI 0x3000
+
+#define APLIC_TARGET_BASE 0x3004
+#define APLIC_TARGET_HART_IDX_SHIFT 18
+#define APLIC_TARGET_HART_IDX_MASK 0x3fff
+#define APLIC_TARGET_GUEST_IDX_SHIFT 12
+#define APLIC_TARGET_GUEST_IDX_MASK 0x3f
+#define APLIC_TARGET_IPRIO_MASK 0xff
+#define APLIC_TARGET_EIID_MASK 0x7ff
+
+#define APLIC_IDC_BASE 0x4000
+#define APLIC_IDC_SIZE 32
+
+#define APLIC_IDC_IDELIVERY 0x00
+
+#define APLIC_IDC_IFORCE 0x04
+
+#define APLIC_IDC_ITHRESHOLD 0x08
+
+#define APLIC_IDC_TOPI 0x18
+#define APLIC_IDC_TOPI_ID_SHIFT 16
+#define APLIC_IDC_TOPI_ID_MASK 0x3ff
+#define APLIC_IDC_TOPI_PRIO_MASK 0xff
+
+#define APLIC_IDC_CLAIMI 0x1c
+
+static uint32_t riscv_aplic_read_input_word(RISCVAPLICState *aplic,
+ uint32_t word)
+{
+ uint32_t i, irq, ret = 0;
+
+ for (i = 0; i < 32; i++) {
+ irq = word * 32 + i;
+ if (!irq || aplic->num_irqs <= irq) {
+ continue;
+ }
+
+ ret |= ((aplic->state[irq] & APLIC_ISTATE_INPUT) ? 1 : 0) << i;
+ }
+
+ return ret;
+}
+
+static uint32_t riscv_aplic_read_pending_word(RISCVAPLICState *aplic,
+ uint32_t word)
+{
+ uint32_t i, irq, ret = 0;
+
+ for (i = 0; i < 32; i++) {
+ irq = word * 32 + i;
+ if (!irq || aplic->num_irqs <= irq) {
+ continue;
+ }
+
+ ret |= ((aplic->state[irq] & APLIC_ISTATE_PENDING) ? 1 : 0) << i;
+ }
+
+ return ret;
+}
+
+static void riscv_aplic_set_pending_raw(RISCVAPLICState *aplic,
+ uint32_t irq, bool pending)
+{
+ if (pending) {
+ aplic->state[irq] |= APLIC_ISTATE_PENDING;
+ } else {
+ aplic->state[irq] &= ~APLIC_ISTATE_PENDING;
+ }
+}
+
+static void riscv_aplic_set_pending(RISCVAPLICState *aplic,
+ uint32_t irq, bool pending)
+{
+ uint32_t sourcecfg, sm;
+
+ if ((irq <= 0) || (aplic->num_irqs <= irq)) {
+ return;
+ }
+
+ sourcecfg = aplic->sourcecfg[irq];
+ if (sourcecfg & APLIC_SOURCECFG_D) {
+ return;
+ }
+
+ sm = sourcecfg & APLIC_SOURCECFG_SM_MASK;
+ if ((sm == APLIC_SOURCECFG_SM_INACTIVE) ||
+ ((!aplic->msimode || (aplic->msimode && !pending)) &&
+ ((sm == APLIC_SOURCECFG_SM_LEVEL_HIGH) ||
+ (sm == APLIC_SOURCECFG_SM_LEVEL_LOW)))) {
+ return;
+ }
+
+ riscv_aplic_set_pending_raw(aplic, irq, pending);
+}
+
+static void riscv_aplic_set_pending_word(RISCVAPLICState *aplic,
+ uint32_t word, uint32_t value,
+ bool pending)
+{
+ uint32_t i, irq;
+
+ for (i = 0; i < 32; i++) {
+ irq = word * 32 + i;
+ if (!irq || aplic->num_irqs <= irq) {
+ continue;
+ }
+
+ if (value & (1U << i)) {
+ riscv_aplic_set_pending(aplic, irq, pending);
+ }
+ }
+}
+
+static uint32_t riscv_aplic_read_enabled_word(RISCVAPLICState *aplic,
+ int word)
+{
+ uint32_t i, irq, ret = 0;
+
+ for (i = 0; i < 32; i++) {
+ irq = word * 32 + i;
+ if (!irq || aplic->num_irqs <= irq) {
+ continue;
+ }
+
+ ret |= ((aplic->state[irq] & APLIC_ISTATE_ENABLED) ? 1 : 0) << i;
+ }
+
+ return ret;
+}
+
+static void riscv_aplic_set_enabled_raw(RISCVAPLICState *aplic,
+ uint32_t irq, bool enabled)
+{
+ if (enabled) {
+ aplic->state[irq] |= APLIC_ISTATE_ENABLED;
+ } else {
+ aplic->state[irq] &= ~APLIC_ISTATE_ENABLED;
+ }
+}
+
+static void riscv_aplic_set_enabled(RISCVAPLICState *aplic,
+ uint32_t irq, bool enabled)
+{
+ uint32_t sourcecfg, sm;
+
+ if ((irq <= 0) || (aplic->num_irqs <= irq)) {
+ return;
+ }
+
+ sourcecfg = aplic->sourcecfg[irq];
+ if (sourcecfg & APLIC_SOURCECFG_D) {
+ return;
+ }
+
+ sm = sourcecfg & APLIC_SOURCECFG_SM_MASK;
+ if (sm == APLIC_SOURCECFG_SM_INACTIVE) {
+ return;
+ }
+
+ riscv_aplic_set_enabled_raw(aplic, irq, enabled);
+}
+
+static void riscv_aplic_set_enabled_word(RISCVAPLICState *aplic,
+ uint32_t word, uint32_t value,
+ bool enabled)
+{
+ uint32_t i, irq;
+
+ for (i = 0; i < 32; i++) {
+ irq = word * 32 + i;
+ if (!irq || aplic->num_irqs <= irq) {
+ continue;
+ }
+
+ if (value & (1U << i)) {
+ riscv_aplic_set_enabled(aplic, irq, enabled);
+ }
+ }
+}
+
+static void riscv_aplic_msi_send(RISCVAPLICState *aplic,
+ uint32_t hart_idx, uint32_t guest_idx,
+ uint32_t eiid)
+{
+ uint64_t addr;
+ MemTxResult result;
+ RISCVAPLICState *aplic_m;
+ uint32_t lhxs, lhxw, hhxs, hhxw, group_idx, msicfgaddr, msicfgaddrH;
+
+ aplic_m = aplic;
+ while (aplic_m && !aplic_m->mmode) {
+ aplic_m = aplic_m->parent;
+ }
+ if (!aplic_m) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: m-level APLIC not found\n",
+ __func__);
+ return;
+ }
+
+ if (aplic->mmode) {
+ msicfgaddr = aplic_m->mmsicfgaddr;
+ msicfgaddrH = aplic_m->mmsicfgaddrH;
+ } else {
+ msicfgaddr = aplic_m->smsicfgaddr;
+ msicfgaddrH = aplic_m->smsicfgaddrH;
+ }
+
+ lhxs = (msicfgaddrH >> APLIC_xMSICFGADDRH_LHXS_SHIFT) &
+ APLIC_xMSICFGADDRH_LHXS_MASK;
+ lhxw = (msicfgaddrH >> APLIC_xMSICFGADDRH_LHXW_SHIFT) &
+ APLIC_xMSICFGADDRH_LHXW_MASK;
+ hhxs = (msicfgaddrH >> APLIC_xMSICFGADDRH_HHXS_SHIFT) &
+ APLIC_xMSICFGADDRH_HHXS_MASK;
+ hhxw = (msicfgaddrH >> APLIC_xMSICFGADDRH_HHXW_SHIFT) &
+ APLIC_xMSICFGADDRH_HHXW_MASK;
+
+ group_idx = hart_idx >> lhxw;
+ hart_idx &= APLIC_xMSICFGADDR_PPN_LHX_MASK(lhxw);
+
+ addr = msicfgaddr;
+ addr |= ((uint64_t)(msicfgaddrH & APLIC_xMSICFGADDRH_BAPPN_MASK)) << 32;
+ addr |= ((uint64_t)(group_idx & APLIC_xMSICFGADDR_PPN_HHX_MASK(hhxw))) <<
+ APLIC_xMSICFGADDR_PPN_HHX_SHIFT(hhxs);
+ addr |= ((uint64_t)(hart_idx & APLIC_xMSICFGADDR_PPN_LHX_MASK(lhxw))) <<
+ APLIC_xMSICFGADDR_PPN_LHX_SHIFT(lhxs);
+ addr |= (uint64_t)(guest_idx & APLIC_xMSICFGADDR_PPN_HART(lhxs));
+ addr <<= APLIC_xMSICFGADDR_PPN_SHIFT;
+
+ address_space_stl_le(&address_space_memory, addr,
+ eiid, MEMTXATTRS_UNSPECIFIED, &result);
+ if (result != MEMTX_OK) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: MSI write failed for "
+ "hart_index=%d guest_index=%d eiid=%d\n",
+ __func__, hart_idx, guest_idx, eiid);
+ }
+}
+
+static void riscv_aplic_msi_irq_update(RISCVAPLICState *aplic, uint32_t irq)
+{
+ uint32_t hart_idx, guest_idx, eiid;
+
+ if (!aplic->msimode || (aplic->num_irqs <= irq) ||
+ !(aplic->domaincfg & APLIC_DOMAINCFG_IE)) {
+ return;
+ }
+
+ if ((aplic->state[irq] & APLIC_ISTATE_ENPEND) != APLIC_ISTATE_ENPEND) {
+ return;
+ }
+
+ riscv_aplic_set_pending_raw(aplic, irq, false);
+
+ hart_idx = aplic->target[irq] >> APLIC_TARGET_HART_IDX_SHIFT;
+ hart_idx &= APLIC_TARGET_HART_IDX_MASK;
+ if (aplic->mmode) {
+ /* M-level APLIC ignores guest_index */
+ guest_idx = 0;
+ } else {
+ guest_idx = aplic->target[irq] >> APLIC_TARGET_GUEST_IDX_SHIFT;
+ guest_idx &= APLIC_TARGET_GUEST_IDX_MASK;
+ }
+ eiid = aplic->target[irq] & APLIC_TARGET_EIID_MASK;
+ riscv_aplic_msi_send(aplic, hart_idx, guest_idx, eiid);
+}
+
+static uint32_t riscv_aplic_idc_topi(RISCVAPLICState *aplic, uint32_t idc)
+{
+ uint32_t best_irq, best_iprio;
+ uint32_t irq, iprio, ihartidx, ithres;
+
+ if (aplic->num_harts <= idc) {
+ return 0;
+ }
+
+ ithres = aplic->ithreshold[idc];
+ best_irq = best_iprio = UINT32_MAX;
+ for (irq = 1; irq < aplic->num_irqs; irq++) {
+ if ((aplic->state[irq] & APLIC_ISTATE_ENPEND) !=
+ APLIC_ISTATE_ENPEND) {
+ continue;
+ }
+
+ ihartidx = aplic->target[irq] >> APLIC_TARGET_HART_IDX_SHIFT;
+ ihartidx &= APLIC_TARGET_HART_IDX_MASK;
+ if (ihartidx != idc) {
+ continue;
+ }
+
+ iprio = aplic->target[irq] & aplic->iprio_mask;
+ if (ithres && iprio >= ithres) {
+ continue;
+ }
+
+ if (iprio < best_iprio) {
+ best_irq = irq;
+ best_iprio = iprio;
+ }
+ }
+
+ if (best_irq < aplic->num_irqs && best_iprio <= aplic->iprio_mask) {
+ return (best_irq << APLIC_IDC_TOPI_ID_SHIFT) | best_iprio;
+ }
+
+ return 0;
+}
+
+static void riscv_aplic_idc_update(RISCVAPLICState *aplic, uint32_t idc)
+{
+ uint32_t topi;
+
+ if (aplic->msimode || aplic->num_harts <= idc) {
+ return;
+ }
+
+ topi = riscv_aplic_idc_topi(aplic, idc);
+ if ((aplic->domaincfg & APLIC_DOMAINCFG_IE) &&
+ aplic->idelivery[idc] &&
+ (aplic->iforce[idc] || topi)) {
+ qemu_irq_raise(aplic->external_irqs[idc]);
+ } else {
+ qemu_irq_lower(aplic->external_irqs[idc]);
+ }
+}
+
+static uint32_t riscv_aplic_idc_claimi(RISCVAPLICState *aplic, uint32_t idc)
+{
+ uint32_t irq, state, sm, topi = riscv_aplic_idc_topi(aplic, idc);
+
+ if (!topi) {
+ aplic->iforce[idc] = 0;
+ return 0;
+ }
+
+ irq = (topi >> APLIC_IDC_TOPI_ID_SHIFT) & APLIC_IDC_TOPI_ID_MASK;
+ sm = aplic->sourcecfg[irq] & APLIC_SOURCECFG_SM_MASK;
+ state = aplic->state[irq];
+ riscv_aplic_set_pending_raw(aplic, irq, false);
+ if ((sm == APLIC_SOURCECFG_SM_LEVEL_HIGH) &&
+ (state & APLIC_ISTATE_INPUT)) {
+ riscv_aplic_set_pending_raw(aplic, irq, true);
+ } else if ((sm == APLIC_SOURCECFG_SM_LEVEL_LOW) &&
+ !(state & APLIC_ISTATE_INPUT)) {
+ riscv_aplic_set_pending_raw(aplic, irq, true);
+ }
+ riscv_aplic_idc_update(aplic, idc);
+
+ return topi;
+}
+
+static void riscv_aplic_request(void *opaque, int irq, int level)
+{
+ bool update = false;
+ RISCVAPLICState *aplic = opaque;
+ uint32_t sourcecfg, childidx, state, idc;
+
+ assert((0 < irq) && (irq < aplic->num_irqs));
+
+ sourcecfg = aplic->sourcecfg[irq];
+ if (sourcecfg & APLIC_SOURCECFG_D) {
+ childidx = sourcecfg & APLIC_SOURCECFG_CHILDIDX_MASK;
+ if (childidx < aplic->num_children) {
+ riscv_aplic_request(aplic->children[childidx], irq, level);
+ }
+ return;
+ }
+
+ state = aplic->state[irq];
+ switch (sourcecfg & APLIC_SOURCECFG_SM_MASK) {
+ case APLIC_SOURCECFG_SM_EDGE_RISE:
+ if ((level > 0) && !(state & APLIC_ISTATE_INPUT) &&
+ !(state & APLIC_ISTATE_PENDING)) {
+ riscv_aplic_set_pending_raw(aplic, irq, true);
+ update = true;
+ }
+ break;
+ case APLIC_SOURCECFG_SM_EDGE_FALL:
+ if ((level <= 0) && (state & APLIC_ISTATE_INPUT) &&
+ !(state & APLIC_ISTATE_PENDING)) {
+ riscv_aplic_set_pending_raw(aplic, irq, true);
+ update = true;
+ }
+ break;
+ case APLIC_SOURCECFG_SM_LEVEL_HIGH:
+ if ((level > 0) && !(state & APLIC_ISTATE_PENDING)) {
+ riscv_aplic_set_pending_raw(aplic, irq, true);
+ update = true;
+ }
+ break;
+ case APLIC_SOURCECFG_SM_LEVEL_LOW:
+ if ((level <= 0) && !(state & APLIC_ISTATE_PENDING)) {
+ riscv_aplic_set_pending_raw(aplic, irq, true);
+ update = true;
+ }
+ break;
+ default:
+ break;
+ }
+
+ if (level <= 0) {
+ aplic->state[irq] &= ~APLIC_ISTATE_INPUT;
+ } else {
+ aplic->state[irq] |= APLIC_ISTATE_INPUT;
+ }
+
+ if (update) {
+ if (aplic->msimode) {
+ riscv_aplic_msi_irq_update(aplic, irq);
+ } else {
+ idc = aplic->target[irq] >> APLIC_TARGET_HART_IDX_SHIFT;
+ idc &= APLIC_TARGET_HART_IDX_MASK;
+ riscv_aplic_idc_update(aplic, idc);
+ }
+ }
+}
+
+static uint64_t riscv_aplic_read(void *opaque, hwaddr addr, unsigned size)
+{
+ uint32_t irq, word, idc;
+ RISCVAPLICState *aplic = opaque;
+
+ /* Reads must be 4 byte words */
+ if ((addr & 0x3) != 0) {
+ goto err;
+ }
+
+ if (addr == APLIC_DOMAINCFG) {
+ return APLIC_DOMAINCFG_RDONLY | aplic->domaincfg |
+ (aplic->msimode ? APLIC_DOMAINCFG_DM : 0);
+ } else if ((APLIC_SOURCECFG_BASE <= addr) &&
+ (addr < (APLIC_SOURCECFG_BASE + (aplic->num_irqs - 1) * 4))) {
+ irq = ((addr - APLIC_SOURCECFG_BASE) >> 2) + 1;
+ return aplic->sourcecfg[irq];
+ } else if (aplic->mmode && aplic->msimode &&
+ (addr == APLIC_MMSICFGADDR)) {
+ return aplic->mmsicfgaddr;
+ } else if (aplic->mmode && aplic->msimode &&
+ (addr == APLIC_MMSICFGADDRH)) {
+ return aplic->mmsicfgaddrH;
+ } else if (aplic->mmode && aplic->msimode &&
+ (addr == APLIC_SMSICFGADDR)) {
+ /*
+ * Registers SMSICFGADDR and SMSICFGADDRH are implemented only if:
+ * (a) the interrupt domain is at machine level
+ * (b) the domain's harts implement supervisor mode
+ * (c) the domain has one or more child supervisor-level domains
+ * that support MSI delivery mode (domaincfg.DM is not read-
+ * only zero in at least one of the supervisor-level child
+ * domains).
+ */
+ return (aplic->num_children) ? aplic->smsicfgaddr : 0;
+ } else if (aplic->mmode && aplic->msimode &&
+ (addr == APLIC_SMSICFGADDRH)) {
+ return (aplic->num_children) ? aplic->smsicfgaddrH : 0;
+ } else if ((APLIC_SETIP_BASE <= addr) &&
+ (addr < (APLIC_SETIP_BASE + aplic->bitfield_words * 4))) {
+ word = (addr - APLIC_SETIP_BASE) >> 2;
+ return riscv_aplic_read_pending_word(aplic, word);
+ } else if (addr == APLIC_SETIPNUM) {
+ return 0;
+ } else if ((APLIC_CLRIP_BASE <= addr) &&
+ (addr < (APLIC_CLRIP_BASE + aplic->bitfield_words * 4))) {
+ word = (addr - APLIC_CLRIP_BASE) >> 2;
+ return riscv_aplic_read_input_word(aplic, word);
+ } else if (addr == APLIC_CLRIPNUM) {
+ return 0;
+ } else if ((APLIC_SETIE_BASE <= addr) &&
+ (addr < (APLIC_SETIE_BASE + aplic->bitfield_words * 4))) {
+ word = (addr - APLIC_SETIE_BASE) >> 2;
+ return riscv_aplic_read_enabled_word(aplic, word);
+ } else if (addr == APLIC_SETIENUM) {
+ return 0;
+ } else if ((APLIC_CLRIE_BASE <= addr) &&
+ (addr < (APLIC_CLRIE_BASE + aplic->bitfield_words * 4))) {
+ return 0;
+ } else if (addr == APLIC_CLRIENUM) {
+ return 0;
+ } else if (addr == APLIC_SETIPNUM_LE) {
+ return 0;
+ } else if (addr == APLIC_SETIPNUM_BE) {
+ return 0;
+ } else if (addr == APLIC_GENMSI) {
+ return (aplic->msimode) ? aplic->genmsi : 0;
+ } else if ((APLIC_TARGET_BASE <= addr) &&
+ (addr < (APLIC_TARGET_BASE + (aplic->num_irqs - 1) * 4))) {
+ irq = ((addr - APLIC_TARGET_BASE) >> 2) + 1;
+ return aplic->target[irq];
+ } else if (!aplic->msimode && (APLIC_IDC_BASE <= addr) &&
+ (addr < (APLIC_IDC_BASE + aplic->num_harts * APLIC_IDC_SIZE))) {
+ idc = (addr - APLIC_IDC_BASE) / APLIC_IDC_SIZE;
+ switch (addr - (APLIC_IDC_BASE + idc * APLIC_IDC_SIZE)) {
+ case APLIC_IDC_IDELIVERY:
+ return aplic->idelivery[idc];
+ case APLIC_IDC_IFORCE:
+ return aplic->iforce[idc];
+ case APLIC_IDC_ITHRESHOLD:
+ return aplic->ithreshold[idc];
+ case APLIC_IDC_TOPI:
+ return riscv_aplic_idc_topi(aplic, idc);
+ case APLIC_IDC_CLAIMI:
+ return riscv_aplic_idc_claimi(aplic, idc);
+ default:
+ goto err;
+ };
+ }
+
+err:
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "%s: Invalid register read 0x%" HWADDR_PRIx "\n",
+ __func__, addr);
+ return 0;
+}
+
+static void riscv_aplic_write(void *opaque, hwaddr addr, uint64_t value,
+ unsigned size)
+{
+ RISCVAPLICState *aplic = opaque;
+ uint32_t irq, word, idc = UINT32_MAX;
+
+ /* Writes must be 4 byte words */
+ if ((addr & 0x3) != 0) {
+ goto err;
+ }
+
+ if (addr == APLIC_DOMAINCFG) {
+ /* Only IE bit writeable at the moment */
+ value &= APLIC_DOMAINCFG_IE;
+ aplic->domaincfg = value;
+ } else if ((APLIC_SOURCECFG_BASE <= addr) &&
+ (addr < (APLIC_SOURCECFG_BASE + (aplic->num_irqs - 1) * 4))) {
+ irq = ((addr - APLIC_SOURCECFG_BASE) >> 2) + 1;
+ if (!aplic->num_children && (value & APLIC_SOURCECFG_D)) {
+ value = 0;
+ }
+ if (value & APLIC_SOURCECFG_D) {
+ value &= (APLIC_SOURCECFG_D | APLIC_SOURCECFG_CHILDIDX_MASK);
+ } else {
+ value &= (APLIC_SOURCECFG_D | APLIC_SOURCECFG_SM_MASK);
+ }
+ aplic->sourcecfg[irq] = value;
+ if ((aplic->sourcecfg[irq] & APLIC_SOURCECFG_D) ||
+ (aplic->sourcecfg[irq] == 0)) {
+ riscv_aplic_set_pending_raw(aplic, irq, false);
+ riscv_aplic_set_enabled_raw(aplic, irq, false);
+ }
+ } else if (aplic->mmode && aplic->msimode &&
+ (addr == APLIC_MMSICFGADDR)) {
+ if (!(aplic->mmsicfgaddrH & APLIC_xMSICFGADDRH_L)) {
+ aplic->mmsicfgaddr = value;
+ }
+ } else if (aplic->mmode && aplic->msimode &&
+ (addr == APLIC_MMSICFGADDRH)) {
+ if (!(aplic->mmsicfgaddrH & APLIC_xMSICFGADDRH_L)) {
+ aplic->mmsicfgaddrH = value & APLIC_xMSICFGADDRH_VALID_MASK;
+ }
+ } else if (aplic->mmode && aplic->msimode &&
+ (addr == APLIC_SMSICFGADDR)) {
+ /*
+ * Registers SMSICFGADDR and SMSICFGADDRH are implemented only if:
+ * (a) the interrupt domain is at machine level
+ * (b) the domain's harts implement supervisor mode
+ * (c) the domain has one or more child supervisor-level domains
+ * that support MSI delivery mode (domaincfg.DM is not read-
+ * only zero in at least one of the supervisor-level child
+ * domains).
+ */
+ if (aplic->num_children &&
+ !(aplic->smsicfgaddrH & APLIC_xMSICFGADDRH_L)) {
+ aplic->smsicfgaddr = value;
+ }
+ } else if (aplic->mmode && aplic->msimode &&
+ (addr == APLIC_SMSICFGADDRH)) {
+ if (aplic->num_children &&
+ !(aplic->smsicfgaddrH & APLIC_xMSICFGADDRH_L)) {
+ aplic->smsicfgaddrH = value & APLIC_xMSICFGADDRH_VALID_MASK;
+ }
+ } else if ((APLIC_SETIP_BASE <= addr) &&
+ (addr < (APLIC_SETIP_BASE + aplic->bitfield_words * 4))) {
+ word = (addr - APLIC_SETIP_BASE) >> 2;
+ riscv_aplic_set_pending_word(aplic, word, value, true);
+ } else if (addr == APLIC_SETIPNUM) {
+ riscv_aplic_set_pending(aplic, value, true);
+ } else if ((APLIC_CLRIP_BASE <= addr) &&
+ (addr < (APLIC_CLRIP_BASE + aplic->bitfield_words * 4))) {
+ word = (addr - APLIC_CLRIP_BASE) >> 2;
+ riscv_aplic_set_pending_word(aplic, word, value, false);
+ } else if (addr == APLIC_CLRIPNUM) {
+ riscv_aplic_set_pending(aplic, value, false);
+ } else if ((APLIC_SETIE_BASE <= addr) &&
+ (addr < (APLIC_SETIE_BASE + aplic->bitfield_words * 4))) {
+ word = (addr - APLIC_SETIE_BASE) >> 2;
+ riscv_aplic_set_enabled_word(aplic, word, value, true);
+ } else if (addr == APLIC_SETIENUM) {
+ riscv_aplic_set_enabled(aplic, value, true);
+ } else if ((APLIC_CLRIE_BASE <= addr) &&
+ (addr < (APLIC_CLRIE_BASE + aplic->bitfield_words * 4))) {
+ word = (addr - APLIC_CLRIE_BASE) >> 2;
+ riscv_aplic_set_enabled_word(aplic, word, value, false);
+ } else if (addr == APLIC_CLRIENUM) {
+ riscv_aplic_set_enabled(aplic, value, false);
+ } else if (addr == APLIC_SETIPNUM_LE) {
+ riscv_aplic_set_pending(aplic, value, true);
+ } else if (addr == APLIC_SETIPNUM_BE) {
+ riscv_aplic_set_pending(aplic, bswap32(value), true);
+ } else if (addr == APLIC_GENMSI) {
+ if (aplic->msimode) {
+ aplic->genmsi = value & ~(APLIC_TARGET_GUEST_IDX_MASK <<
+ APLIC_TARGET_GUEST_IDX_SHIFT);
+ riscv_aplic_msi_send(aplic,
+ value >> APLIC_TARGET_HART_IDX_SHIFT,
+ 0,
+ value & APLIC_TARGET_EIID_MASK);
+ }
+ } else if ((APLIC_TARGET_BASE <= addr) &&
+ (addr < (APLIC_TARGET_BASE + (aplic->num_irqs - 1) * 4))) {
+ irq = ((addr - APLIC_TARGET_BASE) >> 2) + 1;
+ if (aplic->msimode) {
+ aplic->target[irq] = value;
+ } else {
+ aplic->target[irq] = (value & ~APLIC_TARGET_IPRIO_MASK) |
+ ((value & aplic->iprio_mask) ?
+ (value & aplic->iprio_mask) : 1);
+ }
+ } else if (!aplic->msimode && (APLIC_IDC_BASE <= addr) &&
+ (addr < (APLIC_IDC_BASE + aplic->num_harts * APLIC_IDC_SIZE))) {
+ idc = (addr - APLIC_IDC_BASE) / APLIC_IDC_SIZE;
+ switch (addr - (APLIC_IDC_BASE + idc * APLIC_IDC_SIZE)) {
+ case APLIC_IDC_IDELIVERY:
+ aplic->idelivery[idc] = value & 0x1;
+ break;
+ case APLIC_IDC_IFORCE:
+ aplic->iforce[idc] = value & 0x1;
+ break;
+ case APLIC_IDC_ITHRESHOLD:
+ aplic->ithreshold[idc] = value & aplic->iprio_mask;
+ break;
+ default:
+ goto err;
+ };
+ } else {
+ goto err;
+ }
+
+ if (aplic->msimode) {
+ for (irq = 1; irq < aplic->num_irqs; irq++) {
+ riscv_aplic_msi_irq_update(aplic, irq);
+ }
+ } else {
+ if (idc == UINT32_MAX) {
+ for (idc = 0; idc < aplic->num_harts; idc++) {
+ riscv_aplic_idc_update(aplic, idc);
+ }
+ } else {
+ riscv_aplic_idc_update(aplic, idc);
+ }
+ }
+
+ return;
+
+err:
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "%s: Invalid register write 0x%" HWADDR_PRIx "\n",
+ __func__, addr);
+}
+
+static const MemoryRegionOps riscv_aplic_ops = {
+ .read = riscv_aplic_read,
+ .write = riscv_aplic_write,
+ .endianness = DEVICE_LITTLE_ENDIAN,
+ .valid = {
+ .min_access_size = 4,
+ .max_access_size = 4
+ }
+};
+
+static void riscv_aplic_realize(DeviceState *dev, Error **errp)
+{
+ uint32_t i;
+ RISCVAPLICState *aplic = RISCV_APLIC(dev);
+
+ aplic->bitfield_words = (aplic->num_irqs + 31) >> 5;
+ aplic->sourcecfg = g_new0(uint32_t, aplic->num_irqs);
+ aplic->state = g_new(uint32_t, aplic->num_irqs);
+ aplic->target = g_new0(uint32_t, aplic->num_irqs);
+ if (!aplic->msimode) {
+ for (i = 0; i < aplic->num_irqs; i++) {
+ aplic->target[i] = 1;
+ }
+ }
+ aplic->idelivery = g_new0(uint32_t, aplic->num_harts);
+ aplic->iforce = g_new0(uint32_t, aplic->num_harts);
+ aplic->ithreshold = g_new0(uint32_t, aplic->num_harts);
+
+ memory_region_init_io(&aplic->mmio, OBJECT(dev), &riscv_aplic_ops, aplic,
+ TYPE_RISCV_APLIC, aplic->aperture_size);
+ sysbus_init_mmio(SYS_BUS_DEVICE(dev), &aplic->mmio);
+
+ /*
+ * Only root APLICs have hardware IRQ lines. All non-root APLICs
+ * have IRQ lines delegated by their parent APLIC.
+ */
+ if (!aplic->parent) {
+ qdev_init_gpio_in(dev, riscv_aplic_request, aplic->num_irqs);
+ }
+
+ /* Create output IRQ lines for non-MSI mode */
+ if (!aplic->msimode) {
+ aplic->external_irqs = g_malloc(sizeof(qemu_irq) * aplic->num_harts);
+ qdev_init_gpio_out(dev, aplic->external_irqs, aplic->num_harts);
+
+ /* Claim the CPU interrupt to be triggered by this APLIC */
+ for (i = 0; i < aplic->num_harts; i++) {
+ RISCVCPU *cpu = RISCV_CPU(qemu_get_cpu(aplic->hartid_base + i));
+ if (riscv_cpu_claim_interrupts(cpu,
+ (aplic->mmode) ? MIP_MEIP : MIP_SEIP) < 0) {
+ error_report("%s already claimed",
+ (aplic->mmode) ? "MEIP" : "SEIP");
+ exit(1);
+ }
+ }
+ }
+
+ msi_nonbroken = true;
+}
+
+static Property riscv_aplic_properties[] = {
+ DEFINE_PROP_UINT32("aperture-size", RISCVAPLICState, aperture_size, 0),
+ DEFINE_PROP_UINT32("hartid-base", RISCVAPLICState, hartid_base, 0),
+ DEFINE_PROP_UINT32("num-harts", RISCVAPLICState, num_harts, 0),
+ DEFINE_PROP_UINT32("iprio-mask", RISCVAPLICState, iprio_mask, 0),
+ DEFINE_PROP_UINT32("num-irqs", RISCVAPLICState, num_irqs, 0),
+ DEFINE_PROP_BOOL("msimode", RISCVAPLICState, msimode, 0),
+ DEFINE_PROP_BOOL("mmode", RISCVAPLICState, mmode, 0),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
+static const VMStateDescription vmstate_riscv_aplic = {
+ .name = "riscv_aplic",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .fields = (VMStateField[]) {
+ VMSTATE_UINT32(domaincfg, RISCVAPLICState),
+ VMSTATE_UINT32(mmsicfgaddr, RISCVAPLICState),
+ VMSTATE_UINT32(mmsicfgaddrH, RISCVAPLICState),
+ VMSTATE_UINT32(smsicfgaddr, RISCVAPLICState),
+ VMSTATE_UINT32(smsicfgaddrH, RISCVAPLICState),
+ VMSTATE_UINT32(genmsi, RISCVAPLICState),
+ VMSTATE_VARRAY_UINT32(sourcecfg, RISCVAPLICState,
+ num_irqs, 0,
+ vmstate_info_uint32, uint32_t),
+ VMSTATE_VARRAY_UINT32(state, RISCVAPLICState,
+ num_irqs, 0,
+ vmstate_info_uint32, uint32_t),
+ VMSTATE_VARRAY_UINT32(target, RISCVAPLICState,
+ num_irqs, 0,
+ vmstate_info_uint32, uint32_t),
+ VMSTATE_VARRAY_UINT32(idelivery, RISCVAPLICState,
+ num_harts, 0,
+ vmstate_info_uint32, uint32_t),
+ VMSTATE_VARRAY_UINT32(iforce, RISCVAPLICState,
+ num_harts, 0,
+ vmstate_info_uint32, uint32_t),
+ VMSTATE_VARRAY_UINT32(ithreshold, RISCVAPLICState,
+ num_harts, 0,
+ vmstate_info_uint32, uint32_t),
+ VMSTATE_END_OF_LIST()
+ }
+};
+
+static void riscv_aplic_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+
+ device_class_set_props(dc, riscv_aplic_properties);
+ dc->realize = riscv_aplic_realize;
+ dc->vmsd = &vmstate_riscv_aplic;
+}
+
+static const TypeInfo riscv_aplic_info = {
1018
+ .name = TYPE_RISCV_APLIC,
1019
+ .parent = TYPE_SYS_BUS_DEVICE,
1020
+ .instance_size = sizeof(RISCVAPLICState),
1021
+ .class_init = riscv_aplic_class_init,
1022
+};
1023
+
1024
+static void riscv_aplic_register_types(void)
1025
+{
1026
+ type_register_static(&riscv_aplic_info);
1027
+}
1028
+
1029
+type_init(riscv_aplic_register_types)
1030
+
1031
+/*
1032
+ * Add a APLIC device to another APLIC device as child for
1033
+ * interrupt delegation.
1034
+ */
1035
+void riscv_aplic_add_child(DeviceState *parent, DeviceState *child)
1036
+{
1037
+ RISCVAPLICState *caplic, *paplic;
1038
+
1039
+ assert(parent && child);
1040
+ caplic = RISCV_APLIC(child);
1041
+ paplic = RISCV_APLIC(parent);
1042
+
1043
+ assert(paplic->num_irqs == caplic->num_irqs);
1044
+ assert(paplic->num_children <= QEMU_APLIC_MAX_CHILDREN);
1045
+
1046
+ caplic->parent = paplic;
1047
+ paplic->children[paplic->num_children] = caplic;
1048
+ paplic->num_children++;
1049
+}
1050
+
1051
+/*
1052
+ * Create APLIC device.
1053
+ */
1054
+DeviceState *riscv_aplic_create(hwaddr addr, hwaddr size,
1055
+ uint32_t hartid_base, uint32_t num_harts, uint32_t num_sources,
1056
+ uint32_t iprio_bits, bool msimode, bool mmode, DeviceState *parent)
1057
+{
1058
+ DeviceState *dev = qdev_new(TYPE_RISCV_APLIC);
1059
+ uint32_t i;
1060
+
1061
+ assert(num_harts < APLIC_MAX_IDC);
1062
+ assert((APLIC_IDC_BASE + (num_harts * APLIC_IDC_SIZE)) <= size);
1063
+ assert(num_sources < APLIC_MAX_SOURCE);
1064
+ assert(APLIC_MIN_IPRIO_BITS <= iprio_bits);
1065
+ assert(iprio_bits <= APLIC_MAX_IPRIO_BITS);
1066
+
1067
+ qdev_prop_set_uint32(dev, "aperture-size", size);
1068
+ qdev_prop_set_uint32(dev, "hartid-base", hartid_base);
1069
+ qdev_prop_set_uint32(dev, "num-harts", num_harts);
1070
+ qdev_prop_set_uint32(dev, "iprio-mask", ((1U << iprio_bits) - 1));
1071
+ qdev_prop_set_uint32(dev, "num-irqs", num_sources + 1);
1072
+ qdev_prop_set_bit(dev, "msimode", msimode);
1073
+ qdev_prop_set_bit(dev, "mmode", mmode);
1074
+
1075
+ sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
1076
+ sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, addr);
1077
+
1078
+ if (parent) {
1079
+ riscv_aplic_add_child(parent, dev);
1080
+ }
1081
+
1082
+ if (!msimode) {
1083
+ for (i = 0; i < num_harts; i++) {
1084
+ CPUState *cpu = qemu_get_cpu(hartid_base + i);
1085
+
1086
+ qdev_connect_gpio_out_named(dev, NULL, i,
1087
+ qdev_get_gpio_in(DEVICE(cpu),
1088
+ (mmode) ? IRQ_M_EXT : IRQ_S_EXT));
1089
+ }
1090
+ }
1091
+
1092
+ return dev;
1093
+}
1094
diff --git a/hw/intc/Kconfig b/hw/intc/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/Kconfig
+++ b/hw/intc/Kconfig
@@ -XXX,XX +XXX,XX @@ config LOONGSON_LIOINTC
 config RISCV_ACLINT
     bool
 
+config RISCV_APLIC
+    bool
+
 config SIFIVE_PLIC
     bool
 
diff --git a/hw/intc/meson.build b/hw/intc/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/meson.build
+++ b/hw/intc/meson.build
@@ -XXX,XX +XXX,XX @@ specific_ss.add(when: 'CONFIG_S390_FLIC', if_true: files('s390_flic.c'))
 specific_ss.add(when: 'CONFIG_S390_FLIC_KVM', if_true: files('s390_flic_kvm.c'))
 specific_ss.add(when: 'CONFIG_SH_INTC', if_true: files('sh_intc.c'))
 specific_ss.add(when: 'CONFIG_RISCV_ACLINT', if_true: files('riscv_aclint.c'))
+specific_ss.add(when: 'CONFIG_RISCV_APLIC', if_true: files('riscv_aplic.c'))
 specific_ss.add(when: 'CONFIG_SIFIVE_PLIC', if_true: files('sifive_plic.c'))
 specific_ss.add(when: 'CONFIG_XICS', if_true: files('xics.c'))
 specific_ss.add(when: ['CONFIG_KVM', 'CONFIG_XICS'],
--
2.34.1

diff --git a/target/riscv/meson.build b/target/riscv/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/meson.build
+++ b/target/riscv/meson.build
@@ -XXX,XX +XXX,XX @@ riscv_ss.add(files(
   'gdbstub.c',
   'op_helper.c',
   'vector_helper.c',
+  'vector_internals.c',
   'bitmanip_helper.c',
   'translate.c',
   'm128_helper.c',
--
2.41.0
diff view generated by jsdifflib
From: Philipp Tomsich <philipp.tomsich@vrull.eu>

To split up the decoder into multiple functions (both to support
vendor-specific opcodes in separate files and to simplify maintenance
of orthogonal extensions), this changes decode_op to iterate over a
table of decoders predicated on guard functions.

This commit only adds the new structure and the table, allowing for
the easy addition of additional decoders in the future.

Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220202005249.3566542-6-philipp.tomsich@vrull.eu>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/translate.c | 32 +++++++++++++++++++++++++++-----
 1 file changed, 27 insertions(+), 5 deletions(-)

diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@ static inline bool has_ext(DisasContext *ctx, uint32_t ext)
     return ctx->misa_ext & ext;
 }
 
+static bool always_true_p(DisasContext *ctx __attribute__((__unused__)))
+{
+    return true;
+}
+
 #ifdef TARGET_RISCV32
 #define get_xl(ctx) MXL_RV32
 #elif defined(CONFIG_USER_ONLY)
@@ -XXX,XX +XXX,XX @@ static uint32_t opcode_at(DisasContextBase *dcbase, target_ulong pc)
 
 static void decode_opc(CPURISCVState *env, DisasContext *ctx, uint16_t opcode)
 {
-    /* check for compressed insn */
+    /*
+     * A table with predicate (i.e., guard) functions and decoder functions
+     * that are tested in-order until a decoder matches onto the opcode.
+     */
+    static const struct {
+        bool (*guard_func)(DisasContext *);
+        bool (*decode_func)(DisasContext *, uint32_t);
+    } decoders[] = {
+        { always_true_p, decode_insn32 },
+    };
+
+    /* Check for compressed insn */
     if (extract16(opcode, 0, 2) != 3) {
         if (!has_ext(ctx, RVC)) {
             gen_exception_illegal(ctx);
         } else {
             ctx->opcode = opcode;
             ctx->pc_succ_insn = ctx->base.pc_next + 2;
-            if (!decode_insn16(ctx, opcode)) {
-                gen_exception_illegal(ctx);
+            if (decode_insn16(ctx, opcode)) {
+                return;
             }
         }
     } else {
@@ -XXX,XX +XXX,XX @@ static void decode_opc(CPURISCVState *env, DisasContext *ctx, uint16_t opcode)
                              ctx->base.pc_next + 2));
         ctx->opcode = opcode32;
         ctx->pc_succ_insn = ctx->base.pc_next + 4;
-        if (!decode_insn32(ctx, opcode32)) {
-            gen_exception_illegal(ctx);
+
+        for (size_t i = 0; i < ARRAY_SIZE(decoders); ++i) {
+            if (decoders[i].guard_func(ctx) &&
+                decoders[i].decode_func(ctx, opcode32)) {
+                return;
+            }
         }
     }
+
+    gen_exception_illegal(ctx);
 }
 
 static void riscv_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
--
2.34.1

From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>

Refactor the non SEW-specific stuff out of `GEN_OPIVV_TRANS` into
function `opivv_trans` (similar to `opivi_trans`). `opivv_trans` will be
used in proceeding vector-crypto commits.

Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
Message-ID: <20230711165917.2629866-3-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvv.c.inc | 62 +++++++++++++------------
 1 file changed, 32 insertions(+), 30 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -XXX,XX +XXX,XX @@ GEN_OPIWX_WIDEN_TRANS(vwadd_wx)
 GEN_OPIWX_WIDEN_TRANS(vwsubu_wx)
 GEN_OPIWX_WIDEN_TRANS(vwsub_wx)
 
+static bool opivv_trans(uint32_t vd, uint32_t vs1, uint32_t vs2, uint32_t vm,
+                        gen_helper_gvec_4_ptr *fn, DisasContext *s)
+{
+    uint32_t data = 0;
+    TCGLabel *over = gen_new_label();
+    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
+    tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
+
+    data = FIELD_DP32(data, VDATA, VM, vm);
+    data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
+    data = FIELD_DP32(data, VDATA, VTA, s->vta);
+    data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);
+    data = FIELD_DP32(data, VDATA, VMA, s->vma);
+    tcg_gen_gvec_4_ptr(vreg_ofs(s, vd), vreg_ofs(s, 0), vreg_ofs(s, vs1),
+                       vreg_ofs(s, vs2), cpu_env, s->cfg_ptr->vlen / 8,
+                       s->cfg_ptr->vlen / 8, data, fn);
+    mark_vs_dirty(s);
+    gen_set_label(over);
+    return true;
+}
+
 /* Vector Integer Add-with-Carry / Subtract-with-Borrow Instructions */
 /* OPIVV without GVEC IR */
-#define GEN_OPIVV_TRANS(NAME, CHECK) \
-static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
-{ \
-    if (CHECK(s, a)) { \
-        uint32_t data = 0; \
-        static gen_helper_gvec_4_ptr * const fns[4] = { \
-            gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
-            gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
-        }; \
-        TCGLabel *over = gen_new_label(); \
-        tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
-        tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
-        \
-        data = FIELD_DP32(data, VDATA, VM, a->vm); \
-        data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
-        data = FIELD_DP32(data, VDATA, VTA, s->vta); \
-        data = \
-            FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);\
-        data = FIELD_DP32(data, VDATA, VMA, s->vma); \
-        tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
-                           vreg_ofs(s, a->rs1), \
-                           vreg_ofs(s, a->rs2), cpu_env, \
-                           s->cfg_ptr->vlen / 8, \
-                           s->cfg_ptr->vlen / 8, data, \
-                           fns[s->sew]); \
-        mark_vs_dirty(s); \
-        gen_set_label(over); \
-        return true; \
-    } \
-    return false; \
+#define GEN_OPIVV_TRANS(NAME, CHECK) \
+static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+{ \
+    if (CHECK(s, a)) { \
+        static gen_helper_gvec_4_ptr * const fns[4] = { \
+            gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
+            gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
+        }; \
+        return opivv_trans(a->rd, a->rs1, a->rs2, a->vm, fns[s->sew], s);\
+    } \
+    return false; \
 }
 
 /*
--
2.41.0
New patch

From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>

Remove the redundant "vl == 0" check which is already included within the vstart >= vl check, when vl == 0.

Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20230711165917.2629866-4-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvv.c.inc | 31 +------------------------
 1 file changed, 1 insertion(+), 30 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -XXX,XX +XXX,XX @@ static bool ldst_us_trans(uint32_t vd, uint32_t rs1, uint32_t data,
     TCGv_i32 desc;
 
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
     dest = tcg_temp_new_ptr();
@@ -XXX,XX +XXX,XX @@ static bool ldst_stride_trans(uint32_t vd, uint32_t rs1, uint32_t rs2,
     TCGv_i32 desc;
 
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
     dest = tcg_temp_new_ptr();
@@ -XXX,XX +XXX,XX @@ static bool ldst_index_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
     TCGv_i32 desc;
 
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
     dest = tcg_temp_new_ptr();
@@ -XXX,XX +XXX,XX @@ static bool ldff_trans(uint32_t vd, uint32_t rs1, uint32_t data,
     TCGv_i32 desc;
 
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
     dest = tcg_temp_new_ptr();
@@ -XXX,XX +XXX,XX @@ do_opivv_gvec(DisasContext *s, arg_rmrr *a, GVecGen3Fn *gvec_fn,
         return false;
     }
 
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
     if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
@@ -XXX,XX +XXX,XX @@ static bool opivx_trans(uint32_t vd, uint32_t rs1, uint32_t vs2, uint32_t vm,
     uint32_t data = 0;
 
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
     dest = tcg_temp_new_ptr();
@@ -XXX,XX +XXX,XX @@ static bool opivi_trans(uint32_t vd, uint32_t imm, uint32_t vs2, uint32_t vm,
     uint32_t data = 0;
 
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
     dest = tcg_temp_new_ptr();
@@ -XXX,XX +XXX,XX @@ static bool do_opivv_widen(DisasContext *s, arg_rmrr *a,
     if (checkfn(s, a)) {
         uint32_t data = 0;
         TCGLabel *over = gen_new_label();
-        tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
         tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
         data = FIELD_DP32(data, VDATA, VM, a->vm);
@@ -XXX,XX +XXX,XX @@ static bool do_opiwv_widen(DisasContext *s, arg_rmrr *a,
     if (opiwv_widen_check(s, a)) {
         uint32_t data = 0;
         TCGLabel *over = gen_new_label();
-        tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
         tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
         data = FIELD_DP32(data, VDATA, VM, a->vm);
@@ -XXX,XX +XXX,XX @@ static bool opivv_trans(uint32_t vd, uint32_t vs1, uint32_t vs2, uint32_t vm,
 {
     uint32_t data = 0;
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
     data = FIELD_DP32(data, VDATA, VM, vm);
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
         gen_helper_##NAME##_w, \
     }; \
     TCGLabel *over = gen_new_label(); \
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
     \
     data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_v(DisasContext *s, arg_vmv_v_v *a)
         gen_helper_vmv_v_v_w, gen_helper_vmv_v_v_d,
     };
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
     tcg_gen_gvec_2_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_x(DisasContext *s, arg_vmv_v_x *a)
         vext_check_ss(s, a->rd, 0, 1)) {
         TCGv s1;
         TCGLabel *over = gen_new_label();
-        tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
         tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
         s1 = get_gpr(s, a->rs1, EXT_SIGN);
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_i(DisasContext *s, arg_vmv_v_i *a)
         gen_helper_vmv_v_x_w, gen_helper_vmv_v_x_d,
     };
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
     s1 = tcg_constant_i64(simm);
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
     }; \
     TCGLabel *over = gen_new_label(); \
     gen_set_rm(s, RISCV_FRM_DYN); \
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
     \
     data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -XXX,XX +XXX,XX @@ static bool opfvf_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
     TCGv_i64 t1;
 
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
     dest = tcg_temp_new_ptr();
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
     }; \
     TCGLabel *over = gen_new_label(); \
     gen_set_rm(s, RISCV_FRM_DYN); \
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);\
     \
     data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
     }; \
     TCGLabel *over = gen_new_label(); \
     gen_set_rm(s, RISCV_FRM_DYN); \
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
     \
     data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -XXX,XX +XXX,XX @@ static bool do_opfv(DisasContext *s, arg_rmr *a,
     uint32_t data = 0;
     TCGLabel *over = gen_new_label();
     gen_set_rm_chkfrm(s, rm);
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
     data = FIELD_DP32(data, VDATA, VM, a->vm);
@@ -XXX,XX +XXX,XX @@ static bool trans_vfmv_v_f(DisasContext *s, arg_vfmv_v_f *a)
         gen_helper_vmv_v_x_d,
     };
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
     t1 = tcg_temp_new_i64();
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
     }; \
     TCGLabel *over = gen_new_label(); \
     gen_set_rm_chkfrm(s, FRM); \
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
     \
     data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
     }; \
     TCGLabel *over = gen_new_label(); \
     gen_set_rm(s, RISCV_FRM_DYN); \
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
     \
     data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
     }; \
     TCGLabel *over = gen_new_label(); \
     gen_set_rm_chkfrm(s, FRM); \
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
     \
     data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
     }; \
     TCGLabel *over = gen_new_label(); \
     gen_set_rm_chkfrm(s, FRM); \
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
     \
     data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_r *a) \
     uint32_t data = 0; \
     gen_helper_gvec_4_ptr *fn = gen_helper_##NAME; \
     TCGLabel *over = gen_new_label(); \
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
     \
     data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
@@ -XXX,XX +XXX,XX @@ static bool trans_vid_v(DisasContext *s, arg_vid_v *a)
         require_vm(a->vm, a->rd)) {
         uint32_t data = 0;
         TCGLabel *over = gen_new_label();
-        tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
         tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
         data = FIELD_DP32(data, VDATA, VM, a->vm);
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_s_x(DisasContext *s, arg_vmv_s_x *a)
     TCGv s1;
     TCGLabel *over = gen_new_label();
 
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
     t1 = tcg_temp_new_i64();
@@ -XXX,XX +XXX,XX @@ static bool trans_vfmv_s_f(DisasContext *s, arg_vfmv_s_f *a)
     TCGv_i64 t1;
     TCGLabel *over = gen_new_label();
 
-    /* if vl == 0 or vstart >= vl, skip vector register write back */
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
+    /* if vstart >= vl, skip vector register write back */
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
     /* NaN-box f[rs1] */
@@ -XXX,XX +XXX,XX @@ static bool int_ext_op(DisasContext *s, arg_rmr *a, uint8_t seq)
     uint32_t data = 0;
     gen_helper_gvec_3_ptr *fn;
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
     static gen_helper_gvec_3_ptr * const fns[6][4] = {
--
2.41.0
From: Philipp Tomsich <philipp.tomsich@vrull.eu>

This adds the decoder and translation for the XVentanaCondOps custom
extension (vendor-defined by Ventana Micro Systems), which is
documented at https://github.com/ventanamicro/ventana-custom-extensions/releases/download/v1.0.0/ventana-custom-extensions-v1.0.0.pdf

This commit then also adds a guard-function (has_XVentanaCondOps_p)
and the decoder function to the table of decoders, enabling the
support for the XVentanaCondOps extension.

Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220202005249.3566542-7-philipp.tomsich@vrull.eu>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.h | 3 ++
 target/riscv/XVentanaCondOps.decode | 25 ++++++++++++
 target/riscv/cpu.c | 3 ++
 target/riscv/translate.c | 12 ++++++
 .../insn_trans/trans_xventanacondops.c.inc | 39 +++++++++++++++++++
 target/riscv/meson.build | 1 +
 6 files changed, 83 insertions(+)
 create mode 100644 target/riscv/XVentanaCondOps.decode
 create mode 100644 target/riscv/insn_trans/trans_xventanacondops.c.inc

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
     bool ext_zve32f;
     bool ext_zve64f;
 
+    /* Vendor-specific custom extensions */
+    bool ext_XVentanaCondOps;
+
     char *priv_spec;
     char *user_spec;
     char *bext_spec;
diff --git a/target/riscv/XVentanaCondOps.decode b/target/riscv/XVentanaCondOps.decode
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/riscv/XVentanaCondOps.decode
@@ -XXX,XX +XXX,XX @@
+#
+# RISC-V translation routines for the XVentanaCondOps extension
+#
+# Copyright (c) 2022 Dr. Philipp Tomsich, philipp.tomsich@vrull.eu
+#
+# SPDX-License-Identifier: LGPL-2.1-or-later
+#
+# Reference: VTx-family custom instructions
+#            Custom ISA extensions for Ventana Micro Systems RISC-V cores
+#            (https://github.com/ventanamicro/ventana-custom-extensions/releases/download/v1.0.0/ventana-custom-extensions-v1.0.0.pdf)
+
+# Fields
+%rs2  20:5
+%rs1  15:5
+%rd   7:5
+
+# Argument sets
+&r    rd rs1 rs2  !extern
+
+# Formats
+@r    ....... ..... ..... ... ..... .......  &r %rs2 %rs1 %rd
+
+# *** RV64 Custom-3 Extension ***
+vt_maskc   0000000 ..... ..... 110 ..... 1111011 @r
+vt_maskcn  0000000 ..... ..... 111 ..... 1111011 @r
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_properties[] = {
     DEFINE_PROP_BOOL("zbc", RISCVCPU, cfg.ext_zbc, true),
     DEFINE_PROP_BOOL("zbs", RISCVCPU, cfg.ext_zbs, true),
 
+    /* Vendor-specific custom extensions */
+    DEFINE_PROP_BOOL("xventanacondops", RISCVCPU, cfg.ext_XVentanaCondOps, false),
+
     /* These are experimental so mark with 'x-' */
     DEFINE_PROP_BOOL("x-j", RISCVCPU, cfg.ext_j, false),
     /* ePMP 0.9.3 */
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@ static bool always_true_p(DisasContext *ctx __attribute__((__unused__)))
     return true;
 }
 
+#define MATERIALISE_EXT_PREDICATE(ext) \
+    static bool has_ ## ext ## _p(DisasContext *ctx) \
+    { \
+        return ctx->cfg_ptr->ext_ ## ext ; \
+    }
+
+MATERIALISE_EXT_PREDICATE(XVentanaCondOps);
+
 #ifdef TARGET_RISCV32
 #define get_xl(ctx) MXL_RV32
 #elif defined(CONFIG_USER_ONLY)
@@ -XXX,XX +XXX,XX @@ static uint32_t opcode_at(DisasContextBase *dcbase, target_ulong pc)
 #include "insn_trans/trans_rvb.c.inc"
 #include "insn_trans/trans_rvzfh.c.inc"
 #include "insn_trans/trans_privileged.c.inc"
+#include "insn_trans/trans_xventanacondops.c.inc"
 
 /* Include the auto-generated decoder for 16 bit insn */
 #include "decode-insn16.c.inc"
+/* Include decoders for factored-out extensions */
+#include "decode-XVentanaCondOps.c.inc"
 
 static void decode_opc(CPURISCVState *env, DisasContext *ctx, uint16_t opcode)
 {
@@ -XXX,XX +XXX,XX @@ static void decode_opc(CPURISCVState *env, DisasContext *ctx, uint16_t opcode)
         bool (*decode_func)(DisasContext *, uint32_t);
     } decoders[] = {
         { always_true_p, decode_insn32 },
+        { has_XVentanaCondOps_p, decode_XVentanaCodeOps },
     };
 
     /* Check for compressed insn */
diff --git a/target/riscv/insn_trans/trans_xventanacondops.c.inc b/target/riscv/insn_trans/trans_xventanacondops.c.inc
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/riscv/insn_trans/trans_xventanacondops.c.inc
@@ -XXX,XX +XXX,XX @@
+/*
+ * RISC-V translation routines for the XVentanaCondOps extension.
+ *
+ * Copyright (c) 2021-2022 VRULL GmbH.
+ *
+ * This program is free software; you can redistribute it and/or modify it

From: Lawrence Hunter <lawrence.hunter@codethink.co.uk>

This commit adds support for the Zvbc vector-crypto extension, which
consists of the following instructions:

* vclmulh.[vx,vv]
* vclmul.[vx,vv]

Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.

Co-authored-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Co-authored-by: Max Chou <max.chou@sifive.com>
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
Signed-off-by: Max Chou <max.chou@sifive.com>
[max.chou@sifive.com: Exposed x-zvbc property]
Message-ID: <20230711165917.2629866-5-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_cfg.h | 1 +
 target/riscv/helper.h | 6 +++
 target/riscv/insn32.decode | 6 +++
 target/riscv/cpu.c | 9 ++++
 target/riscv/translate.c | 1 +
 target/riscv/vcrypto_helper.c | 59 ++++++++++++++++++++++
 target/riscv/insn_trans/trans_rvvk.c.inc | 62 ++++++++++++++++++++++++
 target/riscv/meson.build | 3 +-
 8 files changed, 146 insertions(+), 1 deletion(-)
 create mode 100644 target/riscv/vcrypto_helper.c
 create mode 100644 target/riscv/insn_trans/trans_rvvk.c.inc

diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
     bool ext_zve32f;
     bool ext_zve64f;
     bool ext_zve64d;
+    bool ext_zvbc;
     bool ext_zmmul;
     bool ext_zvfbfmin;
     bool ext_zvfbfwma;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_5(vfwcvtbf16_f_f_v, void, ptr, ptr, ptr, env, i32)
 
 DEF_HELPER_6(vfwmaccbf16_vv, void, ptr, ptr, ptr, ptr, env, i32)
 DEF_HELPER_6(vfwmaccbf16_vf, void, ptr, ptr, i64, ptr, env, i32)
+
+/* Vector crypto functions */
+DEF_HELPER_6(vclmul_vv, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vclmul_vx, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vclmulh_vv, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vclmulh_vx, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -XXX,XX +XXX,XX @@ vfwcvtbf16_f_f_v 010010 . ..... 01101 001 ..... 1010111 @r2_vm
 # *** Zvfbfwma Standard Extension ***
 vfwmaccbf16_vv 111011 . ..... ..... 001 ..... 1010111 @r_vm
 vfwmaccbf16_vf 111011 . ..... ..... 101 ..... 1010111 @r_vm
+
+# *** Zvbc vector crypto extension ***
+vclmul_vv 001100 . ..... ..... 010 ..... 1010111 @r_vm
+vclmul_vx 001100 . ..... ..... 110 ..... 1010111 @r_vm
+vclmulh_vv 001101 . ..... ..... 010 ..... 1010111 @r_vm
+vclmulh_vx 001101 . ..... ..... 110 ..... 1010111 @r_vm
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
     ISA_EXT_DATA_ENTRY(zksed, PRIV_VERSION_1_12_0, ext_zksed),
     ISA_EXT_DATA_ENTRY(zksh, PRIV_VERSION_1_12_0, ext_zksh),
     ISA_EXT_DATA_ENTRY(zkt, PRIV_VERSION_1_12_0, ext_zkt),
+    ISA_EXT_DATA_ENTRY(zvbc, PRIV_VERSION_1_12_0, ext_zvbc),
     ISA_EXT_DATA_ENTRY(zve32f, PRIV_VERSION_1_10_0, ext_zve32f),
     ISA_EXT_DATA_ENTRY(zve64f, PRIV_VERSION_1_10_0, ext_zve64f),
     ISA_EXT_DATA_ENTRY(zve64d, PRIV_VERSION_1_10_0, ext_zve64d),
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
         return;
     }
 
+    if (cpu->cfg.ext_zvbc && !cpu->cfg.ext_zve64f) {
+        error_setg(errp, "Zvbc extension requires V or Zve64{f,d} extensions");
+        return;
+    }
+
     if (cpu->cfg.ext_zk) {
         cpu->cfg.ext_zkn = true;
         cpu->cfg.ext_zkr = true;
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
     DEFINE_PROP_BOOL("x-zvfbfmin", RISCVCPU, cfg.ext_zvfbfmin, false),
     DEFINE_PROP_BOOL("x-zvfbfwma", RISCVCPU, cfg.ext_zvfbfwma, false),
 
+    /* Vector cryptography extensions */
+    DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
+
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@ static uint32_t opcode_at(DisasContextBase *dcbase, target_ulong pc)
 #include "insn_trans/trans_rvzfa.c.inc"
 #include "insn_trans/trans_rvzfh.c.inc"
 #include "insn_trans/trans_rvk.c.inc"
+#include "insn_trans/trans_rvvk.c.inc"
 #include "insn_trans/trans_privileged.c.inc"
 #include "insn_trans/trans_svinval.c.inc"
 #include "insn_trans/trans_rvbf16.c.inc"
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/riscv/vcrypto_helper.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * RISC-V Vector Crypto Extension Helpers for QEMU.
+ *
+ * Copyright (C) 2023 SiFive, Inc.
+ * Written by Codethink Ltd and SiFive.
+ *
+ * This program is free software; you can redistribute it and/or modify it
138
+ * under the terms and conditions of the GNU General Public License,
133
+ * under the terms and conditions of the GNU General Public License,
139
+ * version 2 or later, as published by the Free Software Foundation.
134
+ * version 2 or later, as published by the Free Software Foundation.
140
+ *
135
+ *
...
...
145
+ *
140
+ *
146
+ * You should have received a copy of the GNU General Public License along with
141
+ * You should have received a copy of the GNU General Public License along with
147
+ * this program. If not, see <http://www.gnu.org/licenses/>.
142
+ * this program. If not, see <http://www.gnu.org/licenses/>.
148
+ */
143
+ */
149
+
144
+
150
+static bool gen_vt_condmask(DisasContext *ctx, arg_r *a, TCGCond cond)
145
+#include "qemu/osdep.h"
151
+{
146
+#include "qemu/host-utils.h"
152
+ TCGv dest = dest_gpr(ctx, a->rd);
147
+#include "qemu/bitops.h"
153
+ TCGv src1 = get_gpr(ctx, a->rs1, EXT_NONE);
148
+#include "cpu.h"
154
+ TCGv src2 = get_gpr(ctx, a->rs2, EXT_NONE);
149
+#include "exec/memop.h"
155
+
150
+#include "exec/exec-all.h"
156
+ tcg_gen_movcond_tl(cond, dest, src2, ctx->zero, src1, ctx->zero);
151
+#include "exec/helper-proto.h"
157
+
152
+#include "internals.h"
158
+ gen_set_gpr(ctx, a->rd, dest);
153
+#include "vector_internals.h"
159
+ return true;
154
+
160
+}
155
+static uint64_t clmul64(uint64_t y, uint64_t x)
161
+
156
+{
162
+static bool trans_vt_maskc(DisasContext *ctx, arg_r *a)
157
+ uint64_t result = 0;
163
+{
158
+ for (int j = 63; j >= 0; j--) {
164
+ return gen_vt_condmask(ctx, a, TCG_COND_NE);
159
+ if ((y >> j) & 1) {
165
+}
160
+ result ^= (x << j);
166
+
161
+ }
167
+static bool trans_vt_maskcn(DisasContext *ctx, arg_r *a)
162
+ }
168
+{
163
+ return result;
169
+ return gen_vt_condmask(ctx, a, TCG_COND_EQ);
164
+}
170
+}
165
+
166
+static uint64_t clmulh64(uint64_t y, uint64_t x)
167
+{
168
+ uint64_t result = 0;
169
+ for (int j = 63; j >= 1; j--) {
170
+ if ((y >> j) & 1) {
171
+ result ^= (x >> (64 - j));
172
+ }
173
+ }
174
+ return result;
175
+}
176
+
177
+RVVCALL(OPIVV2, vclmul_vv, OP_UUU_D, H8, H8, H8, clmul64)
178
+GEN_VEXT_VV(vclmul_vv, 8)
179
+RVVCALL(OPIVX2, vclmul_vx, OP_UUU_D, H8, H8, clmul64)
180
+GEN_VEXT_VX(vclmul_vx, 8)
181
+RVVCALL(OPIVV2, vclmulh_vv, OP_UUU_D, H8, H8, H8, clmulh64)
182
+GEN_VEXT_VV(vclmulh_vv, 8)
183
+RVVCALL(OPIVX2, vclmulh_vx, OP_UUU_D, H8, H8, clmulh64)
184
+GEN_VEXT_VX(vclmulh_vx, 8)
185
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
186
new file mode 100644
187
index XXXXXXX..XXXXXXX
188
--- /dev/null
189
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
190
@@ -XXX,XX +XXX,XX @@
191
+/*
192
+ * RISC-V translation routines for the vector crypto extension.
193
+ *
194
+ * Copyright (C) 2023 SiFive, Inc.
195
+ * Written by Codethink Ltd and SiFive.
196
+ *
197
+ * This program is free software; you can redistribute it and/or modify it
198
+ * under the terms and conditions of the GNU General Public License,
199
+ * version 2 or later, as published by the Free Software Foundation.
200
+ *
201
+ * This program is distributed in the hope it will be useful, but WITHOUT
202
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
203
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
204
+ * more details.
205
+ *
206
+ * You should have received a copy of the GNU General Public License along with
207
+ * this program. If not, see <http://www.gnu.org/licenses/>.
208
+ */
209
+
210
+/*
211
+ * Zvbc
212
+ */
213
+
214
+#define GEN_VV_MASKED_TRANS(NAME, CHECK) \
215
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
216
+ { \
217
+ if (CHECK(s, a)) { \
218
+ return opivv_trans(a->rd, a->rs1, a->rs2, a->vm, \
219
+ gen_helper_##NAME, s); \
220
+ } \
221
+ return false; \
222
+ }
223
+
224
+static bool vclmul_vv_check(DisasContext *s, arg_rmrr *a)
225
+{
226
+ return opivv_check(s, a) &&
227
+ s->cfg_ptr->ext_zvbc == true &&
228
+ s->sew == MO_64;
229
+}
230
+
231
+GEN_VV_MASKED_TRANS(vclmul_vv, vclmul_vv_check)
232
+GEN_VV_MASKED_TRANS(vclmulh_vv, vclmul_vv_check)
233
+
234
+#define GEN_VX_MASKED_TRANS(NAME, CHECK) \
235
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
236
+ { \
237
+ if (CHECK(s, a)) { \
238
+ return opivx_trans(a->rd, a->rs1, a->rs2, a->vm, \
239
+ gen_helper_##NAME, s); \
240
+ } \
241
+ return false; \
242
+ }
243
+
244
+static bool vclmul_vx_check(DisasContext *s, arg_rmrr *a)
245
+{
246
+ return opivx_check(s, a) &&
247
+ s->cfg_ptr->ext_zvbc == true &&
248
+ s->sew == MO_64;
249
+}
250
+
251
+GEN_VX_MASKED_TRANS(vclmul_vx, vclmul_vx_check)
252
+GEN_VX_MASKED_TRANS(vclmulh_vx, vclmul_vx_check)
171
diff --git a/target/riscv/meson.build b/target/riscv/meson.build
253
diff --git a/target/riscv/meson.build b/target/riscv/meson.build
172
index XXXXXXX..XXXXXXX 100644
254
index XXXXXXX..XXXXXXX 100644
173
--- a/target/riscv/meson.build
255
--- a/target/riscv/meson.build
174
+++ b/target/riscv/meson.build
256
+++ b/target/riscv/meson.build
175
@@ -XXX,XX +XXX,XX @@ dir = meson.current_source_dir()
257
@@ -XXX,XX +XXX,XX @@ riscv_ss.add(files(
176
gen = [
258
'translate.c',
177
decodetree.process('insn16.decode', extra_args: ['--static-decode=decode_insn16', '--insnwidth=16']),
259
'm128_helper.c',
178
decodetree.process('insn32.decode', extra_args: '--static-decode=decode_insn32'),
260
'crypto_helper.c',
179
+ decodetree.process('XVentanaCondOps.decode', extra_args: '--static-decode=decode_XVentanaCodeOps'),
261
- 'zce_helper.c'
180
]
262
+ 'zce_helper.c',
181
263
+ 'vcrypto_helper.c'
182
riscv_ss = ss.source_set()
264
))
265
riscv_ss.add(when: 'CONFIG_KVM', if_true: files('kvm.c'), if_false: files('kvm-stub.c'))
266
183
--
267
--
184
2.34.1
268
2.41.0
185
186
diff view generated by jsdifflib
From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>

Move the checks out of `do_opiv{v,x,i}_gvec{,_shift}` functions
and into the corresponding macros. This enables the functions to be
reused in subsequent commits without check duplication.

Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
Message-ID: <20230711165917.2629866-6-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvv.c.inc | 28 +++++++++++--------------
 1 file changed, 12 insertions(+), 16 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -XXX,XX +XXX,XX @@ do_opivv_gvec(DisasContext *s, arg_rmrr *a, GVecGen3Fn *gvec_fn,
               gen_helper_gvec_4_ptr *fn)
 {
     TCGLabel *over = gen_new_label();
-    if (!opivv_check(s, a)) {
-        return false;
-    }
 
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
 
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a)           \
         gen_helper_##NAME##_b, gen_helper_##NAME##_h,                \
         gen_helper_##NAME##_w, gen_helper_##NAME##_d,                \
     };                                                               \
+    if (!opivv_check(s, a)) {                                        \
+        return false;                                                \
+    }                                                                \
     return do_opivv_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]);     \
 }
 
@@ -XXX,XX +XXX,XX @@ static inline bool
 do_opivx_gvec(DisasContext *s, arg_rmrr *a, GVecGen2sFn *gvec_fn,
               gen_helper_opivx *fn)
 {
-    if (!opivx_check(s, a)) {
-        return false;
-    }
-
     if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
         TCGv_i64 src1 = tcg_temp_new_i64();
 
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a)           \
         gen_helper_##NAME##_b, gen_helper_##NAME##_h,                \
         gen_helper_##NAME##_w, gen_helper_##NAME##_d,                \
     };                                                               \
+    if (!opivx_check(s, a)) {                                        \
+        return false;                                                \
+    }                                                                \
     return do_opivx_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]);     \
 }
 
@@ -XXX,XX +XXX,XX @@ static inline bool
 do_opivi_gvec(DisasContext *s, arg_rmrr *a, GVecGen2iFn *gvec_fn,
              gen_helper_opivx *fn, imm_mode_t imm_mode)
 {
-    if (!opivx_check(s, a)) {
-        return false;
-    }
-
     if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
         gvec_fn(s->sew, vreg_ofs(s, a->rd), vreg_ofs(s, a->rs2),
                 extract_imm(s, a->rs1, imm_mode), MAXSZ(s), MAXSZ(s));
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a)           \
         gen_helper_##OPIVX##_b, gen_helper_##OPIVX##_h,              \
         gen_helper_##OPIVX##_w, gen_helper_##OPIVX##_d,              \
     };                                                               \
+    if (!opivx_check(s, a)) {                                        \
+        return false;                                                \
+    }                                                                \
     return do_opivi_gvec(s, a, tcg_gen_gvec_##SUF,                   \
                          fns[s->sew], IMM_MODE);                     \
 }
 
@@ -XXX,XX +XXX,XX @@ static inline bool
 do_opivx_gvec_shift(DisasContext *s, arg_rmrr *a, GVecGen2sFn32 *gvec_fn,
                     gen_helper_opivx *fn)
 {
-    if (!opivx_check(s, a)) {
-        return false;
-    }
-
     if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
         TCGv_i32 src1 = tcg_temp_new_i32();
 
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a)             \
         gen_helper_##NAME##_b, gen_helper_##NAME##_h,                  \
         gen_helper_##NAME##_w, gen_helper_##NAME##_d,                  \
     };                                                                 \
-                                                                       \
+    if (!opivx_check(s, a)) {                                          \
+        return false;                                                  \
+    }                                                                  \
     return do_opivx_gvec_shift(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
 }
 

From: Philipp Tomsich <philipp.tomsich@vrull.eu>

The implementation in trans_{rvi,rvv,rvzfh}.c.inc accesses the shallow
copies (in DisasContext) of some of the elements available in the
RISCVCPUConfig structure. This commit redirects accesses to use the
cfg_ptr copied into DisasContext and removes the shallow copies.

Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220202005249.3566542-4-philipp.tomsich@vrull.eu>
[ Changes by AF:
 - Fixup checkpatch failures
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/translate.c                  |  14 ---
 target/riscv/insn_trans/trans_rvi.c.inc   |   2 +-
 target/riscv/insn_trans/trans_rvv.c.inc   | 146 ++++++++++++++--------
 target/riscv/insn_trans/trans_rvzfh.c.inc |   4 +-
 4 files changed, 97 insertions(+), 69 deletions(-)

diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
     RISCVMXL ol;
     bool virt_enabled;
     const RISCVCPUConfig *cfg_ptr;
-    bool ext_ifencei;
-    bool ext_zfh;
-    bool ext_zfhmin;
-    bool ext_zve32f;
-    bool ext_zve64f;
     bool hlsx;
     /* vector extension */
     bool vill;
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
      */
     int8_t lmul;
     uint8_t sew;
-    uint16_t vlen;
-    uint16_t elen;
     target_ulong vstart;
     bool vl_eq_vlmax;
     uint8_t ntemp;
@@ -XXX,XX +XXX,XX @@ static void riscv_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
     ctx->misa_ext = env->misa_ext;
     ctx->frm = -1;  /* unknown rounding mode */
     ctx->cfg_ptr = &(cpu->cfg);
-    ctx->ext_ifencei = cpu->cfg.ext_ifencei;
-    ctx->ext_zfh = cpu->cfg.ext_zfh;
-    ctx->ext_zfhmin = cpu->cfg.ext_zfhmin;
-    ctx->ext_zve32f = cpu->cfg.ext_zve32f;
-    ctx->ext_zve64f = cpu->cfg.ext_zve64f;
-    ctx->vlen = cpu->cfg.vlen;
-    ctx->elen = cpu->cfg.elen;
     ctx->mstatus_hs_fs = FIELD_EX32(tb_flags, TB_FLAGS, MSTATUS_HS_FS);
     ctx->mstatus_hs_vs = FIELD_EX32(tb_flags, TB_FLAGS, MSTATUS_HS_VS);
     ctx->hlsx = FIELD_EX32(tb_flags, TB_FLAGS, HLSX);
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvi.c.inc
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_fence(DisasContext *ctx, arg_fence *a)
 
 static bool trans_fence_i(DisasContext *ctx, arg_fence_i *a)
 {
-    if (!ctx->ext_ifencei) {
+    if (!ctx->cfg_ptr->ext_ifencei) {
         return false;
     }
 
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -XXX,XX +XXX,XX @@ static bool require_zve32f(DisasContext *s)
 }
 
     /* Zve32f doesn't support FP64. (Section 18.2) */
-    return s->ext_zve32f ? s->sew <= MO_32 : true;
+    return s->cfg_ptr->ext_zve32f ? s->sew <= MO_32 : true;
 }
 
 static bool require_scale_zve32f(DisasContext *s)
@@ -XXX,XX +XXX,XX @@ static bool require_scale_zve32f(DisasContext *s)
 }
 
     /* Zve32f doesn't support FP64. (Section 18.2) */
-    return s->ext_zve64f ? s->sew <= MO_16 : true;
+    return s->cfg_ptr->ext_zve64f ? s->sew <= MO_16 : true;
 }
 
 static bool require_zve64f(DisasContext *s)
@@ -XXX,XX +XXX,XX @@ static bool require_zve64f(DisasContext *s)
 }
 
     /* Zve64f doesn't support FP64. (Section 18.2) */
-    return s->ext_zve64f ? s->sew <= MO_32 : true;
+    return s->cfg_ptr->ext_zve64f ? s->sew <= MO_32 : true;
 }
 
 static bool require_scale_zve64f(DisasContext *s)
@@ -XXX,XX +XXX,XX @@ static bool require_scale_zve64f(DisasContext *s)
 }
 
     /* Zve64f doesn't support FP64. (Section 18.2) */
-    return s->ext_zve64f ? s->sew <= MO_16 : true;
+    return s->cfg_ptr->ext_zve64f ? s->sew <= MO_16 : true;
 }
 
 /* Destination vector register group cannot overlap source mask register. */
@@ -XXX,XX +XXX,XX @@ static bool do_vsetvl(DisasContext *s, int rd, int rs1, TCGv s2)
     TCGv s1, dst;
 
     if (!require_rvv(s) ||
-        !(has_ext(s, RVV) || s->ext_zve32f || s->ext_zve64f)) {
+        !(has_ext(s, RVV) || s->cfg_ptr->ext_zve32f ||
+          s->cfg_ptr->ext_zve64f)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool do_vsetivli(DisasContext *s, int rd, TCGv s1, TCGv s2)
     TCGv dst;
 
     if (!require_rvv(s) ||
-        !(has_ext(s, RVV) || s->ext_zve32f || s->ext_zve64f)) {
+        !(has_ext(s, RVV) || s->cfg_ptr->ext_zve32f ||
+          s->cfg_ptr->ext_zve64f)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_vsetivli(DisasContext *s, arg_vsetivli *a)
 /* vector register offset from env */
 static uint32_t vreg_ofs(DisasContext *s, int reg)
 {
-    return offsetof(CPURISCVState, vreg) + reg * s->vlen / 8;
+    return offsetof(CPURISCVState, vreg) + reg * s->cfg_ptr->vlen / 8;
 }
 
 /* check functions */
@@ -XXX,XX +XXX,XX @@ static bool vext_check_st_index(DisasContext *s, int vd, int vs2, int nf,
      * when XLEN=32. (Section 18.2)
      */
     if (get_xl(s) == MXL_RV32) {
-        ret &= (!has_ext(s, RVV) && s->ext_zve64f ? eew != MO_64 : true);
+        ret &= (!has_ext(s, RVV) &&
+                s->cfg_ptr->ext_zve64f ? eew != MO_64 : true);
     }
 
     return ret;
@@ -XXX,XX +XXX,XX @@ static bool vext_wide_check_common(DisasContext *s, int vd, int vm)
 {
     return (s->lmul <= 2) &&
            (s->sew < MO_64) &&
-           ((s->sew + 1) <= (s->elen >> 4)) &&
+           ((s->sew + 1) <= (s->cfg_ptr->elen >> 4)) &&
            require_align(vd, s->lmul + 1) &&
            require_vm(vm, vd);
 }
@@ -XXX,XX +XXX,XX @@ static bool vext_narrow_check_common(DisasContext *s, int vd, int vs2,
 {
     return (s->lmul <= 2) &&
            (s->sew < MO_64) &&
-           ((s->sew + 1) <= (s->elen >> 4)) &&
+           ((s->sew + 1) <= (s->cfg_ptr->elen >> 4)) &&
            require_align(vs2, s->lmul + 1) &&
            require_align(vd, s->lmul) &&
            require_vm(vm, vd);
@@ -XXX,XX +XXX,XX @@ static bool ldst_us_trans(uint32_t vd, uint32_t rs1, uint32_t data,
      * The first part is vlen in bytes, encoded in maxsz of simd_desc.
      * The second part is lmul, encoded in data of simd_desc.
      */
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));
 
     tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
     tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
@@ -XXX,XX +XXX,XX @@ static bool ldst_stride_trans(uint32_t vd, uint32_t rs1, uint32_t rs2,
     mask = tcg_temp_new_ptr();
     base = get_gpr(s, rs1, EXT_NONE);
     stride = get_gpr(s, rs2, EXT_NONE);
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));
 
     tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
     tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
@@ -XXX,XX +XXX,XX @@ static bool ldst_index_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
     mask = tcg_temp_new_ptr();
     index = tcg_temp_new_ptr();
     base = get_gpr(s, rs1, EXT_NONE);
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));
 
     tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
     tcg_gen_addi_ptr(index, cpu_env, vreg_ofs(s, vs2));
@@ -XXX,XX +XXX,XX @@ static bool ldff_trans(uint32_t vd, uint32_t rs1, uint32_t data,
     dest = tcg_temp_new_ptr();
     mask = tcg_temp_new_ptr();
     base = get_gpr(s, rs1, EXT_NONE);
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));
 
     tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
     tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
@@ -XXX,XX +XXX,XX @@ static bool ldst_whole_trans(uint32_t vd, uint32_t rs1, uint32_t nf,
 
     uint32_t data = FIELD_DP32(0, VDATA, NF, nf);
     dest = tcg_temp_new_ptr();
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));
 
     base = get_gpr(s, rs1, EXT_NONE);
     tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
@@ -XXX,XX +XXX,XX @@ GEN_LDST_WHOLE_TRANS(vs8r_v, 8, true)
 static inline uint32_t MAXSZ(DisasContext *s)
 {
     int scale = s->lmul - 3;
-    return scale < 0 ? s->vlen >> -scale : s->vlen << scale;
+    return scale < 0 ? s->cfg_ptr->vlen >> -scale : s->cfg_ptr->vlen << scale;
 }
 
 static bool opivv_check(DisasContext *s, arg_rmrr *a)
@@ -XXX,XX +XXX,XX @@ do_opivv_gvec(DisasContext *s, arg_rmrr *a, GVecGen3Fn *gvec_fn,
         data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
                            vreg_ofs(s, a->rs1), vreg_ofs(s, a->rs2),
-                           cpu_env, s->vlen / 8, s->vlen / 8, data, fn);
+                           cpu_env, s->cfg_ptr->vlen / 8,
+                           s->cfg_ptr->vlen / 8, data, fn);
     }
     mark_vs_dirty(s);
     gen_set_label(over);
@@ -XXX,XX +XXX,XX @@ static bool opivx_trans(uint32_t vd, uint32_t rs1, uint32_t vs2, uint32_t vm,
 
     data = FIELD_DP32(data, VDATA, VM, vm);
     data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));
 
     tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
     tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, vs2));
@@ -XXX,XX +XXX,XX @@ static bool opivi_trans(uint32_t vd, uint32_t imm, uint32_t vs2, uint32_t vm,
 
     data = FIELD_DP32(data, VDATA, VM, vm);
     data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));
 
     tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
     tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, vs2));
@@ -XXX,XX +XXX,XX @@ static bool do_opivv_widen(DisasContext *s, arg_rmrr *a,
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
                            vreg_ofs(s, a->rs1),
                            vreg_ofs(s, a->rs2),
-                           cpu_env, s->vlen / 8, s->vlen / 8,
+                           cpu_env, s->cfg_ptr->vlen / 8,
+                           s->cfg_ptr->vlen / 8,
                            data, fn);
         mark_vs_dirty(s);
         gen_set_label(over);
@@ -XXX,XX +XXX,XX @@ static bool do_opiwv_widen(DisasContext *s, arg_rmrr *a,
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
                            vreg_ofs(s, a->rs1),
                            vreg_ofs(s, a->rs2),
-                           cpu_env, s->vlen / 8, s->vlen / 8, data, fn);
+                           cpu_env, s->cfg_ptr->vlen / 8,
+                           s->cfg_ptr->vlen / 8, data, fn);
         mark_vs_dirty(s);
         gen_set_label(over);
         return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a)     \
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),         \
                            vreg_ofs(s, a->rs1),                        \
                            vreg_ofs(s, a->rs2), cpu_env,               \
-                           s->vlen / 8, s->vlen / 8, data,             \
+                           s->cfg_ptr->vlen / 8,                       \
+                           s->cfg_ptr->vlen / 8, data,                 \
                            fns[s->sew]);                               \
         mark_vs_dirty(s);                                              \
         gen_set_label(over);                                           \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a)     \
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),         \
                            vreg_ofs(s, a->rs1),                        \
                            vreg_ofs(s, a->rs2), cpu_env,               \
-                           s->vlen / 8, s->vlen / 8, data,             \
+                           s->cfg_ptr->vlen / 8,                       \
+                           s->cfg_ptr->vlen / 8, data,                 \
                            fns[s->sew]);                               \
         mark_vs_dirty(s);                                              \
         gen_set_label(over);                                           \
@@ -XXX,XX +XXX,XX @@ static bool vmulh_vv_check(DisasContext *s, arg_rmrr *a)
      * are not included for EEW=64 in Zve64*. (Section 18.2)
      */
     return opivv_check(s, a) &&
-           (!has_ext(s, RVV) && s->ext_zve64f ? s->sew != MO_64 : true);
+           (!has_ext(s, RVV) &&
+            s->cfg_ptr->ext_zve64f ? s->sew != MO_64 : true);
 }
 
 static bool vmulh_vx_check(DisasContext *s, arg_rmrr *a)
@@ -XXX,XX +XXX,XX @@ static bool vmulh_vx_check(DisasContext *s, arg_rmrr *a)
      * are not included for EEW=64 in Zve64*. (Section 18.2)
      */
     return opivx_check(s, a) &&
-           (!has_ext(s, RVV) && s->ext_zve64f ? s->sew != MO_64 : true);
+           (!has_ext(s, RVV) &&
+            s->cfg_ptr->ext_zve64f ? s->sew != MO_64 : true);
 }
 
 GEN_OPIVV_GVEC_TRANS(vmul_vv, mul)
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_v(DisasContext *s, arg_vmv_v_v *a)
         tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
 
         tcg_gen_gvec_2_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),
-                           cpu_env, s->vlen / 8, s->vlen / 8, data,
+                           cpu_env, s->cfg_ptr->vlen / 8,
+                           s->cfg_ptr->vlen / 8, data,
                            fns[s->sew]);
         gen_set_label(over);
     }
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_x(DisasContext *s, arg_vmv_v_x *a)
         };
 
         tcg_gen_ext_tl_i64(s1_i64, s1);
-        desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+        desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                          s->cfg_ptr->vlen / 8, data));
         tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, a->rd));
         fns[s->sew](dest, s1_i64, cpu_env, desc);
 
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_i(DisasContext *s, arg_vmv_v_i *a)
 
         s1 = tcg_constant_i64(simm);
         dest = tcg_temp_new_ptr();
-        desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+        desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                          s->cfg_ptr->vlen / 8, data));
         tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, a->rd));
         fns[s->sew](dest, s1, cpu_env, desc);
 
@@ -XXX,XX +XXX,XX @@ static bool vsmul_vv_check(DisasContext *s, arg_rmrr *a)
      * for EEW=64 in Zve64*. (Section 18.2)
      */
     return opivv_check(s, a) &&
-           (!has_ext(s, RVV) && s->ext_zve64f ? s->sew != MO_64 : true);
+           (!has_ext(s, RVV) &&
+            s->cfg_ptr->ext_zve64f ? s->sew != MO_64 : true);
 }
 
 static bool vsmul_vx_check(DisasContext *s, arg_rmrr *a)
@@ -XXX,XX +XXX,XX @@ static bool vsmul_vx_check(DisasContext *s, arg_rmrr *a)
      * for EEW=64 in Zve64*. (Section 18.2)
      */
     return opivx_check(s, a) &&
-           (!has_ext(s, RVV) && s->ext_zve64f ? s->sew != MO_64 : true);
+           (!has_ext(s, RVV) &&
+            s->cfg_ptr->ext_zve64f ? s->sew != MO_64 : true);
 }
 
 GEN_OPIVV_TRANS(vsmul_vv, vsmul_vv_check)
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a)     \
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),         \
                            vreg_ofs(s, a->rs1),                        \
                            vreg_ofs(s, a->rs2), cpu_env,               \
-                           s->vlen / 8, s->vlen / 8, data,             \
+                           s->cfg_ptr->vlen / 8,                       \
+                           s->cfg_ptr->vlen / 8, data,                 \
                            fns[s->sew - 1]);                           \
         mark_vs_dirty(s);                                              \
         gen_set_label(over);                                           \
@@ -XXX,XX +XXX,XX @@ static bool opfvf_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
     dest = tcg_temp_new_ptr();
     mask = tcg_temp_new_ptr();
     src2 = tcg_temp_new_ptr();
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));
 
     tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
     tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, vs2));
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a)     \
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),         \
                            vreg_ofs(s, a->rs1),                        \
                            vreg_ofs(s, a->rs2), cpu_env,               \
-                           s->vlen / 8, s->vlen / 8, data,             \
+                           s->cfg_ptr->vlen / 8,                       \
+                           s->cfg_ptr->vlen / 8, data,                 \
                            fns[s->sew - 1]);                           \
         mark_vs_dirty(s);                                              \
         gen_set_label(over);                                           \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a)     \
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),         \
                            vreg_ofs(s, a->rs1),                        \
                            vreg_ofs(s, a->rs2), cpu_env,               \
-                           s->vlen / 8, s->vlen / 8, data,             \
+                           s->cfg_ptr->vlen / 8,                       \
+                           s->cfg_ptr->vlen / 8, data,                 \
                            fns[s->sew - 1]);                           \
         mark_vs_dirty(s);                                              \
         gen_set_label(over);                                           \
@@ -XXX,XX +XXX,XX @@ static bool do_opfv(DisasContext *s, arg_rmr *a,
     data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
     tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
                        vreg_ofs(s, a->rs2), cpu_env,
-                       s->vlen / 8, s->vlen / 8, data, fn);
+                       s->cfg_ptr->vlen / 8,
+                       s->cfg_ptr->vlen / 8, data, fn);
     mark_vs_dirty(s);
     gen_set_label(over);
     return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_vfmv_v_f(DisasContext *s, arg_vfmv_v_f *a)
         do_nanbox(s, t1, cpu_fpr[a->rs1]);
 
         dest = tcg_temp_new_ptr();
-        desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+        desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                          s->cfg_ptr->vlen / 8, data));
         tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, a->rd));
 
         fns[s->sew - 1](dest, t1, cpu_env, desc);
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a)      \
         data = FIELD_DP32(data, VDATA, LMUL, s->lmul);                 \
         tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),         \
                            vreg_ofs(s, a->rs2), cpu_env,               \
-                           s->vlen / 8, s->vlen / 8, data,             \
+                           s->cfg_ptr->vlen / 8,                       \
+                           s->cfg_ptr->vlen / 8, data,                 \
                            fns[s->sew - 1]);                           \
         mark_vs_dirty(s);                                              \
         gen_set_label(over);                                           \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a)      \
         data = FIELD_DP32(data, VDATA, VM, a->vm);                     \
         tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),         \
                            vreg_ofs(s, a->rs2), cpu_env,               \
-                           s->vlen / 8, s->vlen / 8, data,             \
+                           s->cfg_ptr->vlen / 8,                       \
+                           s->cfg_ptr->vlen / 8, data,                 \
                            fns[s->sew]);                               \
         mark_vs_dirty(s);                                              \
         gen_set_label(over);                                           \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a)      \
         data = FIELD_DP32(data, VDATA, LMUL, s->lmul);                 \
         tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),         \
                            vreg_ofs(s, a->rs2), cpu_env,               \
-                           s->vlen / 8, s->vlen / 8, data,             \
+                           s->cfg_ptr->vlen / 8,                       \
+                           s->cfg_ptr->vlen / 8, data,                 \
                            fns[s->sew - 1]);                           \
         mark_vs_dirty(s);                                              \
         gen_set_label(over);                                           \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a)      \
         data = FIELD_DP32(data, VDATA, VM, a->vm);                     \
         tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),         \
                            vreg_ofs(s, a->rs2), cpu_env,               \
-                           s->vlen / 8, s->vlen / 8, data,             \
+                           s->cfg_ptr->vlen / 8,                       \
+                           s->cfg_ptr->vlen / 8, data,                 \
                            fns[s->sew]);                               \
         mark_vs_dirty(s);                                              \
         gen_set_label(over);                                           \
@@ -XXX,XX +XXX,XX @@ GEN_OPIVV_TRANS(vredxor_vs, reduction_check)
 static bool reduction_widen_check(DisasContext *s, arg_rmrr *a)
 {
     return reduction_check(s, a) && (s->sew < MO_64) &&
-           ((s->sew + 1) <= (s->elen >> 4));
+           ((s->sew + 1) <= (s->cfg_ptr->elen >> 4));
 }
 
 GEN_OPIVV_WIDEN_TRANS(vwredsum_vs, reduction_widen_check)
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_r *a)        \
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),         \
                            vreg_ofs(s, a->rs1),                        \
                            vreg_ofs(s, a->rs2), cpu_env,               \
-                           s->vlen / 8, s->vlen / 8, data, fn);        \
+                           s->cfg_ptr->vlen / 8,                       \
+                           s->cfg_ptr->vlen / 8, data, fn);            \
         mark_vs_dirty(s);                                              \
         gen_set_label(over);                                           \
         return true;                                                   \
@@ -XXX,XX +XXX,XX @@ static bool trans_vcpop_m(DisasContext *s, arg_rmr *a)
     mask = tcg_temp_new_ptr();
     src2 = tcg_temp_new_ptr();
     dst = dest_gpr(s, a->rd);
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));
 
     tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, a->rs2));
     tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
@@ -XXX,XX +XXX,XX @@ static bool trans_vfirst_m(DisasContext *s, arg_rmr *a)
     mask = tcg_temp_new_ptr();
     src2 = tcg_temp_new_ptr();
     dst = dest_gpr(s, a->rd);
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));
 
     tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, a->rs2));
     tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a)      \
         data = FIELD_DP32(data, VDATA, LMUL, s->lmul);                 \
         tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd),                         \
                            vreg_ofs(s, 0), vreg_ofs(s, a->rs2),        \
-                           cpu_env, s->vlen / 8, s->vlen / 8,          \
+                           cpu_env, s->cfg_ptr->vlen / 8,              \
+                           s->cfg_ptr->vlen / 8,                       \
                            data, fn);                                  \
         mark_vs_dirty(s);                                              \
         gen_set_label(over);                                           \
@@ -XXX,XX +XXX,XX @@ static bool trans_viota_m(DisasContext *s, arg_viota_m *a)
     };
     tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
                        vreg_ofs(s, a->rs2), cpu_env,
-                       s->vlen / 8, s->vlen / 8, data, fns[s->sew]);
+                       s->cfg_ptr->vlen / 8,
+                       s->cfg_ptr->vlen / 8, data, fns[s->sew]);
     mark_vs_dirty(s);
     gen_set_label(over);
     return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_vid_v(DisasContext *s, arg_vid_v *a)
         gen_helper_vid_v_w, gen_helper_vid_v_d,
     };
     tcg_gen_gvec_2_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
-                       cpu_env, s->vlen / 8, s->vlen / 8,
+                       cpu_env, s->cfg_ptr->vlen / 8,
+                       s->cfg_ptr->vlen / 8,
                        data, fns[s->sew]);
     mark_vs_dirty(s);
     gen_set_label(over);
@@ -XXX,XX +XXX,XX @@ static bool trans_vrgather_vx(DisasContext *s, arg_rmrr *a)
 
     if (a->vm && s->vl_eq_vlmax) {
         int scale = s->lmul - (s->sew + 3);
-        int vlmax = scale < 0 ? s->vlen >> -scale : s->vlen << scale;
+        int vlmax = scale < 0 ?
+                    s->cfg_ptr->vlen >> -scale : s->cfg_ptr->vlen << scale;
         TCGv_i64 dest = tcg_temp_new_i64();
 
         if (a->rs1 == 0) {
@@ -XXX,XX +XXX,XX @@ static bool trans_vrgather_vi(DisasContext *s, arg_rmrr *a)
 
     if (a->vm && s->vl_eq_vlmax) {
         int scale = s->lmul - (s->sew + 3);
-        int vlmax = scale < 0 ? s->vlen >> -scale : s->vlen << scale;
+        int vlmax = scale < 0 ?
+                    s->cfg_ptr->vlen >> -scale : s->cfg_ptr->vlen << scale;
         if (a->rs1 >= vlmax) {
             tcg_gen_gvec_dup_imm(MO_64, vreg_ofs(s, a->rd),
                                  MAXSZ(s), MAXSZ(s), 0);
@@ -XXX,XX +XXX,XX @@ static bool trans_vcompress_vm(DisasContext *s, arg_r *a)
562
data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
563
tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
564
vreg_ofs(s, a->rs1), vreg_ofs(s, a->rs2),
565
- cpu_env, s->vlen / 8, s->vlen / 8, data,
566
+ cpu_env, s->cfg_ptr->vlen / 8,
567
+ s->cfg_ptr->vlen / 8, data,
568
fns[s->sew]);
569
mark_vs_dirty(s);
570
gen_set_label(over);
571
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_##NAME * a) \
572
if (require_rvv(s) && \
573
QEMU_IS_ALIGNED(a->rd, LEN) && \
574
QEMU_IS_ALIGNED(a->rs2, LEN)) { \
575
- uint32_t maxsz = (s->vlen >> 3) * LEN; \
576
+ uint32_t maxsz = (s->cfg_ptr->vlen >> 3) * LEN; \
577
if (s->vstart == 0) { \
578
/* EEW = 8 */ \
579
tcg_gen_gvec_mov(MO_8, vreg_ofs(s, a->rd), \
580
@@ -XXX,XX +XXX,XX @@ static bool int_ext_op(DisasContext *s, arg_rmr *a, uint8_t seq)
581
582
tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
583
vreg_ofs(s, a->rs2), cpu_env,
584
- s->vlen / 8, s->vlen / 8, data, fn);
585
+ s->cfg_ptr->vlen / 8,
586
+ s->cfg_ptr->vlen / 8, data, fn);
587
588
mark_vs_dirty(s);
589
gen_set_label(over);
590
diff --git a/target/riscv/insn_trans/trans_rvzfh.c.inc b/target/riscv/insn_trans/trans_rvzfh.c.inc
591
index XXXXXXX..XXXXXXX 100644
592
--- a/target/riscv/insn_trans/trans_rvzfh.c.inc
593
+++ b/target/riscv/insn_trans/trans_rvzfh.c.inc
594
@@ -XXX,XX +XXX,XX @@
595
*/
596
597
#define REQUIRE_ZFH(ctx) do { \
598
- if (!ctx->ext_zfh) { \
599
+ if (!ctx->cfg_ptr->ext_zfh) { \
600
return false; \
601
} \
602
} while (0)
603
604
#define REQUIRE_ZFH_OR_ZFHMIN(ctx) do { \
605
- if (!(ctx->ext_zfh || ctx->ext_zfhmin)) { \
606
+ if (!(ctx->cfg_ptr->ext_zfh || ctx->cfg_ptr->ext_zfhmin)) { \
607
return false; \
608
} \
609
} while (0)
610
--
105
--
611
2.34.1
106
2.41.0
612
613
diff view generated by jsdifflib
From: Dickon Hood <dickon.hood@codethink.co.uk>

Zvbb (implemented in later commit) has a widening instruction, which
requires an extra check on the enabled extensions. Refactor
GEN_OPIVX_WIDEN_TRANS() to take a check function to avoid reimplementing
it.

Signed-off-by: Dickon Hood <dickon.hood@codethink.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
Message-ID: <20230711165917.2629866-7-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/insn_trans/trans_rvv.c.inc | 52 +++++++++++--------------
1 file changed, 23 insertions(+), 29 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -XXX,XX +XXX,XX @@ static bool opivx_widen_check(DisasContext *s, arg_rmrr *a)
vext_check_ds(s, a->rd, a->rs2, a->vm);
}

-static bool do_opivx_widen(DisasContext *s, arg_rmrr *a,
- gen_helper_opivx *fn)
-{
- if (opivx_widen_check(s, a)) {
- return opivx_trans(a->rd, a->rs1, a->rs2, a->vm, fn, s);
- }
- return false;
-}
-
-#define GEN_OPIVX_WIDEN_TRANS(NAME) \
-static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
-{ \
- static gen_helper_opivx * const fns[3] = { \
- gen_helper_##NAME##_b, \
- gen_helper_##NAME##_h, \
- gen_helper_##NAME##_w \
- }; \
- return do_opivx_widen(s, a, fns[s->sew]); \
+#define GEN_OPIVX_WIDEN_TRANS(NAME, CHECK) \
+static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+{ \
+ if (CHECK(s, a)) { \
+ static gen_helper_opivx * const fns[3] = { \
+ gen_helper_##NAME##_b, \
+ gen_helper_##NAME##_h, \
+ gen_helper_##NAME##_w \
+ }; \
+ return opivx_trans(a->rd, a->rs1, a->rs2, a->vm, fns[s->sew], s); \
+ } \
+ return false; \
}

-GEN_OPIVX_WIDEN_TRANS(vwaddu_vx)
-GEN_OPIVX_WIDEN_TRANS(vwadd_vx)
-GEN_OPIVX_WIDEN_TRANS(vwsubu_vx)
-GEN_OPIVX_WIDEN_TRANS(vwsub_vx)
+GEN_OPIVX_WIDEN_TRANS(vwaddu_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwadd_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwsubu_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwsub_vx, opivx_widen_check)

/* WIDEN OPIVV with WIDEN */
static bool opiwv_widen_check(DisasContext *s, arg_rmrr *a)
@@ -XXX,XX +XXX,XX @@ GEN_OPIVX_TRANS(vrem_vx, opivx_check)
GEN_OPIVV_WIDEN_TRANS(vwmul_vv, opivv_widen_check)
GEN_OPIVV_WIDEN_TRANS(vwmulu_vv, opivv_widen_check)
GEN_OPIVV_WIDEN_TRANS(vwmulsu_vv, opivv_widen_check)
-GEN_OPIVX_WIDEN_TRANS(vwmul_vx)
-GEN_OPIVX_WIDEN_TRANS(vwmulu_vx)
-GEN_OPIVX_WIDEN_TRANS(vwmulsu_vx)
+GEN_OPIVX_WIDEN_TRANS(vwmul_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwmulu_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwmulsu_vx, opivx_widen_check)

/* Vector Single-Width Integer Multiply-Add Instructions */
GEN_OPIVV_TRANS(vmacc_vv, opivv_check)
@@ -XXX,XX +XXX,XX @@ GEN_OPIVX_TRANS(vnmsub_vx, opivx_check)
GEN_OPIVV_WIDEN_TRANS(vwmaccu_vv, opivv_widen_check)
GEN_OPIVV_WIDEN_TRANS(vwmacc_vv, opivv_widen_check)
GEN_OPIVV_WIDEN_TRANS(vwmaccsu_vv, opivv_widen_check)
-GEN_OPIVX_WIDEN_TRANS(vwmaccu_vx)
-GEN_OPIVX_WIDEN_TRANS(vwmacc_vx)
-GEN_OPIVX_WIDEN_TRANS(vwmaccsu_vx)
-GEN_OPIVX_WIDEN_TRANS(vwmaccus_vx)
+GEN_OPIVX_WIDEN_TRANS(vwmaccu_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwmacc_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwmaccsu_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwmaccus_vx, opivx_widen_check)

/* Vector Integer Merge and Move Instructions */
static bool trans_vmv_v_v(DisasContext *s, arg_vmv_v_v *a)
--
2.41.0
From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>

Move some macros out of `vector_helper` and into `vector_internals`.
This ensures they can be used by both vector and vector-crypto helpers
(latter implemented in proceeding commits).

Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
Message-ID: <20230711165917.2629866-8-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/vector_internals.h | 46 +++++++++++++++++++++++++++++++++
target/riscv/vector_helper.c | 42 ------------------------------
2 files changed, 46 insertions(+), 42 deletions(-)

diff --git a/target/riscv/vector_internals.h b/target/riscv/vector_internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/vector_internals.h
+++ b/target/riscv/vector_internals.h
@@ -XXX,XX +XXX,XX @@ void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
/* expand macro args before macro */
#define RVVCALL(macro, ...) macro(__VA_ARGS__)

+/* (TD, T2, TX2) */
+#define OP_UU_B uint8_t, uint8_t, uint8_t
+#define OP_UU_H uint16_t, uint16_t, uint16_t
+#define OP_UU_W uint32_t, uint32_t, uint32_t
+#define OP_UU_D uint64_t, uint64_t, uint64_t
+
/* (TD, T1, T2, TX1, TX2) */
#define OP_UUU_B uint8_t, uint8_t, uint8_t, uint8_t, uint8_t
#define OP_UUU_H uint16_t, uint16_t, uint16_t, uint16_t, uint16_t
#define OP_UUU_W uint32_t, uint32_t, uint32_t, uint32_t, uint32_t
#define OP_UUU_D uint64_t, uint64_t, uint64_t, uint64_t, uint64_t

+#define OPIVV1(NAME, TD, T2, TX2, HD, HS2, OP) \
+static void do_##NAME(void *vd, void *vs2, int i) \
+{ \
+ TX2 s2 = *((T2 *)vs2 + HS2(i)); \
+ *((TD *)vd + HD(i)) = OP(s2); \
+}
+
+#define GEN_VEXT_V(NAME, ESZ) \
+void HELPER(NAME)(void *vd, void *v0, void *vs2, \
+ CPURISCVState *env, uint32_t desc) \
+{ \
+ uint32_t vm = vext_vm(desc); \
+ uint32_t vl = env->vl; \
+ uint32_t total_elems = \
+ vext_get_total_elems(env, desc, ESZ); \
+ uint32_t vta = vext_vta(desc); \
+ uint32_t vma = vext_vma(desc); \
+ uint32_t i; \
+ \
+ for (i = env->vstart; i < vl; i++) { \
+ if (!vm && !vext_elem_mask(v0, i)) { \
+ /* set masked-off elements to 1s */ \
+ vext_set_elems_1s(vd, vma, i * ESZ, \
+ (i + 1) * ESZ); \
+ continue; \
+ } \
+ do_##NAME(vd, vs2, i); \
+ } \
+ env->vstart = 0; \
+ /* set tail elements to 1s */ \
+ vext_set_elems_1s(vd, vta, vl * ESZ, \
+ total_elems * ESZ); \
+}
+
/* operation of two vector elements */
typedef void opivv2_fn(void *vd, void *vs1, void *vs2, int i);

@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \
do_##NAME, ESZ); \
}

+/* Three of the widening shortening macros: */
+/* (TD, T1, T2, TX1, TX2) */
+#define WOP_UUU_B uint16_t, uint8_t, uint8_t, uint16_t, uint16_t
+#define WOP_UUU_H uint32_t, uint16_t, uint16_t, uint32_t, uint32_t
+#define WOP_UUU_W uint64_t, uint32_t, uint32_t, uint64_t, uint64_t
+
#endif /* TARGET_RISCV_VECTOR_INTERNALS_H */
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b)
#define OP_SUS_H int16_t, uint16_t, int16_t, uint16_t, int16_t
#define OP_SUS_W int32_t, uint32_t, int32_t, uint32_t, int32_t
#define OP_SUS_D int64_t, uint64_t, int64_t, uint64_t, int64_t
-#define WOP_UUU_B uint16_t, uint8_t, uint8_t, uint16_t, uint16_t
-#define WOP_UUU_H uint32_t, uint16_t, uint16_t, uint32_t, uint32_t
-#define WOP_UUU_W uint64_t, uint32_t, uint32_t, uint64_t, uint64_t
#define WOP_SSS_B int16_t, int8_t, int8_t, int16_t, int16_t
#define WOP_SSS_H int32_t, int16_t, int16_t, int32_t, int32_t
#define WOP_SSS_W int64_t, int32_t, int32_t, int64_t, int64_t
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_VF(vfwnmsac_vf_h, 4)
GEN_VEXT_VF(vfwnmsac_vf_w, 8)

/* Vector Floating-Point Square-Root Instruction */
-/* (TD, T2, TX2) */
-#define OP_UU_H uint16_t, uint16_t, uint16_t
-#define OP_UU_W uint32_t, uint32_t, uint32_t
-#define OP_UU_D uint64_t, uint64_t, uint64_t
-
#define OPFVV1(NAME, TD, T2, TX2, HD, HS2, OP) \
static void do_##NAME(void *vd, void *vs2, int i, \
CPURISCVState *env) \
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_CMP_VF(vmfge_vf_w, uint32_t, H4, vmfge32)
GEN_VEXT_CMP_VF(vmfge_vf_d, uint64_t, H8, vmfge64)

/* Vector Floating-Point Classify Instruction */
-#define OPIVV1(NAME, TD, T2, TX2, HD, HS2, OP) \
-static void do_##NAME(void *vd, void *vs2, int i) \
-{ \
- TX2 s2 = *((T2 *)vs2 + HS2(i)); \
- *((TD *)vd + HD(i)) = OP(s2); \
-}
-
-#define GEN_VEXT_V(NAME, ESZ) \
-void HELPER(NAME)(void *vd, void *v0, void *vs2, \
- CPURISCVState *env, uint32_t desc) \
-{ \
- uint32_t vm = vext_vm(desc); \
- uint32_t vl = env->vl; \
- uint32_t total_elems = \
- vext_get_total_elems(env, desc, ESZ); \
- uint32_t vta = vext_vta(desc); \
- uint32_t vma = vext_vma(desc); \
- uint32_t i; \
- \
- for (i = env->vstart; i < vl; i++) { \
- if (!vm && !vext_elem_mask(v0, i)) { \
- /* set masked-off elements to 1s */ \
- vext_set_elems_1s(vd, vma, i * ESZ, \
- (i + 1) * ESZ); \
- continue; \
- } \
- do_##NAME(vd, vs2, i); \
- } \
- env->vstart = 0; \
- /* set tail elements to 1s */ \
- vext_set_elems_1s(vd, vta, vl * ESZ, \
- total_elems * ESZ); \
-}
-
target_ulong fclass_h(uint64_t frs1)
{
float16 f = frs1;
--
2.41.0
1
From: Anup Patel <anup.patel@wdc.com>
1
From: Dickon Hood <dickon.hood@codethink.co.uk>
2
2
3
The AIA spec defines programmable 8-bit priority for each local interrupt
3
This commit adds support for the Zvbb vector-crypto extension, which
4
at M-level, S-level and VS-level so we extend local interrupt processing
4
consists of the following instructions:
5
to consider AIA interrupt priorities. The AIA CSRs which help software
6
configure local interrupt priorities will be added by subsequent patches.
7
5
8
Signed-off-by: Anup Patel <anup.patel@wdc.com>
6
* vrol.[vv,vx]
9
Signed-off-by: Anup Patel <anup@brainfault.org>
7
* vror.[vv,vx,vi]
10
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
8
* vbrev8.v
11
Message-id: 20220204174700.534953-10-anup@brainfault.org
9
* vrev8.v
10
* vandn.[vv,vx]
11
* vbrev.v
12
* vclz.v
13
* vctz.v
14
* vcpop.v
15
* vwsll.[vv,vx,vi]
16
17
Translation functions are defined in
18
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
19
`target/riscv/vcrypto_helper.c`.
20
21
Co-authored-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
22
Co-authored-by: William Salmon <will.salmon@codethink.co.uk>
23
Co-authored-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
24
[max.chou@sifive.com: Fix imm mode of vror.vi]
25
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
26
Signed-off-by: William Salmon <will.salmon@codethink.co.uk>
27
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
28
Signed-off-by: Dickon Hood <dickon.hood@codethink.co.uk>
29
Signed-off-by: Max Chou <max.chou@sifive.com>
30
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
31
[max.chou@sifive.com: Exposed x-zvbb property]
32
Message-ID: <20230711165917.2629866-9-max.chou@sifive.com>
12
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
33
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
13
---
34
---
14
target/riscv/cpu.h | 12 ++
35
target/riscv/cpu_cfg.h | 1 +
15
target/riscv/cpu.c | 19 +++
36
target/riscv/helper.h | 62 +++++++++
16
target/riscv/cpu_helper.c | 281 +++++++++++++++++++++++++++++++++++---
37
target/riscv/insn32.decode | 20 +++
17
target/riscv/machine.c | 3 +
38
target/riscv/cpu.c | 12 ++
18
4 files changed, 294 insertions(+), 21 deletions(-)
39
target/riscv/vcrypto_helper.c | 138 +++++++++++++++++++
40
target/riscv/insn_trans/trans_rvvk.c.inc | 164 +++++++++++++++++++++++
41
6 files changed, 397 insertions(+)
19
42
20
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
43
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
21
index XXXXXXX..XXXXXXX 100644
44
index XXXXXXX..XXXXXXX 100644
22
--- a/target/riscv/cpu.h
45
--- a/target/riscv/cpu_cfg.h
23
+++ b/target/riscv/cpu.h
46
+++ b/target/riscv/cpu_cfg.h
24
@@ -XXX,XX +XXX,XX @@ struct CPURISCVState {
47
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
25
target_ulong mcause;
48
bool ext_zve32f;
26
target_ulong mtval; /* since: priv-1.10.0 */
49
bool ext_zve64f;
27
50
bool ext_zve64d;
28
+ /* Machine and Supervisor interrupt priorities */
51
+ bool ext_zvbb;
29
+ uint8_t miprio[64];
52
bool ext_zvbc;
30
+ uint8_t siprio[64];
53
bool ext_zmmul;
31
+
54
bool ext_zvfbfmin;
32
/* Hypervisor CSRs */
55
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
33
target_ulong hstatus;
56
index XXXXXXX..XXXXXXX 100644
34
target_ulong hedeleg;
57
--- a/target/riscv/helper.h
35
@@ -XXX,XX +XXX,XX @@ struct CPURISCVState {
58
+++ b/target/riscv/helper.h
36
target_ulong hgeip;
59
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_6(vclmul_vv, void, ptr, ptr, ptr, ptr, env, i32)
37
uint64_t htimedelta;
60
DEF_HELPER_6(vclmul_vx, void, ptr, ptr, tl, ptr, env, i32)
38
61
DEF_HELPER_6(vclmulh_vv, void, ptr, ptr, ptr, ptr, env, i32)
39
+ /* Hypervisor controlled virtual interrupt priorities */
62
DEF_HELPER_6(vclmulh_vx, void, ptr, ptr, tl, ptr, env, i32)
40
+ uint8_t hviprio[64];
63
+
41
+
64
+DEF_HELPER_6(vror_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
42
/* Upper 64-bits of 128-bit CSRs */
65
+DEF_HELPER_6(vror_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
43
uint64_t mscratchh;
66
+DEF_HELPER_6(vror_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
44
uint64_t sscratchh;
67
+DEF_HELPER_6(vror_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
45
@@ -XXX,XX +XXX,XX @@ int riscv_cpu_write_elf32_note(WriteCoreDumpFunction f, CPUState *cs,
68
+
46
int cpuid, void *opaque);
69
+DEF_HELPER_6(vror_vx_b, void, ptr, ptr, tl, ptr, env, i32)
47
int riscv_cpu_gdb_read_register(CPUState *cpu, GByteArray *buf, int reg);
70
+DEF_HELPER_6(vror_vx_h, void, ptr, ptr, tl, ptr, env, i32)
48
int riscv_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
71
+DEF_HELPER_6(vror_vx_w, void, ptr, ptr, tl, ptr, env, i32)
49
+int riscv_cpu_hviprio_index2irq(int index, int *out_irq, int *out_rdzero);
72
+DEF_HELPER_6(vror_vx_d, void, ptr, ptr, tl, ptr, env, i32)
50
+uint8_t riscv_cpu_default_priority(int irq);
73
+
51
+int riscv_cpu_mirq_pending(CPURISCVState *env);
74
+DEF_HELPER_6(vrol_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
52
+int riscv_cpu_sirq_pending(CPURISCVState *env);
75
+DEF_HELPER_6(vrol_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
53
+int riscv_cpu_vsirq_pending(CPURISCVState *env);
76
+DEF_HELPER_6(vrol_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
54
bool riscv_cpu_fp_enabled(CPURISCVState *env);
77
+DEF_HELPER_6(vrol_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
55
target_ulong riscv_cpu_get_geilen(CPURISCVState *env);
78
+
56
void riscv_cpu_set_geilen(CPURISCVState *env, target_ulong geilen);
79
+DEF_HELPER_6(vrol_vx_b, void, ptr, ptr, tl, ptr, env, i32)
80
+DEF_HELPER_6(vrol_vx_h, void, ptr, ptr, tl, ptr, env, i32)
81
+DEF_HELPER_6(vrol_vx_w, void, ptr, ptr, tl, ptr, env, i32)
82
+DEF_HELPER_6(vrol_vx_d, void, ptr, ptr, tl, ptr, env, i32)
83
+
84
+DEF_HELPER_5(vrev8_v_b, void, ptr, ptr, ptr, env, i32)
85
+DEF_HELPER_5(vrev8_v_h, void, ptr, ptr, ptr, env, i32)
86
+DEF_HELPER_5(vrev8_v_w, void, ptr, ptr, ptr, env, i32)
87
+DEF_HELPER_5(vrev8_v_d, void, ptr, ptr, ptr, env, i32)
88
+DEF_HELPER_5(vbrev8_v_b, void, ptr, ptr, ptr, env, i32)
89
+DEF_HELPER_5(vbrev8_v_h, void, ptr, ptr, ptr, env, i32)
90
+DEF_HELPER_5(vbrev8_v_w, void, ptr, ptr, ptr, env, i32)
91
+DEF_HELPER_5(vbrev8_v_d, void, ptr, ptr, ptr, env, i32)
92
+DEF_HELPER_5(vbrev_v_b, void, ptr, ptr, ptr, env, i32)
93
+DEF_HELPER_5(vbrev_v_h, void, ptr, ptr, ptr, env, i32)
94
+DEF_HELPER_5(vbrev_v_w, void, ptr, ptr, ptr, env, i32)
95
+DEF_HELPER_5(vbrev_v_d, void, ptr, ptr, ptr, env, i32)
96
+
97
+DEF_HELPER_5(vclz_v_b, void, ptr, ptr, ptr, env, i32)
98
+DEF_HELPER_5(vclz_v_h, void, ptr, ptr, ptr, env, i32)
99
+DEF_HELPER_5(vclz_v_w, void, ptr, ptr, ptr, env, i32)
100
+DEF_HELPER_5(vclz_v_d, void, ptr, ptr, ptr, env, i32)
101
+DEF_HELPER_5(vctz_v_b, void, ptr, ptr, ptr, env, i32)
102
+DEF_HELPER_5(vctz_v_h, void, ptr, ptr, ptr, env, i32)
103
+DEF_HELPER_5(vctz_v_w, void, ptr, ptr, ptr, env, i32)
104
+DEF_HELPER_5(vctz_v_d, void, ptr, ptr, ptr, env, i32)
105
+DEF_HELPER_5(vcpop_v_b, void, ptr, ptr, ptr, env, i32)
106
+DEF_HELPER_5(vcpop_v_h, void, ptr, ptr, ptr, env, i32)
107
+DEF_HELPER_5(vcpop_v_w, void, ptr, ptr, ptr, env, i32)
108
+DEF_HELPER_5(vcpop_v_d, void, ptr, ptr, ptr, env, i32)
109
+
110
+DEF_HELPER_6(vwsll_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
111
+DEF_HELPER_6(vwsll_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
112
+DEF_HELPER_6(vwsll_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
113
+DEF_HELPER_6(vwsll_vx_b, void, ptr, ptr, tl, ptr, env, i32)
114
+DEF_HELPER_6(vwsll_vx_h, void, ptr, ptr, tl, ptr, env, i32)
115
+DEF_HELPER_6(vwsll_vx_w, void, ptr, ptr, tl, ptr, env, i32)
116
+
117
+DEF_HELPER_6(vandn_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
118
+DEF_HELPER_6(vandn_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
119
+DEF_HELPER_6(vandn_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
120
+DEF_HELPER_6(vandn_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
121
+DEF_HELPER_6(vandn_vx_b, void, ptr, ptr, tl, ptr, env, i32)
122
+DEF_HELPER_6(vandn_vx_h, void, ptr, ptr, tl, ptr, env, i32)
123
+DEF_HELPER_6(vandn_vx_w, void, ptr, ptr, tl, ptr, env, i32)
124
+DEF_HELPER_6(vandn_vx_d, void, ptr, ptr, tl, ptr, env, i32)
125
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
126
index XXXXXXX..XXXXXXX 100644
127
--- a/target/riscv/insn32.decode
128
+++ b/target/riscv/insn32.decode
129
@@ -XXX,XX +XXX,XX @@
130
%imm_u 12:s20 !function=ex_shift_12
131
%imm_bs 30:2 !function=ex_shift_3
132
%imm_rnum 20:4
133
+%imm_z6 26:1 15:5
134
135
# Argument sets:
136
&empty
137
@@ -XXX,XX +XXX,XX @@
138
@r_vm ...... vm:1 ..... ..... ... ..... ....... &rmrr %rs2 %rs1 %rd
139
@r_vm_1 ...... . ..... ..... ... ..... ....... &rmrr vm=1 %rs2 %rs1 %rd
140
@r_vm_0 ...... . ..... ..... ... ..... ....... &rmrr vm=0 %rs2 %rs1 %rd
141
+@r2_zimm6 ..... . vm:1 ..... ..... ... ..... ....... &rmrr %rs2 rs1=%imm_z6 %rd
142
@r2_zimm11 . zimm:11 ..... ... ..... ....... %rs1 %rd
143
@r2_zimm10 .. zimm:10 ..... ... ..... ....... %rs1 %rd
144
@r2_s ....... ..... ..... ... ..... ....... %rs2 %rs1
145
@@ -XXX,XX +XXX,XX @@ vclmul_vv 001100 . ..... ..... 010 ..... 1010111 @r_vm
146
vclmul_vx 001100 . ..... ..... 110 ..... 1010111 @r_vm
147
vclmulh_vv 001101 . ..... ..... 010 ..... 1010111 @r_vm
148
vclmulh_vx 001101 . ..... ..... 110 ..... 1010111 @r_vm
149
+
150
+# *** Zvbb vector crypto extension ***
151
+vrol_vv 010101 . ..... ..... 000 ..... 1010111 @r_vm
152
+vrol_vx 010101 . ..... ..... 100 ..... 1010111 @r_vm
153
+vror_vv 010100 . ..... ..... 000 ..... 1010111 @r_vm
154
+vror_vx 010100 . ..... ..... 100 ..... 1010111 @r_vm
155
+vror_vi 01010. . ..... ..... 011 ..... 1010111 @r2_zimm6
156
+vbrev8_v 010010 . ..... 01000 010 ..... 1010111 @r2_vm
157
+vrev8_v 010010 . ..... 01001 010 ..... 1010111 @r2_vm
158
+vandn_vv 000001 . ..... ..... 000 ..... 1010111 @r_vm
159
+vandn_vx 000001 . ..... ..... 100 ..... 1010111 @r_vm
160
+vbrev_v 010010 . ..... 01010 010 ..... 1010111 @r2_vm
161
+vclz_v 010010 . ..... 01100 010 ..... 1010111 @r2_vm
162
+vctz_v 010010 . ..... 01101 010 ..... 1010111 @r2_vm
163
+vcpop_v 010010 . ..... 01110 010 ..... 1010111 @r2_vm
164
+vwsll_vv 110101 . ..... ..... 000 ..... 1010111 @r_vm
165
+vwsll_vx 110101 . ..... ..... 100 ..... 1010111 @r_vm
166
+vwsll_vi 110101 . ..... ..... 011 ..... 1010111 @r_vm
57
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
167
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
58
index XXXXXXX..XXXXXXX 100644
168
index XXXXXXX..XXXXXXX 100644
59
--- a/target/riscv/cpu.c
169
--- a/target/riscv/cpu.c
60
+++ b/target/riscv/cpu.c
170
+++ b/target/riscv/cpu.c
61
@@ -XXX,XX +XXX,XX @@ void restore_state_to_opc(CPURISCVState *env, TranslationBlock *tb,
171
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
62
172
ISA_EXT_DATA_ENTRY(zksed, PRIV_VERSION_1_12_0, ext_zksed),
63
static void riscv_cpu_reset(DeviceState *dev)
173
ISA_EXT_DATA_ENTRY(zksh, PRIV_VERSION_1_12_0, ext_zksh),
64
{
174
ISA_EXT_DATA_ENTRY(zkt, PRIV_VERSION_1_12_0, ext_zkt),
65
+#ifndef CONFIG_USER_ONLY
175
+ ISA_EXT_DATA_ENTRY(zvbb, PRIV_VERSION_1_12_0, ext_zvbb),
66
+ uint8_t iprio;
176
ISA_EXT_DATA_ENTRY(zvbc, PRIV_VERSION_1_12_0, ext_zvbc),
67
+ int i, irq, rdzero;
177
ISA_EXT_DATA_ENTRY(zve32f, PRIV_VERSION_1_10_0, ext_zve32f),
68
+#endif
178
ISA_EXT_DATA_ENTRY(zve64f, PRIV_VERSION_1_10_0, ext_zve64f),
69
CPUState *cs = CPU(dev);
179
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
70
RISCVCPU *cpu = RISCV_CPU(cs);
180
return;
71
RISCVCPUClass *mcc = RISCV_CPU_GET_CLASS(cpu);
181
}
72
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset(DeviceState *dev)
182
73
env->miclaim = MIP_SGEIP;
183
+ /*
74
env->pc = env->resetvec;
184
+ * In principle Zve*x would also suffice here, were they supported
75
env->two_stage_lookup = false;
185
+ * in qemu
76
+
186
+ */
77
+ /* Initialized default priorities of local interrupts. */
187
+ if (cpu->cfg.ext_zvbb && !cpu->cfg.ext_zve32f) {
78
+ for (i = 0; i < ARRAY_SIZE(env->miprio); i++) {
188
+ error_setg(errp,
79
+ iprio = riscv_cpu_default_priority(i);
189
+ "Vector crypto extensions require V or Zve* extensions");
80
+ env->miprio[i] = (i == IRQ_M_EXT) ? 0 : iprio;
190
+ return;
81
+ env->siprio[i] = (i == IRQ_S_EXT) ? 0 : iprio;
191
+ }
82
+ env->hviprio[i] = 0;
192
+
83
+ }
193
if (cpu->cfg.ext_zvbc && !cpu->cfg.ext_zve64f) {
84
+ i = 0;
194
error_setg(errp, "Zvbc extension requires V or Zve64{f,d} extensions");
85
+ while (!riscv_cpu_hviprio_index2irq(i, &irq, &rdzero)) {
195
return;
86
+ if (!rdzero) {
196
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
87
+ env->hviprio[irq] = env->miprio[irq];
197
DEFINE_PROP_BOOL("x-zvfbfwma", RISCVCPU, cfg.ext_zvfbfwma, false),
88
+ }
198
89
+ i++;
199
/* Vector cryptography extensions */
90
+ }
200
+ DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
91
/* mmte is supposed to have pm.current hardwired to 1 */
201
DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
92
env->mmte |= (PM_EXT_INITIAL | MMTE_M_PM_CURRENT);
202
93
#endif
203
DEFINE_PROP_END_OF_LIST(),
94
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
204
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
95
index XXXXXXX..XXXXXXX 100644
205
index XXXXXXX..XXXXXXX 100644
96
--- a/target/riscv/cpu_helper.c
206
--- a/target/riscv/vcrypto_helper.c
97
+++ b/target/riscv/cpu_helper.c
207
+++ b/target/riscv/vcrypto_helper.c
98
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_update_mask(CPURISCVState *env)
208
@@ -XXX,XX +XXX,XX @@
99
}
209
#include "qemu/osdep.h"
100
210
#include "qemu/host-utils.h"
101
#ifndef CONFIG_USER_ONLY
211
#include "qemu/bitops.h"
102
-static int riscv_cpu_local_irq_pending(CPURISCVState *env)
212
+#include "qemu/bswap.h"
213
#include "cpu.h"
214
#include "exec/memop.h"
215
#include "exec/exec-all.h"
216
@@ -XXX,XX +XXX,XX @@ RVVCALL(OPIVV2, vclmulh_vv, OP_UUU_D, H8, H8, H8, clmulh64)
217
GEN_VEXT_VV(vclmulh_vv, 8)
218
RVVCALL(OPIVX2, vclmulh_vx, OP_UUU_D, H8, H8, clmulh64)
219
GEN_VEXT_VX(vclmulh_vx, 8)
220
+
221
+RVVCALL(OPIVV2, vror_vv_b, OP_UUU_B, H1, H1, H1, ror8)
222
+RVVCALL(OPIVV2, vror_vv_h, OP_UUU_H, H2, H2, H2, ror16)
223
+RVVCALL(OPIVV2, vror_vv_w, OP_UUU_W, H4, H4, H4, ror32)
224
+RVVCALL(OPIVV2, vror_vv_d, OP_UUU_D, H8, H8, H8, ror64)
225
+GEN_VEXT_VV(vror_vv_b, 1)
226
+GEN_VEXT_VV(vror_vv_h, 2)
227
+GEN_VEXT_VV(vror_vv_w, 4)
228
+GEN_VEXT_VV(vror_vv_d, 8)
229
+
230
+RVVCALL(OPIVX2, vror_vx_b, OP_UUU_B, H1, H1, ror8)
231
+RVVCALL(OPIVX2, vror_vx_h, OP_UUU_H, H2, H2, ror16)
232
+RVVCALL(OPIVX2, vror_vx_w, OP_UUU_W, H4, H4, ror32)
233
+RVVCALL(OPIVX2, vror_vx_d, OP_UUU_D, H8, H8, ror64)
234
+GEN_VEXT_VX(vror_vx_b, 1)
235
+GEN_VEXT_VX(vror_vx_h, 2)
236
+GEN_VEXT_VX(vror_vx_w, 4)
237
+GEN_VEXT_VX(vror_vx_d, 8)
238
+
239
+RVVCALL(OPIVV2, vrol_vv_b, OP_UUU_B, H1, H1, H1, rol8)
240
+RVVCALL(OPIVV2, vrol_vv_h, OP_UUU_H, H2, H2, H2, rol16)
241
+RVVCALL(OPIVV2, vrol_vv_w, OP_UUU_W, H4, H4, H4, rol32)
242
+RVVCALL(OPIVV2, vrol_vv_d, OP_UUU_D, H8, H8, H8, rol64)
243
+GEN_VEXT_VV(vrol_vv_b, 1)
244
+GEN_VEXT_VV(vrol_vv_h, 2)
245
+GEN_VEXT_VV(vrol_vv_w, 4)
246
+GEN_VEXT_VV(vrol_vv_d, 8)
247
+
248
+RVVCALL(OPIVX2, vrol_vx_b, OP_UUU_B, H1, H1, rol8)
249
+RVVCALL(OPIVX2, vrol_vx_h, OP_UUU_H, H2, H2, rol16)
250
+RVVCALL(OPIVX2, vrol_vx_w, OP_UUU_W, H4, H4, rol32)
251
+RVVCALL(OPIVX2, vrol_vx_d, OP_UUU_D, H8, H8, rol64)
252
+GEN_VEXT_VX(vrol_vx_b, 1)
253
+GEN_VEXT_VX(vrol_vx_h, 2)
254
+GEN_VEXT_VX(vrol_vx_w, 4)
255
+GEN_VEXT_VX(vrol_vx_d, 8)
256
+
257
+static uint64_t brev8(uint64_t val)
258
+{
259
+ val = ((val & 0x5555555555555555ull) << 1) |
260
+ ((val & 0xAAAAAAAAAAAAAAAAull) >> 1);
261
+ val = ((val & 0x3333333333333333ull) << 2) |
262
+ ((val & 0xCCCCCCCCCCCCCCCCull) >> 2);
263
+ val = ((val & 0x0F0F0F0F0F0F0F0Full) << 4) |
264
+ ((val & 0xF0F0F0F0F0F0F0F0ull) >> 4);
265
+
266
+ return val;
267
+}
268
+
269
+RVVCALL(OPIVV1, vbrev8_v_b, OP_UU_B, H1, H1, brev8)
270
+RVVCALL(OPIVV1, vbrev8_v_h, OP_UU_H, H2, H2, brev8)
271
+RVVCALL(OPIVV1, vbrev8_v_w, OP_UU_W, H4, H4, brev8)
272
+RVVCALL(OPIVV1, vbrev8_v_d, OP_UU_D, H8, H8, brev8)
273
+GEN_VEXT_V(vbrev8_v_b, 1)
274
+GEN_VEXT_V(vbrev8_v_h, 2)
275
+GEN_VEXT_V(vbrev8_v_w, 4)
276
+GEN_VEXT_V(vbrev8_v_d, 8)
277
+
278
+#define DO_IDENTITY(a) (a)
279
+RVVCALL(OPIVV1, vrev8_v_b, OP_UU_B, H1, H1, DO_IDENTITY)
280
+RVVCALL(OPIVV1, vrev8_v_h, OP_UU_H, H2, H2, bswap16)
281
+RVVCALL(OPIVV1, vrev8_v_w, OP_UU_W, H4, H4, bswap32)
+RVVCALL(OPIVV1, vrev8_v_d, OP_UU_D, H8, H8, bswap64)
+GEN_VEXT_V(vrev8_v_b, 1)
+GEN_VEXT_V(vrev8_v_h, 2)
+GEN_VEXT_V(vrev8_v_w, 4)
+GEN_VEXT_V(vrev8_v_d, 8)
+
+#define DO_ANDN(a, b) ((a) & ~(b))
+RVVCALL(OPIVV2, vandn_vv_b, OP_UUU_B, H1, H1, H1, DO_ANDN)
+RVVCALL(OPIVV2, vandn_vv_h, OP_UUU_H, H2, H2, H2, DO_ANDN)
+RVVCALL(OPIVV2, vandn_vv_w, OP_UUU_W, H4, H4, H4, DO_ANDN)
+RVVCALL(OPIVV2, vandn_vv_d, OP_UUU_D, H8, H8, H8, DO_ANDN)
+GEN_VEXT_VV(vandn_vv_b, 1)
+GEN_VEXT_VV(vandn_vv_h, 2)
+GEN_VEXT_VV(vandn_vv_w, 4)
+GEN_VEXT_VV(vandn_vv_d, 8)
+
+RVVCALL(OPIVX2, vandn_vx_b, OP_UUU_B, H1, H1, DO_ANDN)
+RVVCALL(OPIVX2, vandn_vx_h, OP_UUU_H, H2, H2, DO_ANDN)
+RVVCALL(OPIVX2, vandn_vx_w, OP_UUU_W, H4, H4, DO_ANDN)
+RVVCALL(OPIVX2, vandn_vx_d, OP_UUU_D, H8, H8, DO_ANDN)
+GEN_VEXT_VX(vandn_vx_b, 1)
+GEN_VEXT_VX(vandn_vx_h, 2)
+GEN_VEXT_VX(vandn_vx_w, 4)
+GEN_VEXT_VX(vandn_vx_d, 8)
+
+RVVCALL(OPIVV1, vbrev_v_b, OP_UU_B, H1, H1, revbit8)
+RVVCALL(OPIVV1, vbrev_v_h, OP_UU_H, H2, H2, revbit16)
+RVVCALL(OPIVV1, vbrev_v_w, OP_UU_W, H4, H4, revbit32)
+RVVCALL(OPIVV1, vbrev_v_d, OP_UU_D, H8, H8, revbit64)
+GEN_VEXT_V(vbrev_v_b, 1)
+GEN_VEXT_V(vbrev_v_h, 2)
+GEN_VEXT_V(vbrev_v_w, 4)
+GEN_VEXT_V(vbrev_v_d, 8)
+
+RVVCALL(OPIVV1, vclz_v_b, OP_UU_B, H1, H1, clz8)
+RVVCALL(OPIVV1, vclz_v_h, OP_UU_H, H2, H2, clz16)
+RVVCALL(OPIVV1, vclz_v_w, OP_UU_W, H4, H4, clz32)
+RVVCALL(OPIVV1, vclz_v_d, OP_UU_D, H8, H8, clz64)
+GEN_VEXT_V(vclz_v_b, 1)
+GEN_VEXT_V(vclz_v_h, 2)
+GEN_VEXT_V(vclz_v_w, 4)
+GEN_VEXT_V(vclz_v_d, 8)
+
+RVVCALL(OPIVV1, vctz_v_b, OP_UU_B, H1, H1, ctz8)
+RVVCALL(OPIVV1, vctz_v_h, OP_UU_H, H2, H2, ctz16)
+RVVCALL(OPIVV1, vctz_v_w, OP_UU_W, H4, H4, ctz32)
+RVVCALL(OPIVV1, vctz_v_d, OP_UU_D, H8, H8, ctz64)
+GEN_VEXT_V(vctz_v_b, 1)
+GEN_VEXT_V(vctz_v_h, 2)
+GEN_VEXT_V(vctz_v_w, 4)
+GEN_VEXT_V(vctz_v_d, 8)
+
+RVVCALL(OPIVV1, vcpop_v_b, OP_UU_B, H1, H1, ctpop8)
+RVVCALL(OPIVV1, vcpop_v_h, OP_UU_H, H2, H2, ctpop16)
+RVVCALL(OPIVV1, vcpop_v_w, OP_UU_W, H4, H4, ctpop32)
+RVVCALL(OPIVV1, vcpop_v_d, OP_UU_D, H8, H8, ctpop64)
+GEN_VEXT_V(vcpop_v_b, 1)
+GEN_VEXT_V(vcpop_v_h, 2)
+GEN_VEXT_V(vcpop_v_w, 4)
+GEN_VEXT_V(vcpop_v_d, 8)
+
+#define DO_SLL(N, M) (N << (M & (sizeof(N) * 8 - 1)))
+RVVCALL(OPIVV2, vwsll_vv_b, WOP_UUU_B, H2, H1, H1, DO_SLL)
+RVVCALL(OPIVV2, vwsll_vv_h, WOP_UUU_H, H4, H2, H2, DO_SLL)
+RVVCALL(OPIVV2, vwsll_vv_w, WOP_UUU_W, H8, H4, H4, DO_SLL)
+GEN_VEXT_VV(vwsll_vv_b, 2)
+GEN_VEXT_VV(vwsll_vv_h, 4)
+GEN_VEXT_VV(vwsll_vv_w, 8)
+
+RVVCALL(OPIVX2, vwsll_vx_b, WOP_UUU_B, H2, H1, DO_SLL)
+RVVCALL(OPIVX2, vwsll_vx_h, WOP_UUU_H, H4, H2, DO_SLL)
+RVVCALL(OPIVX2, vwsll_vx_w, WOP_UUU_W, H8, H4, DO_SLL)
+GEN_VEXT_VX(vwsll_vx_b, 2)
+GEN_VEXT_VX(vwsll_vx_h, 4)
+GEN_VEXT_VX(vwsll_vx_w, 8)
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -XXX,XX +XXX,XX @@ static bool vclmul_vx_check(DisasContext *s, arg_rmrr *a)
 
 GEN_VX_MASKED_TRANS(vclmul_vx, vclmul_vx_check)
 GEN_VX_MASKED_TRANS(vclmulh_vx, vclmul_vx_check)
+
+/*
+ * Zvbb
+ */
+
+#define GEN_OPIVI_GVEC_TRANS_CHECK(NAME, IMM_MODE, OPIVX, SUF, CHECK)   \
+    static bool trans_##NAME(DisasContext *s, arg_rmrr *a)              \
+    {                                                                   \
+        if (CHECK(s, a)) {                                              \
+            static gen_helper_opivx *const fns[4] = {                   \
+                gen_helper_##OPIVX##_b,                                 \
+                gen_helper_##OPIVX##_h,                                 \
+                gen_helper_##OPIVX##_w,                                 \
+                gen_helper_##OPIVX##_d,                                 \
+            };                                                          \
+            return do_opivi_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew], \
+                                 IMM_MODE);                             \
+        }                                                               \
+        return false;                                                   \
+    }
+
+#define GEN_OPIVV_GVEC_TRANS_CHECK(NAME, SUF, CHECK)                     \
+    static bool trans_##NAME(DisasContext *s, arg_rmrr *a)               \
+    {                                                                    \
+        if (CHECK(s, a)) {                                               \
+            static gen_helper_gvec_4_ptr *const fns[4] = {               \
+                gen_helper_##NAME##_b,                                   \
+                gen_helper_##NAME##_h,                                   \
+                gen_helper_##NAME##_w,                                   \
+                gen_helper_##NAME##_d,                                   \
+            };                                                           \
+            return do_opivv_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
+        }                                                                \
+        return false;                                                    \
+    }
+
+#define GEN_OPIVX_GVEC_SHIFT_TRANS_CHECK(NAME, SUF, CHECK)     \
+    static bool trans_##NAME(DisasContext *s, arg_rmrr *a)     \
+    {                                                          \
+        if (CHECK(s, a)) {                                     \
+            static gen_helper_opivx *const fns[4] = {          \
+                gen_helper_##NAME##_b,                         \
+                gen_helper_##NAME##_h,                         \
+                gen_helper_##NAME##_w,                         \
+                gen_helper_##NAME##_d,                         \
+            };                                                 \
+            return do_opivx_gvec_shift(s, a, tcg_gen_gvec_##SUF, \
+                                       fns[s->sew]);           \
+        }                                                      \
+        return false;                                          \
+    }
+
+static bool zvbb_vv_check(DisasContext *s, arg_rmrr *a)
+{
+    return opivv_check(s, a) && s->cfg_ptr->ext_zvbb == true;
+}
+
+static bool zvbb_vx_check(DisasContext *s, arg_rmrr *a)
+{
+    return opivx_check(s, a) && s->cfg_ptr->ext_zvbb == true;
+}
+
+/* vrol.v[vx] */
+GEN_OPIVV_GVEC_TRANS_CHECK(vrol_vv, rotlv, zvbb_vv_check)
+GEN_OPIVX_GVEC_SHIFT_TRANS_CHECK(vrol_vx, rotls, zvbb_vx_check)
+
+/* vror.v[vxi] */
+GEN_OPIVV_GVEC_TRANS_CHECK(vror_vv, rotrv, zvbb_vv_check)
+GEN_OPIVX_GVEC_SHIFT_TRANS_CHECK(vror_vx, rotrs, zvbb_vx_check)
+GEN_OPIVI_GVEC_TRANS_CHECK(vror_vi, IMM_TRUNC_SEW, vror_vx, rotri, zvbb_vx_check)
+
+#define GEN_OPIVX_GVEC_TRANS_CHECK(NAME, SUF, CHECK)                     \
+    static bool trans_##NAME(DisasContext *s, arg_rmrr *a)               \
+    {                                                                    \
+        if (CHECK(s, a)) {                                               \
+            static gen_helper_opivx *const fns[4] = {                    \
+                gen_helper_##NAME##_b,                                   \
+                gen_helper_##NAME##_h,                                   \
+                gen_helper_##NAME##_w,                                   \
+                gen_helper_##NAME##_d,                                   \
+            };                                                           \
+            return do_opivx_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
+        }                                                                \
+        return false;                                                    \
+    }
+
+/* vandn.v[vx] */
+GEN_OPIVV_GVEC_TRANS_CHECK(vandn_vv, andc, zvbb_vv_check)
+GEN_OPIVX_GVEC_TRANS_CHECK(vandn_vx, andcs, zvbb_vx_check)
+
+#define GEN_OPIV_TRANS(NAME, CHECK)                                        \
+    static bool trans_##NAME(DisasContext *s, arg_rmr *a)                  \
+    {                                                                      \
+        if (CHECK(s, a)) {                                                 \
+            uint32_t data = 0;                                             \
+            static gen_helper_gvec_3_ptr *const fns[4] = {                 \
+                gen_helper_##NAME##_b,                                     \
+                gen_helper_##NAME##_h,                                     \
+                gen_helper_##NAME##_w,                                     \
+                gen_helper_##NAME##_d,                                     \
+            };                                                             \
+            TCGLabel *over = gen_new_label();                              \
+            tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);     \
+                                                                           \
+            data = FIELD_DP32(data, VDATA, VM, a->vm);                     \
+            data = FIELD_DP32(data, VDATA, LMUL, s->lmul);                 \
+            data = FIELD_DP32(data, VDATA, VTA, s->vta);                   \
+            data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s); \
+            data = FIELD_DP32(data, VDATA, VMA, s->vma);                   \
+            tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),         \
+                               vreg_ofs(s, a->rs2), cpu_env,               \
+                               s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8, \
+                               data, fns[s->sew]);                         \
+            mark_vs_dirty(s);                                              \
+            gen_set_label(over);                                           \
+            return true;                                                   \
+        }                                                                  \
+        return false;                                                      \
+    }
+
+static bool zvbb_opiv_check(DisasContext *s, arg_rmr *a)
+{
+    return s->cfg_ptr->ext_zvbb == true &&
+           require_rvv(s) &&
+           vext_check_isa_ill(s) &&
+           vext_check_ss(s, a->rd, a->rs2, a->vm);
+}
+
+GEN_OPIV_TRANS(vbrev8_v, zvbb_opiv_check)
+GEN_OPIV_TRANS(vrev8_v, zvbb_opiv_check)
+GEN_OPIV_TRANS(vbrev_v, zvbb_opiv_check)
+GEN_OPIV_TRANS(vclz_v, zvbb_opiv_check)
+GEN_OPIV_TRANS(vctz_v, zvbb_opiv_check)
+GEN_OPIV_TRANS(vcpop_v, zvbb_opiv_check)
+
+static bool vwsll_vv_check(DisasContext *s, arg_rmrr *a)
+{
+    return s->cfg_ptr->ext_zvbb && opivv_widen_check(s, a);
+}
+
+static bool vwsll_vx_check(DisasContext *s, arg_rmrr *a)
+{
+    return s->cfg_ptr->ext_zvbb && opivx_widen_check(s, a);
+}
+
+/* OPIVI without GVEC IR */
+#define GEN_OPIVI_WIDEN_TRANS(NAME, IMM_MODE, OPIVX, CHECK)                  \
+    static bool trans_##NAME(DisasContext *s, arg_rmrr *a)                   \
+    {                                                                        \
+        if (CHECK(s, a)) {                                                   \
+            static gen_helper_opivx *const fns[3] = {                        \
+                gen_helper_##OPIVX##_b,                                      \
+                gen_helper_##OPIVX##_h,                                      \
+                gen_helper_##OPIVX##_w,                                      \
+            };                                                               \
+            return opivi_trans(a->rd, a->rs1, a->rs2, a->vm, fns[s->sew], s, \
+                               IMM_MODE);                                    \
+        }                                                                    \
+        return false;                                                        \
+    }
+
+GEN_OPIVV_WIDEN_TRANS(vwsll_vv, vwsll_vv_check)
+GEN_OPIVX_WIDEN_TRANS(vwsll_vx, vwsll_vx_check)
+GEN_OPIVI_WIDEN_TRANS(vwsll_vi, IMM_ZX, vwsll_vx, vwsll_vx_check)
-- 
2.41.0
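An aside on the Zvbb patch above, not part of the series itself: the element-wise semantics of a few of the operations it wires up (vandn, vrev8, vwsll) can be modelled with scalar Python. The function names below are illustrative only; they are not QEMU or RVV APIs, just a sketch of what the DO_ANDN, bswap and DO_SLL callbacks compute per element.

```python
# Scalar reference model of three Zvbb element operations, mirroring the
# DO_ANDN, bswap32 and DO_SLL callbacks used by the helpers in the patch.
# Names are illustrative; they are not QEMU APIs.

def vandn(a: int, b: int, sew: int = 32) -> int:
    """vandn element op: bitwise a & ~b on one SEW-bit element."""
    mask = (1 << sew) - 1
    return a & ~b & mask

def vrev8(x: int, sew: int = 32) -> int:
    """vrev8 element op: reverse the bytes within one SEW-bit element."""
    return int.from_bytes(x.to_bytes(sew // 8, "little"), "big")

def vwsll(a: int, shift: int, sew: int = 8) -> int:
    """vwsll element op: widening shift-left. The result is 2*SEW bits
    wide and the shift amount wraps modulo 2*SEW, matching DO_SLL's
    `M & (sizeof(N) * 8 - 1)` applied on the widened type."""
    wide = 2 * sew
    return (a << (shift & (wide - 1))) & ((1 << wide) - 1)

assert vandn(0xFF00FF00, 0x0F0F0F0F) == 0xF000F000
assert vrev8(0x11223344) == 0x44332211
assert vwsll(0x80, 1) == 0x100  # the shifted-out bit lands in the wide half
```

The widening detail is the one worth noting: unlike a plain vsll, vwsll never discards the bits shifted past SEW, because the destination element is twice as wide.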
From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>

This commit adds support for the Zvkned vector-crypto extension, which
consists of the following instructions:

* vaesef.[vv,vs]
* vaesdf.[vv,vs]
* vaesdm.[vv,vs]
* vaesz.vs
* vaesem.[vv,vs]
* vaeskf1.vi
* vaeskf2.vi

Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.

Co-authored-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
Co-authored-by: William Salmon <will.salmon@codethink.co.uk>
[max.chou@sifive.com: Replaced vstart checking by TCG op]
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
Signed-off-by: William Salmon <will.salmon@codethink.co.uk>
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
[max.chou@sifive.com: Imported aes-round.h and exposed x-zvkned
property]
[max.chou@sifive.com: Fixed endian issues and replaced the vstart & vl
egs checking by helper function]
[max.chou@sifive.com: Replaced bswap32 calls in aes key expanding]
Message-ID: <20230711165917.2629866-10-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_cfg.h                   |   1 +
 target/riscv/helper.h                    |  14 ++
 target/riscv/insn32.decode               |  14 ++
 target/riscv/cpu.c                       |   4 +-
 target/riscv/vcrypto_helper.c            | 202 +++++++++++++++++++++++
 target/riscv/insn_trans/trans_rvvk.c.inc | 147 +++++++++++++++++
 6 files changed, 381 insertions(+), 1 deletion(-)
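Review note, not part of the patch (placed here after the diffstat, where `git am` ignores it): the per-element-group work of vaeskf1.vi is one forward AES-128 key-schedule round. The sketch below models it in self-contained Python under the helper's little-endian word convention; the S-box is generated from the GF(2^8) inverse plus affine map rather than tabulated, and all names are illustrative, not QEMU APIs.

```python
# Reference model of one AES-128 key-schedule round, the operation that
# the vaeskf1_vi helper in this patch applies to each 128-bit element
# group. Illustrative sketch only; names are not QEMU APIs.

def _rotl8(x: int, n: int) -> int:
    return ((x << n) | (x >> (8 - n))) & 0xFF

def make_sbox() -> list:
    """Generate the AES S-box from the GF(2^8) inverse + affine map."""
    sbox = [0] * 256
    p = q = 1
    while True:
        p = (p ^ (p << 1) ^ (0x1B if p & 0x80 else 0)) & 0xFF  # p *= 3
        q = (q ^ (q << 1)) & 0xFF                              # q /= 3
        q = (q ^ (q << 2)) & 0xFF
        q = (q ^ (q << 4)) & 0xFF
        if q & 0x80:
            q ^= 0x09
        sbox[p] = (q ^ _rotl8(q, 1) ^ _rotl8(q, 2) ^
                   _rotl8(q, 3) ^ _rotl8(q, 4)) ^ 0x63         # affine map
        if p == 1:
            break
    sbox[0] = 0x63  # 0 has no inverse and is defined specially
    return sbox

def ror32(x: int, n: int) -> int:
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def aeskf1_round(rk: list, rnd: int, sbox: list) -> list:
    """Given round key rnd-1 as four little-endian 32-bit words, return
    round key rnd: RotWord + SubWord + Rcon on w3, then the XOR chain."""
    tmp = ror32(rk[3], 8)                      # RotWord (LE convention)
    sub = ((sbox[(tmp >> 24) & 0xFF] << 24) |  # SubWord, byte-wise
           (sbox[(tmp >> 16) & 0xFF] << 16) |
           (sbox[(tmp >> 8) & 0xFF] << 8) |
           sbox[tmp & 0xFF])
    w4 = rk[0] ^ sub ^ RCON[rnd - 1]
    w5 = rk[1] ^ w4
    w6 = rk[2] ^ w5
    w7 = rk[3] ^ w6
    return [w4, w5, w6, w7]

# FIPS-197 appendix A.1 test key, stored as little-endian words:
sbox = make_sbox()
key = [0x16157E2B, 0xA6D2AE28, 0x8815F7AB, 0x3C4FCF09]
assert aeskf1_round(key, 1, sbox) == [0x17FEFAA0, 0xB12C5488,
                                      0x3939A323, 0x05766C2A]
```

The ror32-on-a-little-endian-word trick is the same one the helper uses: it is equivalent to FIPS-197's RotWord on big-endian words, which is why the helper can avoid byte swapping.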

diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
     bool ext_zve64d;
     bool ext_zvbb;
     bool ext_zvbc;
+    bool ext_zvkned;
     bool ext_zmmul;
     bool ext_zvfbfmin;
     bool ext_zvfbfwma;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_6(vandn_vx_b, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vandn_vx_h, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vandn_vx_w, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vandn_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_2(egs_check, void, i32, env)
+
+DEF_HELPER_4(vaesef_vv, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesef_vs, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesdf_vv, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesdf_vs, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesem_vv, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesem_vs, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesdm_vv, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesdm_vs, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesz_vs, void, ptr, ptr, env, i32)
+DEF_HELPER_5(vaeskf1_vi, void, ptr, ptr, i32, env, i32)
+DEF_HELPER_5(vaeskf2_vi, void, ptr, ptr, i32, env, i32)
77
index XXXXXXX..XXXXXXX 100644
78
--- a/target/riscv/insn32.decode
79
+++ b/target/riscv/insn32.decode
80
@@ -XXX,XX +XXX,XX @@
81
@r_rm ....... ..... ..... ... ..... ....... %rs2 %rs1 %rm %rd
82
@r2_rm ....... ..... ..... ... ..... ....... %rs1 %rm %rd
83
@r2 ....... ..... ..... ... ..... ....... &r2 %rs1 %rd
84
+@r2_vm_1 ...... . ..... ..... ... ..... ....... &rmr vm=1 %rs2 %rd
85
@r2_nfvm ... ... vm:1 ..... ..... ... ..... ....... &r2nfvm %nf %rs1 %rd
86
@r2_vm ...... vm:1 ..... ..... ... ..... ....... &rmr %rs2 %rd
87
@r1_vm ...... vm:1 ..... ..... ... ..... ....... %rd
88
@@ -XXX,XX +XXX,XX @@ vcpop_v 010010 . ..... 01110 010 ..... 1010111 @r2_vm
89
vwsll_vv 110101 . ..... ..... 000 ..... 1010111 @r_vm
90
vwsll_vx 110101 . ..... ..... 100 ..... 1010111 @r_vm
91
vwsll_vi 110101 . ..... ..... 011 ..... 1010111 @r_vm
92
+
93
+# *** Zvkned vector crypto extension ***
94
+vaesef_vv 101000 1 ..... 00011 010 ..... 1110111 @r2_vm_1
95
+vaesef_vs 101001 1 ..... 00011 010 ..... 1110111 @r2_vm_1
96
+vaesdf_vv 101000 1 ..... 00001 010 ..... 1110111 @r2_vm_1
97
+vaesdf_vs 101001 1 ..... 00001 010 ..... 1110111 @r2_vm_1
98
+vaesem_vv 101000 1 ..... 00010 010 ..... 1110111 @r2_vm_1
99
+vaesem_vs 101001 1 ..... 00010 010 ..... 1110111 @r2_vm_1
100
+vaesdm_vv 101000 1 ..... 00000 010 ..... 1110111 @r2_vm_1
101
+vaesdm_vs 101001 1 ..... 00000 010 ..... 1110111 @r2_vm_1
102
+vaesz_vs 101001 1 ..... 00111 010 ..... 1110111 @r2_vm_1
103
+vaeskf1_vi 100010 1 ..... ..... 010 ..... 1110111 @r_vm_1
104
+vaeskf2_vi 101010 1 ..... ..... 010 ..... 1110111 @r_vm_1
105
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/target/riscv/cpu.c
108
+++ b/target/riscv/cpu.c
109
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
110
ISA_EXT_DATA_ENTRY(zvfbfwma, PRIV_VERSION_1_12_0, ext_zvfbfwma),
111
ISA_EXT_DATA_ENTRY(zvfh, PRIV_VERSION_1_12_0, ext_zvfh),
112
ISA_EXT_DATA_ENTRY(zvfhmin, PRIV_VERSION_1_12_0, ext_zvfhmin),
113
+ ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
114
ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
115
ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
116
ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
117
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
118
* In principle Zve*x would also suffice here, were they supported
119
* in qemu
120
*/
121
- if (cpu->cfg.ext_zvbb && !cpu->cfg.ext_zve32f) {
122
+ if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned) && !cpu->cfg.ext_zve32f) {
123
error_setg(errp,
124
"Vector crypto extensions require V or Zve* extensions");
125
return;
126
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
127
/* Vector cryptography extensions */
128
DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
129
DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
130
+ DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
131
132
DEFINE_PROP_END_OF_LIST(),
133
};
134
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
135
index XXXXXXX..XXXXXXX 100644
136
--- a/target/riscv/vcrypto_helper.c
137
+++ b/target/riscv/vcrypto_helper.c
138
@@ -XXX,XX +XXX,XX @@
139
#include "qemu/bitops.h"
140
#include "qemu/bswap.h"
141
#include "cpu.h"
142
+#include "crypto/aes.h"
143
+#include "crypto/aes-round.h"
144
#include "exec/memop.h"
145
#include "exec/exec-all.h"
146
#include "exec/helper-proto.h"
147
@@ -XXX,XX +XXX,XX @@ RVVCALL(OPIVX2, vwsll_vx_w, WOP_UUU_W, H8, H4, DO_SLL)
148
GEN_VEXT_VX(vwsll_vx_b, 2)
149
GEN_VEXT_VX(vwsll_vx_h, 4)
150
GEN_VEXT_VX(vwsll_vx_w, 8)
151
+
152
+void HELPER(egs_check)(uint32_t egs, CPURISCVState *env)
153
+{
154
+ uint32_t vl = env->vl;
155
+ uint32_t vstart = env->vstart;
156
+
157
+ if (vl % egs != 0 || vstart % egs != 0) {
158
+ riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
159
+ }
160
+}
161
+
162
+static inline void xor_round_key(AESState *round_state, AESState *round_key)
163
+{
164
+ round_state->v = round_state->v ^ round_key->v;
165
+}
166
+
167
+#define GEN_ZVKNED_HELPER_VV(NAME, ...) \
168
+ void HELPER(NAME)(void *vd, void *vs2, CPURISCVState *env, \
169
+ uint32_t desc) \
170
+ { \
171
+ uint32_t vl = env->vl; \
172
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4); \
173
+ uint32_t vta = vext_vta(desc); \
174
+ \
175
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) { \
176
+ AESState round_key; \
177
+ round_key.d[0] = *((uint64_t *)vs2 + H8(i * 2 + 0)); \
178
+ round_key.d[1] = *((uint64_t *)vs2 + H8(i * 2 + 1)); \
179
+ AESState round_state; \
180
+ round_state.d[0] = *((uint64_t *)vd + H8(i * 2 + 0)); \
181
+ round_state.d[1] = *((uint64_t *)vd + H8(i * 2 + 1)); \
182
+ __VA_ARGS__; \
183
+ *((uint64_t *)vd + H8(i * 2 + 0)) = round_state.d[0]; \
184
+ *((uint64_t *)vd + H8(i * 2 + 1)) = round_state.d[1]; \
185
+ } \
186
+ env->vstart = 0; \
187
+ /* set tail elements to 1s */ \
188
+ vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4); \
189
+ }
190
+
191
+#define GEN_ZVKNED_HELPER_VS(NAME, ...) \
192
+ void HELPER(NAME)(void *vd, void *vs2, CPURISCVState *env, \
193
+ uint32_t desc) \
194
+ { \
195
+ uint32_t vl = env->vl; \
196
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4); \
197
+ uint32_t vta = vext_vta(desc); \
198
+ \
199
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) { \
200
+ AESState round_key; \
201
+ round_key.d[0] = *((uint64_t *)vs2 + H8(0)); \
202
+ round_key.d[1] = *((uint64_t *)vs2 + H8(1)); \
203
+ AESState round_state; \
204
+ round_state.d[0] = *((uint64_t *)vd + H8(i * 2 + 0)); \
205
+ round_state.d[1] = *((uint64_t *)vd + H8(i * 2 + 1)); \
206
+ __VA_ARGS__; \
207
+ *((uint64_t *)vd + H8(i * 2 + 0)) = round_state.d[0]; \
208
+ *((uint64_t *)vd + H8(i * 2 + 1)) = round_state.d[1]; \
209
+ } \
210
+ env->vstart = 0; \
211
+ /* set tail elements to 1s */ \
212
+ vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4); \
213
+ }
214
+
215
+GEN_ZVKNED_HELPER_VV(vaesef_vv, aesenc_SB_SR_AK(&round_state,
216
+ &round_state,
217
+ &round_key,
218
+ false);)
219
+GEN_ZVKNED_HELPER_VS(vaesef_vs, aesenc_SB_SR_AK(&round_state,
220
+ &round_state,
221
+ &round_key,
222
+ false);)
223
+GEN_ZVKNED_HELPER_VV(vaesdf_vv, aesdec_ISB_ISR_AK(&round_state,
224
+ &round_state,
225
+ &round_key,
226
+ false);)
227
+GEN_ZVKNED_HELPER_VS(vaesdf_vs, aesdec_ISB_ISR_AK(&round_state,
228
+ &round_state,
229
+ &round_key,
230
+ false);)
231
+GEN_ZVKNED_HELPER_VV(vaesem_vv, aesenc_SB_SR_MC_AK(&round_state,
232
+ &round_state,
233
+ &round_key,
234
+ false);)
235
+GEN_ZVKNED_HELPER_VS(vaesem_vs, aesenc_SB_SR_MC_AK(&round_state,
236
+ &round_state,
237
+ &round_key,
238
+ false);)
239
+GEN_ZVKNED_HELPER_VV(vaesdm_vv, aesdec_ISB_ISR_AK_IMC(&round_state,
240
+ &round_state,
241
+ &round_key,
242
+ false);)
243
+GEN_ZVKNED_HELPER_VS(vaesdm_vs, aesdec_ISB_ISR_AK_IMC(&round_state,
244
+ &round_state,
245
+ &round_key,
246
+ false);)
247
+GEN_ZVKNED_HELPER_VS(vaesz_vs, xor_round_key(&round_state, &round_key);)
248
+
249
+void HELPER(vaeskf1_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
250
+ CPURISCVState *env, uint32_t desc)
251
+{
252
+ uint32_t *vd = vd_vptr;
253
+ uint32_t *vs2 = vs2_vptr;
254
+ uint32_t vl = env->vl;
255
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4);
256
+ uint32_t vta = vext_vta(desc);
257
+
258
+ uimm &= 0b1111;
259
+ if (uimm > 10 || uimm == 0) {
260
+ uimm ^= 0b1000;
261
+ }
262
+
263
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
264
+ uint32_t rk[8], tmp;
265
+ static const uint32_t rcon[] = {
266
+ 0x00000001, 0x00000002, 0x00000004, 0x00000008, 0x00000010,
267
+ 0x00000020, 0x00000040, 0x00000080, 0x0000001B, 0x00000036,
268
+ };
269
+
270
+ rk[0] = vs2[i * 4 + H4(0)];
271
+ rk[1] = vs2[i * 4 + H4(1)];
272
+ rk[2] = vs2[i * 4 + H4(2)];
273
+ rk[3] = vs2[i * 4 + H4(3)];
274
+ tmp = ror32(rk[3], 8);
275
+
276
+ rk[4] = rk[0] ^ (((uint32_t)AES_sbox[(tmp >> 24) & 0xff] << 24) |
277
+ ((uint32_t)AES_sbox[(tmp >> 16) & 0xff] << 16) |
278
+ ((uint32_t)AES_sbox[(tmp >> 8) & 0xff] << 8) |
279
+ ((uint32_t)AES_sbox[(tmp >> 0) & 0xff] << 0))
280
+ ^ rcon[uimm - 1];
281
+ rk[5] = rk[1] ^ rk[4];
282
+ rk[6] = rk[2] ^ rk[5];
283
+ rk[7] = rk[3] ^ rk[6];
284
+
285
+ vd[i * 4 + H4(0)] = rk[4];
286
+ vd[i * 4 + H4(1)] = rk[5];
287
+ vd[i * 4 + H4(2)] = rk[6];
288
+ vd[i * 4 + H4(3)] = rk[7];
289
+ }
290
+ env->vstart = 0;
291
+ /* set tail elements to 1s */
292
+ vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4);
293
+}
294
+
295
+void HELPER(vaeskf2_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
296
+ CPURISCVState *env, uint32_t desc)
297
+{
298
+ uint32_t *vd = vd_vptr;
299
+ uint32_t *vs2 = vs2_vptr;
300
+ uint32_t vl = env->vl;
301
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4);
302
+ uint32_t vta = vext_vta(desc);
303
+
304
+ uimm &= 0b1111;
305
+ if (uimm > 14 || uimm < 2) {
306
+ uimm ^= 0b1000;
307
+ }
308
+
309
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
310
+ uint32_t rk[12], tmp;
311
+ static const uint32_t rcon[] = {
312
+ 0x00000001, 0x00000002, 0x00000004, 0x00000008, 0x00000010,
313
+ 0x00000020, 0x00000040, 0x00000080, 0x0000001B, 0x00000036,
314
+ };
315
+
316
+ rk[0] = vd[i * 4 + H4(0)];
317
+ rk[1] = vd[i * 4 + H4(1)];
318
+ rk[2] = vd[i * 4 + H4(2)];
319
+ rk[3] = vd[i * 4 + H4(3)];
320
+ rk[4] = vs2[i * 4 + H4(0)];
321
+ rk[5] = vs2[i * 4 + H4(1)];
322
+ rk[6] = vs2[i * 4 + H4(2)];
323
+ rk[7] = vs2[i * 4 + H4(3)];
324
+
325
+ if (uimm % 2 == 0) {
326
+ tmp = ror32(rk[7], 8);
327
+ rk[8] = rk[0] ^ (((uint32_t)AES_sbox[(tmp >> 24) & 0xff] << 24) |
328
+ ((uint32_t)AES_sbox[(tmp >> 16) & 0xff] << 16) |
329
+ ((uint32_t)AES_sbox[(tmp >> 8) & 0xff] << 8) |
330
+ ((uint32_t)AES_sbox[(tmp >> 0) & 0xff] << 0))
331
+ ^ rcon[(uimm - 1) / 2];
332
+ } else {
333
+ rk[8] = rk[0] ^ (((uint32_t)AES_sbox[(rk[7] >> 24) & 0xff] << 24) |
334
+ ((uint32_t)AES_sbox[(rk[7] >> 16) & 0xff] << 16) |
335
+ ((uint32_t)AES_sbox[(rk[7] >> 8) & 0xff] << 8) |
336
+ ((uint32_t)AES_sbox[(rk[7] >> 0) & 0xff] << 0));
337
+ }
338
+ rk[9] = rk[1] ^ rk[8];
339
+ rk[10] = rk[2] ^ rk[9];
340
+ rk[11] = rk[3] ^ rk[10];
341
+
342
+ vd[i * 4 + H4(0)] = rk[8];
343
+ vd[i * 4 + H4(1)] = rk[9];
344
+ vd[i * 4 + H4(2)] = rk[10];
345
+ vd[i * 4 + H4(3)] = rk[11];
346
+ }
347
+ env->vstart = 0;
348
+ /* set tail elements to 1s */
349
+ vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4);
350
+}
351
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
352
index XXXXXXX..XXXXXXX 100644
353
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
354
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
355
@@ -XXX,XX +XXX,XX @@ static bool vwsll_vx_check(DisasContext *s, arg_rmrr *a)
356
GEN_OPIVV_WIDEN_TRANS(vwsll_vv, vwsll_vv_check)
357
GEN_OPIVX_WIDEN_TRANS(vwsll_vx, vwsll_vx_check)
358
GEN_OPIVI_WIDEN_TRANS(vwsll_vi, IMM_ZX, vwsll_vx, vwsll_vx_check)
359
+
360
+/*
361
+ * Zvkned
362
+ */
363
+
364
+#define ZVKNED_EGS 4
365
+
366
+#define GEN_V_UNMASKED_TRANS(NAME, CHECK, EGS) \
367
+ static bool trans_##NAME(DisasContext *s, arg_##NAME *a) \
368
+ { \
369
+ if (CHECK(s, a)) { \
370
+ TCGv_ptr rd_v, rs2_v; \
371
+ TCGv_i32 desc, egs; \
372
+ uint32_t data = 0; \
373
+ TCGLabel *over = gen_new_label(); \
374
+ \
375
+ if (!s->vstart_eq_zero || !s->vl_eq_vlmax) { \
376
+ /* save opcode for unwinding in case we throw an exception */ \
377
+ decode_save_opc(s); \
378
+ egs = tcg_constant_i32(EGS); \
379
+ gen_helper_egs_check(egs, cpu_env); \
380
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
381
+ } \
382
+ \
383
+ data = FIELD_DP32(data, VDATA, VM, a->vm); \
384
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
385
+ data = FIELD_DP32(data, VDATA, VTA, s->vta); \
386
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s); \
387
+ data = FIELD_DP32(data, VDATA, VMA, s->vma); \
388
+ rd_v = tcg_temp_new_ptr(); \
389
+ rs2_v = tcg_temp_new_ptr(); \
390
+ desc = tcg_constant_i32( \
391
+ simd_desc(s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8, data)); \
392
+ tcg_gen_addi_ptr(rd_v, cpu_env, vreg_ofs(s, a->rd)); \
393
+ tcg_gen_addi_ptr(rs2_v, cpu_env, vreg_ofs(s, a->rs2)); \
394
+ gen_helper_##NAME(rd_v, rs2_v, cpu_env, desc); \
395
+ mark_vs_dirty(s); \
396
+ gen_set_label(over); \
397
+ return true; \
398
+ } \
399
+ return false; \
400
+ }
401
+
402
+static bool vaes_check_vv(DisasContext *s, arg_rmr *a)
403
+{
404
+ int egw_bytes = ZVKNED_EGS << s->sew;
405
+ return s->cfg_ptr->ext_zvkned == true &&
406
+ require_rvv(s) &&
407
+ vext_check_isa_ill(s) &&
408
+ MAXSZ(s) >= egw_bytes &&
409
+ require_align(a->rd, s->lmul) &&
410
+ require_align(a->rs2, s->lmul) &&
411
+ s->sew == MO_32;
412
+}
413
+
414
+static bool vaes_check_overlap(DisasContext *s, int vd, int vs2)
415
+{
416
+ int8_t op_size = s->lmul <= 0 ? 1 : 1 << s->lmul;
417
+ return !is_overlapped(vd, op_size, vs2, 1);
418
+}
419
+
420
+static bool vaes_check_vs(DisasContext *s, arg_rmr *a)
421
+{
422
+ int egw_bytes = ZVKNED_EGS << s->sew;
423
+ return vaes_check_overlap(s, a->rd, a->rs2) &&
424
+ MAXSZ(s) >= egw_bytes &&
425
+ s->cfg_ptr->ext_zvkned == true &&
426
+ require_rvv(s) &&
427
+ vext_check_isa_ill(s) &&
428
+ require_align(a->rd, s->lmul) &&
429
+ s->sew == MO_32;
430
+}
431
+
432
+GEN_V_UNMASKED_TRANS(vaesef_vv, vaes_check_vv, ZVKNED_EGS)
433
+GEN_V_UNMASKED_TRANS(vaesef_vs, vaes_check_vs, ZVKNED_EGS)
434
+GEN_V_UNMASKED_TRANS(vaesdf_vv, vaes_check_vv, ZVKNED_EGS)
435
+GEN_V_UNMASKED_TRANS(vaesdf_vs, vaes_check_vs, ZVKNED_EGS)
436
+GEN_V_UNMASKED_TRANS(vaesdm_vv, vaes_check_vv, ZVKNED_EGS)
437
+GEN_V_UNMASKED_TRANS(vaesdm_vs, vaes_check_vs, ZVKNED_EGS)
438
+GEN_V_UNMASKED_TRANS(vaesz_vs, vaes_check_vs, ZVKNED_EGS)
439
+GEN_V_UNMASKED_TRANS(vaesem_vv, vaes_check_vv, ZVKNED_EGS)
440
+GEN_V_UNMASKED_TRANS(vaesem_vs, vaes_check_vs, ZVKNED_EGS)
441
+
442
+#define GEN_VI_UNMASKED_TRANS(NAME, CHECK, EGS) \
443
+ static bool trans_##NAME(DisasContext *s, arg_##NAME *a) \
444
+ { \
445
+ if (CHECK(s, a)) { \
446
+ TCGv_ptr rd_v, rs2_v; \
447
+ TCGv_i32 uimm_v, desc, egs; \
448
+ uint32_t data = 0; \
449
+ TCGLabel *over = gen_new_label(); \
450
+ \
451
+ if (!s->vstart_eq_zero || !s->vl_eq_vlmax) { \
452
+ /* save opcode for unwinding in case we throw an exception */ \
453
+ decode_save_opc(s); \
454
+ egs = tcg_constant_i32(EGS); \
455
+ gen_helper_egs_check(egs, cpu_env); \
456
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
457
+ } \
458
+ \
459
+ data = FIELD_DP32(data, VDATA, VM, a->vm); \
460
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
461
+ data = FIELD_DP32(data, VDATA, VTA, s->vta); \
462
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s); \
463
+ data = FIELD_DP32(data, VDATA, VMA, s->vma); \
464
+ \
465
+ rd_v = tcg_temp_new_ptr(); \
466
+ rs2_v = tcg_temp_new_ptr(); \
467
+ uimm_v = tcg_constant_i32(a->rs1); \
468
+ desc = tcg_constant_i32( \
469
+ simd_desc(s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8, data)); \
470
+ tcg_gen_addi_ptr(rd_v, cpu_env, vreg_ofs(s, a->rd)); \
471
+ tcg_gen_addi_ptr(rs2_v, cpu_env, vreg_ofs(s, a->rs2)); \
472
+ gen_helper_##NAME(rd_v, rs2_v, uimm_v, cpu_env, desc); \
473
+ mark_vs_dirty(s); \
474
+ gen_set_label(over); \
475
+ return true; \
476
+ } \
477
+ return false; \
478
+ }
479
+
480
+static bool vaeskf1_check(DisasContext *s, arg_vaeskf1_vi *a)
481
+{
482
+ int egw_bytes = ZVKNED_EGS << s->sew;
483
+ return s->cfg_ptr->ext_zvkned == true &&
484
+ require_rvv(s) &&
485
+ vext_check_isa_ill(s) &&
486
+ MAXSZ(s) >= egw_bytes &&
487
+ s->sew == MO_32 &&
488
+ require_align(a->rd, s->lmul) &&
489
+ require_align(a->rs2, s->lmul);
490
+}
491
+
492
+static bool vaeskf2_check(DisasContext *s, arg_vaeskf2_vi *a)
493
+{
494
+ int egw_bytes = ZVKNED_EGS << s->sew;
495
+ return s->cfg_ptr->ext_zvkned == true &&
496
+ require_rvv(s) &&
497
+ vext_check_isa_ill(s) &&
498
+ MAXSZ(s) >= egw_bytes &&
499
+ s->sew == MO_32 &&
500
+ require_align(a->rd, s->lmul) &&
501
+ require_align(a->rs2, s->lmul);
502
+}
503
+
504
+GEN_VI_UNMASKED_TRANS(vaeskf1_vi, vaeskf1_check, ZVKNED_EGS)
505
+GEN_VI_UNMASKED_TRANS(vaeskf2_vi, vaeskf2_check, ZVKNED_EGS)
506
--
507
2.41.0
diff view generated by jsdifflib
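The odd-numbered round of the vaeskf2 key schedule above computes rk[8] = rk[0] ^ SubWord(rk[7]), with no rotation and no round constant. That branch can be cross-checked with a plain-Python sketch. This is an illustrative model only, not QEMU code: gf_mul, sbox, sub_word and vaeskf2_odd_round are names local to this example, and the S-box is derived from its GF(2^8) definition instead of the AES_sbox table used by the helper.

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def rotl8(v, n):
    return ((v << n) | (v >> (8 - n))) & 0xFF

def sbox(x):
    """AES S-box: multiplicative inverse followed by the affine transform."""
    inv = 0 if x == 0 else next(b for b in range(1, 256) if gf_mul(x, b) == 1)
    return (inv ^ rotl8(inv, 1) ^ rotl8(inv, 2) ^
            rotl8(inv, 3) ^ rotl8(inv, 4) ^ 0x63)

def sub_word(w):
    """Apply the S-box to each byte of a 32-bit word, as the helper does."""
    return ((sbox((w >> 24) & 0xFF) << 24) | (sbox((w >> 16) & 0xFF) << 16) |
            (sbox((w >> 8) & 0xFF) << 8) | sbox(w & 0xFF))

def vaeskf2_odd_round(rk):
    """Mirror of the 'else' branch above: rk is the 8 previous round-key words."""
    rk8 = rk[0] ^ sub_word(rk[7])
    rk9 = rk[1] ^ rk8
    rk10 = rk[2] ^ rk9
    rk11 = rk[3] ^ rk10
    return [rk8, rk9, rk10, rk11]
```

Even-numbered rounds differ only in that rk[7] is first rotated right by 8 bits and the result is additionally XORed with rcon[(uimm - 1) / 2], as the C code shows.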
From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>

This commit adds support for the Zvknh vector-crypto extension, which
consists of the following instructions:

* vsha2ms.vv
* vsha2c[hl].vv

Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.

Co-authored-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Co-authored-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
[max.chou@sifive.com: Replaced vstart checking by TCG op]
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
[max.chou@sifive.com: Exposed x-zvknha & x-zvknhb properties]
[max.chou@sifive.com: Replaced SEW selection to happen during
translation]
Message-ID: <20230711165917.2629866-11-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_cfg.h                   |   2 +
 target/riscv/helper.h                    |   6 +
 target/riscv/insn32.decode               |   5 +
 target/riscv/cpu.c                       |  13 +-
 target/riscv/vcrypto_helper.c            | 238 +++++++++++++++++++++++
 target/riscv/insn_trans/trans_rvvk.c.inc | 129 ++++++++++++
 6 files changed, 390 insertions(+), 3 deletions(-)

diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
     bool ext_zvbb;
     bool ext_zvbc;
     bool ext_zvkned;
+    bool ext_zvknha;
+    bool ext_zvknhb;
     bool ext_zmmul;
     bool ext_zvfbfmin;
     bool ext_zvfbfwma;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_4(vaesdm_vs, void, ptr, ptr, env, i32)
 DEF_HELPER_4(vaesz_vs, void, ptr, ptr, env, i32)
 DEF_HELPER_5(vaeskf1_vi, void, ptr, ptr, i32, env, i32)
 DEF_HELPER_5(vaeskf2_vi, void, ptr, ptr, i32, env, i32)
+
+DEF_HELPER_5(vsha2ms_vv, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vsha2ch32_vv, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vsha2ch64_vv, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vsha2cl32_vv, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vsha2cl64_vv, void, ptr, ptr, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -XXX,XX +XXX,XX @@ vaesdm_vs   101001 1 ..... 00000 010 ..... 1110111 @r2_vm_1
 vaesz_vs    101001 1 ..... 00111 010 ..... 1110111 @r2_vm_1
 vaeskf1_vi  100010 1 ..... ..... 010 ..... 1110111 @r_vm_1
 vaeskf2_vi  101010 1 ..... ..... 010 ..... 1110111 @r_vm_1
+
+# *** Zvknh vector crypto extension ***
+vsha2ms_vv  101101 1 ..... ..... 010 ..... 1110111 @r_vm_1
+vsha2ch_vv  101110 1 ..... ..... 010 ..... 1110111 @r_vm_1
+vsha2cl_vv  101111 1 ..... ..... 010 ..... 1110111 @r_vm_1
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
     ISA_EXT_DATA_ENTRY(zvfh, PRIV_VERSION_1_12_0, ext_zvfh),
     ISA_EXT_DATA_ENTRY(zvfhmin, PRIV_VERSION_1_12_0, ext_zvfhmin),
     ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
+    ISA_EXT_DATA_ENTRY(zvknha, PRIV_VERSION_1_12_0, ext_zvknha),
+    ISA_EXT_DATA_ENTRY(zvknhb, PRIV_VERSION_1_12_0, ext_zvknhb),
     ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
     ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
     ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
      * In principle Zve*x would also suffice here, were they supported
      * in qemu
      */
-    if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned) && !cpu->cfg.ext_zve32f) {
+    if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha) &&
+        !cpu->cfg.ext_zve32f) {
         error_setg(errp,
                    "Vector crypto extensions require V or Zve* extensions");
         return;
     }
 
-    if (cpu->cfg.ext_zvbc && !cpu->cfg.ext_zve64f) {
-        error_setg(errp, "Zvbc extension requires V or Zve64{f,d} extensions");
+    if ((cpu->cfg.ext_zvbc || cpu->cfg.ext_zvknhb) && !cpu->cfg.ext_zve64f) {
+        error_setg(
+            errp,
+            "Zvbc and Zvknhb extensions require V or Zve64{f,d} extensions");
         return;
     }
 
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
     DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
     DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
     DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
+    DEFINE_PROP_BOOL("x-zvknha", RISCVCPU, cfg.ext_zvknha, false),
+    DEFINE_PROP_BOOL("x-zvknhb", RISCVCPU, cfg.ext_zvknhb, false),
 
     DEFINE_PROP_END_OF_LIST(),
 };
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/vcrypto_helper.c
+++ b/target/riscv/vcrypto_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(vaeskf2_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
     /* set tail elements to 1s */
     vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4);
 }
+
+static inline uint32_t sig0_sha256(uint32_t x)
+{
+    return ror32(x, 7) ^ ror32(x, 18) ^ (x >> 3);
+}
+
+static inline uint32_t sig1_sha256(uint32_t x)
+{
+    return ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10);
+}
+
+static inline uint64_t sig0_sha512(uint64_t x)
+{
+    return ror64(x, 1) ^ ror64(x, 8) ^ (x >> 7);
+}
+
+static inline uint64_t sig1_sha512(uint64_t x)
+{
+    return ror64(x, 19) ^ ror64(x, 61) ^ (x >> 6);
+}
+
+static inline void vsha2ms_e32(uint32_t *vd, uint32_t *vs1, uint32_t *vs2)
+{
+    uint32_t res[4];
+    res[0] = sig1_sha256(vs1[H4(2)]) + vs2[H4(1)] + sig0_sha256(vd[H4(1)]) +
+             vd[H4(0)];
+    res[1] = sig1_sha256(vs1[H4(3)]) + vs2[H4(2)] + sig0_sha256(vd[H4(2)]) +
+             vd[H4(1)];
+    res[2] =
+        sig1_sha256(res[0]) + vs2[H4(3)] + sig0_sha256(vd[H4(3)]) + vd[H4(2)];
+    res[3] =
+        sig1_sha256(res[1]) + vs1[H4(0)] + sig0_sha256(vs2[H4(0)]) + vd[H4(3)];
+    vd[H4(3)] = res[3];
+    vd[H4(2)] = res[2];
+    vd[H4(1)] = res[1];
+    vd[H4(0)] = res[0];
+}
+
+static inline void vsha2ms_e64(uint64_t *vd, uint64_t *vs1, uint64_t *vs2)
+{
+    uint64_t res[4];
+    res[0] = sig1_sha512(vs1[2]) + vs2[1] + sig0_sha512(vd[1]) + vd[0];
+    res[1] = sig1_sha512(vs1[3]) + vs2[2] + sig0_sha512(vd[2]) + vd[1];
+    res[2] = sig1_sha512(res[0]) + vs2[3] + sig0_sha512(vd[3]) + vd[2];
+    res[3] = sig1_sha512(res[1]) + vs1[0] + sig0_sha512(vs2[0]) + vd[3];
+    vd[3] = res[3];
+    vd[2] = res[2];
+    vd[1] = res[1];
+    vd[0] = res[0];
+}
+
+void HELPER(vsha2ms_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
+                        uint32_t desc)
+{
+    uint32_t sew = FIELD_EX64(env->vtype, VTYPE, VSEW);
+    uint32_t esz = sew == MO_32 ? 4 : 8;
+    uint32_t total_elems;
+    uint32_t vta = vext_vta(desc);
+
+    for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+        if (sew == MO_32) {
+            vsha2ms_e32(((uint32_t *)vd) + i * 4, ((uint32_t *)vs1) + i * 4,
+                        ((uint32_t *)vs2) + i * 4);
+        } else {
+            /* If not 32 then SEW should be 64 */
+            vsha2ms_e64(((uint64_t *)vd) + i * 4, ((uint64_t *)vs1) + i * 4,
+                        ((uint64_t *)vs2) + i * 4);
+        }
+    }
+    /* set tail elements to 1s */
+    total_elems = vext_get_total_elems(env, desc, esz);
+    vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
+    env->vstart = 0;
+}
+
+static inline uint64_t sum0_64(uint64_t x)
+{
+    return ror64(x, 28) ^ ror64(x, 34) ^ ror64(x, 39);
+}
+
+static inline uint32_t sum0_32(uint32_t x)
+{
+    return ror32(x, 2) ^ ror32(x, 13) ^ ror32(x, 22);
+}
+
+static inline uint64_t sum1_64(uint64_t x)
+{
+    return ror64(x, 14) ^ ror64(x, 18) ^ ror64(x, 41);
+}
+
+static inline uint32_t sum1_32(uint32_t x)
+{
+    return ror32(x, 6) ^ ror32(x, 11) ^ ror32(x, 25);
+}
+
+#define ch(x, y, z) ((x & y) ^ ((~x) & z))
+
+#define maj(x, y, z) ((x & y) ^ (x & z) ^ (y & z))
+
+static void vsha2c_64(uint64_t *vs2, uint64_t *vd, uint64_t *vs1)
+{
+    uint64_t a = vs2[3], b = vs2[2], e = vs2[1], f = vs2[0];
+    uint64_t c = vd[3], d = vd[2], g = vd[1], h = vd[0];
+    uint64_t W0 = vs1[0], W1 = vs1[1];
+    uint64_t T1 = h + sum1_64(e) + ch(e, f, g) + W0;
+    uint64_t T2 = sum0_64(a) + maj(a, b, c);
+
+    h = g;
+    g = f;
+    f = e;
+    e = d + T1;
+    d = c;
+    c = b;
+    b = a;
+    a = T1 + T2;
+
+    T1 = h + sum1_64(e) + ch(e, f, g) + W1;
+    T2 = sum0_64(a) + maj(a, b, c);
+    h = g;
+    g = f;
+    f = e;
+    e = d + T1;
+    d = c;
+    c = b;
+    b = a;
+    a = T1 + T2;
+
+    vd[0] = f;
+    vd[1] = e;
+    vd[2] = b;
+    vd[3] = a;
+}
+
+static void vsha2c_32(uint32_t *vs2, uint32_t *vd, uint32_t *vs1)
+{
+    uint32_t a = vs2[H4(3)], b = vs2[H4(2)], e = vs2[H4(1)], f = vs2[H4(0)];
+    uint32_t c = vd[H4(3)], d = vd[H4(2)], g = vd[H4(1)], h = vd[H4(0)];
+    uint32_t W0 = vs1[H4(0)], W1 = vs1[H4(1)];
+    uint32_t T1 = h + sum1_32(e) + ch(e, f, g) + W0;
+    uint32_t T2 = sum0_32(a) + maj(a, b, c);
+
+    h = g;
+    g = f;
+    f = e;
+    e = d + T1;
+    d = c;
+    c = b;
+    b = a;
+    a = T1 + T2;
+
+    T1 = h + sum1_32(e) + ch(e, f, g) + W1;
+    T2 = sum0_32(a) + maj(a, b, c);
+    h = g;
+    g = f;
+    f = e;
+    e = d + T1;
+    d = c;
+    c = b;
+    b = a;
+    a = T1 + T2;
+
+    vd[H4(0)] = f;
+    vd[H4(1)] = e;
+    vd[H4(2)] = b;
+    vd[H4(3)] = a;
+}
+
+void HELPER(vsha2ch32_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
+                          uint32_t desc)
+{
+    const uint32_t esz = 4;
+    uint32_t total_elems;
+    uint32_t vta = vext_vta(desc);
+
+    for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+        vsha2c_32(((uint32_t *)vs2) + 4 * i, ((uint32_t *)vd) + 4 * i,
+                  ((uint32_t *)vs1) + 4 * i + 2);
+    }
+
+    /* set tail elements to 1s */
+    total_elems = vext_get_total_elems(env, desc, esz);
+    vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
+    env->vstart = 0;
+}
+
+void HELPER(vsha2ch64_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
+                          uint32_t desc)
+{
+    const uint32_t esz = 8;
+    uint32_t total_elems;
+    uint32_t vta = vext_vta(desc);
+
+    for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+        vsha2c_64(((uint64_t *)vs2) + 4 * i, ((uint64_t *)vd) + 4 * i,
+                  ((uint64_t *)vs1) + 4 * i + 2);
+    }
+
+    /* set tail elements to 1s */
+    total_elems = vext_get_total_elems(env, desc, esz);
+    vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
+    env->vstart = 0;
+}
+
+void HELPER(vsha2cl32_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
+                          uint32_t desc)
+{
+    const uint32_t esz = 4;
+    uint32_t total_elems;
+    uint32_t vta = vext_vta(desc);
+
+    for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+        vsha2c_32(((uint32_t *)vs2) + 4 * i, ((uint32_t *)vd) + 4 * i,
+                  (((uint32_t *)vs1) + 4 * i));
+    }
+
+    /* set tail elements to 1s */
+    total_elems = vext_get_total_elems(env, desc, esz);
+    vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
+    env->vstart = 0;
+}
+
+void HELPER(vsha2cl64_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
+                          uint32_t desc)
+{
+    uint32_t esz = 8;
+    uint32_t total_elems;
+    uint32_t vta = vext_vta(desc);
+
+    for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+        vsha2c_64(((uint64_t *)vs2) + 4 * i, ((uint64_t *)vd) + 4 * i,
+                  (((uint64_t *)vs1) + 4 * i));
+    }
+
+    /* set tail elements to 1s */
+    total_elems = vext_get_total_elems(env, desc, esz);
+    vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
+    env->vstart = 0;
+}
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -XXX,XX +XXX,XX @@ static bool vaeskf2_check(DisasContext *s, arg_vaeskf2_vi *a)
 
 GEN_VI_UNMASKED_TRANS(vaeskf1_vi, vaeskf1_check, ZVKNED_EGS)
 GEN_VI_UNMASKED_TRANS(vaeskf2_vi, vaeskf2_check, ZVKNED_EGS)
+
+/*
+ * Zvknh
+ */
+
+#define ZVKNH_EGS 4
+
+#define GEN_VV_UNMASKED_TRANS(NAME, CHECK, EGS)                               \
+    static bool trans_##NAME(DisasContext *s, arg_rmrr *a)                    \
+    {                                                                         \
+        if (CHECK(s, a)) {                                                    \
+            uint32_t data = 0;                                                \
+            TCGLabel *over = gen_new_label();                                 \
+            TCGv_i32 egs;                                                     \
+                                                                              \
+            if (!s->vstart_eq_zero || !s->vl_eq_vlmax) {                      \
+                /* save opcode for unwinding in case we throw an exception */ \
+                decode_save_opc(s);                                           \
+                egs = tcg_constant_i32(EGS);                                  \
+                gen_helper_egs_check(egs, cpu_env);                           \
+                tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);    \
+            }                                                                 \
+                                                                              \
+            data = FIELD_DP32(data, VDATA, VM, a->vm);                        \
+            data = FIELD_DP32(data, VDATA, LMUL, s->lmul);                    \
+            data = FIELD_DP32(data, VDATA, VTA, s->vta);                      \
+            data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);    \
+            data = FIELD_DP32(data, VDATA, VMA, s->vma);                      \
+                                                                              \
+            tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),       \
+                               vreg_ofs(s, a->rs2), cpu_env,                  \
+                               s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8,    \
+                               data, gen_helper_##NAME);                      \
+                                                                              \
+            mark_vs_dirty(s);                                                 \
+            gen_set_label(over);                                              \
+            return true;                                                      \
+        }                                                                     \
+        return false;                                                         \
+    }
+
+static bool vsha_check_sew(DisasContext *s)
+{
+    return (s->cfg_ptr->ext_zvknha == true && s->sew == MO_32) ||
+           (s->cfg_ptr->ext_zvknhb == true &&
+            (s->sew == MO_32 || s->sew == MO_64));
+}
+
+static bool vsha_check(DisasContext *s, arg_rmrr *a)
+{
+    int egw_bytes = ZVKNH_EGS << s->sew;
+    int mult = 1 << MAX(s->lmul, 0);
+    return opivv_check(s, a) &&
+           vsha_check_sew(s) &&
+           MAXSZ(s) >= egw_bytes &&
+           !is_overlapped(a->rd, mult, a->rs1, mult) &&
+           !is_overlapped(a->rd, mult, a->rs2, mult) &&
+           s->lmul >= 0;
+}
+
+GEN_VV_UNMASKED_TRANS(vsha2ms_vv, vsha_check, ZVKNH_EGS)
+
+static bool trans_vsha2cl_vv(DisasContext *s, arg_rmrr *a)
+{
+    if (vsha_check(s, a)) {
+        uint32_t data = 0;
+        TCGLabel *over = gen_new_label();
+        TCGv_i32 egs;
+
+        if (!s->vstart_eq_zero || !s->vl_eq_vlmax) {
+            /* save opcode for unwinding in case we throw an exception */
+            decode_save_opc(s);
+            egs = tcg_constant_i32(ZVKNH_EGS);
+            gen_helper_egs_check(egs, cpu_env);
+            tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
+        }
+
+        data = FIELD_DP32(data, VDATA, VM, a->vm);
+        data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
+        data = FIELD_DP32(data, VDATA, VTA, s->vta);
+        data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);
+        data = FIELD_DP32(data, VDATA, VMA, s->vma);
+
+        tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),
+                           vreg_ofs(s, a->rs2), cpu_env, s->cfg_ptr->vlen / 8,
+                           s->cfg_ptr->vlen / 8, data,
+                           s->sew == MO_32 ?
+                               gen_helper_vsha2cl32_vv : gen_helper_vsha2cl64_vv);
+
+        mark_vs_dirty(s);
+        gen_set_label(over);
+        return true;
+    }
+    return false;
+}
+
+static bool trans_vsha2ch_vv(DisasContext *s, arg_rmrr *a)
+{
+    if (vsha_check(s, a)) {
+        uint32_t data = 0;
+        TCGLabel *over = gen_new_label();
+        TCGv_i32 egs;
+
+        if (!s->vstart_eq_zero || !s->vl_eq_vlmax) {
+            /* save opcode for unwinding in case we throw an exception */
+            decode_save_opc(s);
+            egs = tcg_constant_i32(ZVKNH_EGS);
+            gen_helper_egs_check(egs, cpu_env);
+            tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
+        }
+
+        data = FIELD_DP32(data, VDATA, VM, a->vm);
+        data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
+        data = FIELD_DP32(data, VDATA, VTA, s->vta);
+        data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);
+        data = FIELD_DP32(data, VDATA, VMA, s->vma);
+
+        tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),
+                           vreg_ofs(s, a->rs2), cpu_env, s->cfg_ptr->vlen / 8,
+                           s->cfg_ptr->vlen / 8, data,
+                           s->sew == MO_32 ?
+                               gen_helper_vsha2ch32_vv : gen_helper_vsha2ch64_vv);
+
+        mark_vs_dirty(s);
+        gen_set_label(over);
+        return true;
+    }
+    return false;
+}
--
2.41.0
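The vsha2ms helpers above perform four steps of the SHA-2 message schedule per element group. For the SEW=32 case this corresponds to the following plain-Python sketch — ror32, sig0, sig1 and vsha2ms_e32 are local names mirroring the C helpers, with the element-group byte ordering (H4) omitted:

```python
def ror32(x, n):
    """32-bit rotate right, like QEMU's ror32."""
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

def sig0(x):
    """SHA-256 small sigma-0."""
    return ror32(x, 7) ^ ror32(x, 18) ^ (x >> 3)

def sig1(x):
    """SHA-256 small sigma-1."""
    return ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10)

def vsha2ms_e32(vd, vs1, vs2):
    """Produce the next four message-schedule words from three
    4-word element groups, mirroring vsha2ms_e32 in the patch."""
    M = 0xFFFFFFFF
    res = [0] * 4
    res[0] = (sig1(vs1[2]) + vs2[1] + sig0(vd[1]) + vd[0]) & M
    res[1] = (sig1(vs1[3]) + vs2[2] + sig0(vd[2]) + vd[1]) & M
    res[2] = (sig1(res[0]) + vs2[3] + sig0(vd[3]) + vd[2]) & M
    res[3] = (sig1(res[1]) + vs1[0] + sig0(vs2[0]) + vd[3]) & M
    return res
```

Note how res[2] and res[3] feed the freshly computed res[0] and res[1] back through sig1 — the same in-group dependency the C helper has.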
New patch
1
1
From: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
2
3
This commit adds support for the Zvksh vector-crypto extension, which
4
consists of the following instructions:
5
6
* vsm3me.vv
7
* vsm3c.vi
8
9
Translation functions are defined in
10
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
11
`target/riscv/vcrypto_helper.c`.
12
13
Co-authored-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
14
[max.chou@sifive.com: Replaced vstart checking by TCG op]
15
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
16
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
17
Signed-off-by: Max Chou <max.chou@sifive.com>
18
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
19
[max.chou@sifive.com: Exposed x-zvksh property]
20
Message-ID: <20230711165917.2629866-12-max.chou@sifive.com>
21
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
22
---
23
target/riscv/cpu_cfg.h | 1 +
24
target/riscv/helper.h | 3 +
25
target/riscv/insn32.decode | 4 +
26
target/riscv/cpu.c | 6 +-
27
target/riscv/vcrypto_helper.c | 134 +++++++++++++++++++++++
28
target/riscv/insn_trans/trans_rvvk.c.inc | 31 ++++++
29
6 files changed, 177 insertions(+), 2 deletions(-)
30
31
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/riscv/cpu_cfg.h
34
+++ b/target/riscv/cpu_cfg.h
35
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
36
bool ext_zvkned;
37
bool ext_zvknha;
38
bool ext_zvknhb;
39
+ bool ext_zvksh;
40
bool ext_zmmul;
41
bool ext_zvfbfmin;
42
bool ext_zvfbfwma;
43
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/riscv/helper.h
46
+++ b/target/riscv/helper.h
47
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_5(vsha2ch32_vv, void, ptr, ptr, ptr, env, i32)
48
DEF_HELPER_5(vsha2ch64_vv, void, ptr, ptr, ptr, env, i32)
49
DEF_HELPER_5(vsha2cl32_vv, void, ptr, ptr, ptr, env, i32)
50
DEF_HELPER_5(vsha2cl64_vv, void, ptr, ptr, ptr, env, i32)
51
+
52
+DEF_HELPER_5(vsm3me_vv, void, ptr, ptr, ptr, env, i32)
53
+DEF_HELPER_5(vsm3c_vi, void, ptr, ptr, i32, env, i32)
54
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
55
index XXXXXXX..XXXXXXX 100644
56
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -XXX,XX +XXX,XX @@ vaeskf2_vi 101010 1 ..... ..... 010 ..... 1110111 @r_vm_1
vsha2ms_vv 101101 1 ..... ..... 010 ..... 1110111 @r_vm_1
vsha2ch_vv 101110 1 ..... ..... 010 ..... 1110111 @r_vm_1
vsha2cl_vv 101111 1 ..... ..... 010 ..... 1110111 @r_vm_1
+
+# *** Zvksh vector crypto extension ***
+vsm3me_vv 100000 1 ..... ..... 010 ..... 1110111 @r_vm_1
+vsm3c_vi 101011 1 ..... ..... 010 ..... 1110111 @r_vm_1
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
ISA_EXT_DATA_ENTRY(zvknha, PRIV_VERSION_1_12_0, ext_zvknha),
ISA_EXT_DATA_ENTRY(zvknhb, PRIV_VERSION_1_12_0, ext_zvknhb),
+ ISA_EXT_DATA_ENTRY(zvksh, PRIV_VERSION_1_12_0, ext_zvksh),
ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
* In principle Zve*x would also suffice here, were they supported
* in qemu
*/
- if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha) &&
- !cpu->cfg.ext_zve32f) {
+ if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha ||
+ cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
error_setg(errp,
"Vector crypto extensions require V or Zve* extensions");
return;
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
DEFINE_PROP_BOOL("x-zvknha", RISCVCPU, cfg.ext_zvknha, false),
DEFINE_PROP_BOOL("x-zvknhb", RISCVCPU, cfg.ext_zvknhb, false),
+ DEFINE_PROP_BOOL("x-zvksh", RISCVCPU, cfg.ext_zvksh, false),

DEFINE_PROP_END_OF_LIST(),
};
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/vcrypto_helper.c
+++ b/target/riscv/vcrypto_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(vsha2cl64_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
env->vstart = 0;
}
+
+static inline uint32_t p1(uint32_t x)
+{
+ return x ^ rol32(x, 15) ^ rol32(x, 23);
+}
+
+static inline uint32_t zvksh_w(uint32_t m16, uint32_t m9, uint32_t m3,
+ uint32_t m13, uint32_t m6)
+{
+ return p1(m16 ^ m9 ^ rol32(m3, 15)) ^ rol32(m13, 7) ^ m6;
+}
+
+void HELPER(vsm3me_vv)(void *vd_vptr, void *vs1_vptr, void *vs2_vptr,
+ CPURISCVState *env, uint32_t desc)
+{
+ uint32_t esz = memop_size(FIELD_EX64(env->vtype, VTYPE, VSEW));
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
+ uint32_t vta = vext_vta(desc);
+ uint32_t *vd = vd_vptr;
+ uint32_t *vs1 = vs1_vptr;
+ uint32_t *vs2 = vs2_vptr;
+
+ for (int i = env->vstart / 8; i < env->vl / 8; i++) {
+ uint32_t w[24];
+ for (int j = 0; j < 8; j++) {
+ w[j] = bswap32(vs1[H4((i * 8) + j)]);
+ w[j + 8] = bswap32(vs2[H4((i * 8) + j)]);
+ }
+ for (int j = 0; j < 8; j++) {
+ w[j + 16] =
+ zvksh_w(w[j], w[j + 7], w[j + 13], w[j + 3], w[j + 10]);
+ }
+ for (int j = 0; j < 8; j++) {
+ vd[(i * 8) + j] = bswap32(w[H4(j + 16)]);
+ }
+ }
+ vext_set_elems_1s(vd_vptr, vta, env->vl * esz, total_elems * esz);
+ env->vstart = 0;
+}
+
+static inline uint32_t ff1(uint32_t x, uint32_t y, uint32_t z)
+{
+ return x ^ y ^ z;
+}
+
+static inline uint32_t ff2(uint32_t x, uint32_t y, uint32_t z)
+{
+ return (x & y) | (x & z) | (y & z);
+}
+
+static inline uint32_t ff_j(uint32_t x, uint32_t y, uint32_t z, uint32_t j)
+{
+ return (j <= 15) ? ff1(x, y, z) : ff2(x, y, z);
+}
+
+static inline uint32_t gg1(uint32_t x, uint32_t y, uint32_t z)
+{
+ return x ^ y ^ z;
+}
+
+static inline uint32_t gg2(uint32_t x, uint32_t y, uint32_t z)
+{
+ return (x & y) | (~x & z);
+}
+
+static inline uint32_t gg_j(uint32_t x, uint32_t y, uint32_t z, uint32_t j)
+{
+ return (j <= 15) ? gg1(x, y, z) : gg2(x, y, z);
+}
+
+static inline uint32_t t_j(uint32_t j)
+{
+ return (j <= 15) ? 0x79cc4519 : 0x7a879d8a;
+}
+
+static inline uint32_t p_0(uint32_t x)
+{
+ return x ^ rol32(x, 9) ^ rol32(x, 17);
+}
+
+static void sm3c(uint32_t *vd, uint32_t *vs1, uint32_t *vs2, uint32_t uimm)
+{
+ uint32_t x0, x1;
+ uint32_t j;
+ uint32_t ss1, ss2, tt1, tt2;
+ x0 = vs2[0] ^ vs2[4];
+ x1 = vs2[1] ^ vs2[5];
+ j = 2 * uimm;
+ ss1 = rol32(rol32(vs1[0], 12) + vs1[4] + rol32(t_j(j), j % 32), 7);
+ ss2 = ss1 ^ rol32(vs1[0], 12);
+ tt1 = ff_j(vs1[0], vs1[1], vs1[2], j) + vs1[3] + ss2 + x0;
+ tt2 = gg_j(vs1[4], vs1[5], vs1[6], j) + vs1[7] + ss1 + vs2[0];
+ vs1[3] = vs1[2];
+ vd[3] = rol32(vs1[1], 9);
+ vs1[1] = vs1[0];
+ vd[1] = tt1;
+ vs1[7] = vs1[6];
+ vd[7] = rol32(vs1[5], 19);
+ vs1[5] = vs1[4];
+ vd[5] = p_0(tt2);
+ j = 2 * uimm + 1;
+ ss1 = rol32(rol32(vd[1], 12) + vd[5] + rol32(t_j(j), j % 32), 7);
+ ss2 = ss1 ^ rol32(vd[1], 12);
+ tt1 = ff_j(vd[1], vs1[1], vd[3], j) + vs1[3] + ss2 + x1;
+ tt2 = gg_j(vd[5], vs1[5], vd[7], j) + vs1[7] + ss1 + vs2[1];
+ vd[2] = rol32(vs1[1], 9);
+ vd[0] = tt1;
+ vd[6] = rol32(vs1[5], 19);
+ vd[4] = p_0(tt2);
+}
+
+void HELPER(vsm3c_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
+ CPURISCVState *env, uint32_t desc)
+{
+ uint32_t esz = memop_size(FIELD_EX64(env->vtype, VTYPE, VSEW));
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
+ uint32_t vta = vext_vta(desc);
+ uint32_t *vd = vd_vptr;
+ uint32_t *vs2 = vs2_vptr;
+ uint32_t v1[8], v2[8], v3[8];
+
+ for (int i = env->vstart / 8; i < env->vl / 8; i++) {
+ for (int k = 0; k < 8; k++) {
+ v2[k] = bswap32(vd[H4(i * 8 + k)]);
+ v3[k] = bswap32(vs2[H4(i * 8 + k)]);
+ }
+ sm3c(v1, v2, v3, uimm);
+ for (int k = 0; k < 8; k++) {
+ vd[i * 8 + k] = bswap32(v1[H4(k)]);
+ }
+ }
+ vext_set_elems_1s(vd_vptr, vta, env->vl * esz, total_elems * esz);
+ env->vstart = 0;
+}
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_vsha2ch_vv(DisasContext *s, arg_rmrr *a)
}
return false;
}
+
+/*
+ * Zvksh
+ */
+
+#define ZVKSH_EGS 8
+
+static inline bool vsm3_check(DisasContext *s, arg_rmrr *a)
+{
+ int egw_bytes = ZVKSH_EGS << s->sew;
+ int mult = 1 << MAX(s->lmul, 0);
+ return s->cfg_ptr->ext_zvksh == true &&
+ require_rvv(s) &&
+ vext_check_isa_ill(s) &&
+ !is_overlapped(a->rd, mult, a->rs2, mult) &&
+ MAXSZ(s) >= egw_bytes &&
+ s->sew == MO_32;
+}
+
+static inline bool vsm3me_check(DisasContext *s, arg_rmrr *a)
+{
+ return vsm3_check(s, a) && vext_check_sss(s, a->rd, a->rs1, a->rs2, a->vm);
+}
+
+static inline bool vsm3c_check(DisasContext *s, arg_rmrr *a)
+{
+ return vsm3_check(s, a) && vext_check_ss(s, a->rd, a->rs2, a->vm);
+}
+
+GEN_VV_UNMASKED_TRANS(vsm3me_vv, vsm3me_check, ZVKSH_EGS)
+GEN_VI_UNMASKED_TRANS(vsm3c_vi, vsm3c_check, ZVKSH_EGS)
--
2.41.0
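[Reviewer note, not part of the patch: the SM3 helpers above follow the standard SM3 message expansion. A minimal Python sketch of `p1`/`zvksh_w` as implemented in the helper, useful for cross-checking against a scalar SM3 reference:]

```python
def rol32(x, n):
    """32-bit left rotate."""
    n %= 32
    return ((x << n) | (x >> (32 - n))) & 0xffffffff

def p1(x):
    """SM3 permutation P1(X) = X xor (X <<< 15) xor (X <<< 23)."""
    return x ^ rol32(x, 15) ^ rol32(x, 23)

def zvksh_w(m16, m9, m3, m13, m6):
    """One expanded message word, mirroring the helper's zvksh_w():
    W[j] = P1(W[j-16] ^ W[j-9] ^ (W[j-3] <<< 15)) ^ (W[j-13] <<< 7) ^ W[j-6]."""
    return (p1(m16 ^ m9 ^ rol32(m3, 15)) ^ rol32(m13, 7) ^ m6) & 0xffffffff
```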
From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>

This commit adds support for the Zvkg vector-crypto extension, which
consists of the following instructions:

* vgmul.vv
* vghsh.vv

Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.

Co-authored-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
[max.chou@sifive.com: Replaced vstart checking by TCG op]
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
[max.chou@sifive.com: Exposed x-zvkg property]
[max.chou@sifive.com: Replaced uint by int for cross win32 build]
Message-ID: <20230711165917.2629866-13-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/cpu_cfg.h | 1 +
target/riscv/helper.h | 3 +
target/riscv/insn32.decode | 4 ++
target/riscv/cpu.c | 6 +-
target/riscv/vcrypto_helper.c | 72 ++++++++++++++++++++++++
target/riscv/insn_trans/trans_rvvk.c.inc | 30 ++++++++++
6 files changed, 114 insertions(+), 2 deletions(-)

diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
bool ext_zve64d;
bool ext_zvbb;
bool ext_zvbc;
+ bool ext_zvkg;
bool ext_zvkned;
bool ext_zvknha;
bool ext_zvknhb;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_5(vsha2cl64_vv, void, ptr, ptr, ptr, env, i32)

DEF_HELPER_5(vsm3me_vv, void, ptr, ptr, ptr, env, i32)
DEF_HELPER_5(vsm3c_vi, void, ptr, ptr, i32, env, i32)
+
+DEF_HELPER_5(vghsh_vv, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_4(vgmul_vv, void, ptr, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -XXX,XX +XXX,XX @@ vsha2cl_vv 101111 1 ..... ..... 010 ..... 1110111 @r_vm_1
# *** Zvksh vector crypto extension ***
vsm3me_vv 100000 1 ..... ..... 010 ..... 1110111 @r_vm_1
vsm3c_vi 101011 1 ..... ..... 010 ..... 1110111 @r_vm_1
+
+# *** Zvkg vector crypto extension ***
+vghsh_vv 101100 1 ..... ..... 010 ..... 1110111 @r_vm_1
+vgmul_vv 101000 1 ..... 10001 010 ..... 1110111 @r2_vm_1
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
ISA_EXT_DATA_ENTRY(zvfbfwma, PRIV_VERSION_1_12_0, ext_zvfbfwma),
ISA_EXT_DATA_ENTRY(zvfh, PRIV_VERSION_1_12_0, ext_zvfh),
ISA_EXT_DATA_ENTRY(zvfhmin, PRIV_VERSION_1_12_0, ext_zvfhmin),
+ ISA_EXT_DATA_ENTRY(zvkg, PRIV_VERSION_1_12_0, ext_zvkg),
ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
ISA_EXT_DATA_ENTRY(zvknha, PRIV_VERSION_1_12_0, ext_zvknha),
ISA_EXT_DATA_ENTRY(zvknhb, PRIV_VERSION_1_12_0, ext_zvknhb),
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
* In principle Zve*x would also suffice here, were they supported
* in qemu
*/
- if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha ||
- cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
+ if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkg || cpu->cfg.ext_zvkned ||
+ cpu->cfg.ext_zvknha || cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
error_setg(errp,
"Vector crypto extensions require V or Zve* extensions");
return;
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
/* Vector cryptography extensions */
DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
+ DEFINE_PROP_BOOL("x-zvkg", RISCVCPU, cfg.ext_zvkg, false),
DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
DEFINE_PROP_BOOL("x-zvknha", RISCVCPU, cfg.ext_zvknha, false),
DEFINE_PROP_BOOL("x-zvknhb", RISCVCPU, cfg.ext_zvknhb, false),
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/vcrypto_helper.c
+++ b/target/riscv/vcrypto_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(vsm3c_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
vext_set_elems_1s(vd_vptr, vta, env->vl * esz, total_elems * esz);
env->vstart = 0;
}
+
+void HELPER(vghsh_vv)(void *vd_vptr, void *vs1_vptr, void *vs2_vptr,
+ CPURISCVState *env, uint32_t desc)
+{
+ uint64_t *vd = vd_vptr;
+ uint64_t *vs1 = vs1_vptr;
+ uint64_t *vs2 = vs2_vptr;
+ uint32_t vta = vext_vta(desc);
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4);
+
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+ uint64_t Y[2] = {vd[i * 2 + 0], vd[i * 2 + 1]};
+ uint64_t H[2] = {brev8(vs2[i * 2 + 0]), brev8(vs2[i * 2 + 1])};
+ uint64_t X[2] = {vs1[i * 2 + 0], vs1[i * 2 + 1]};
+ uint64_t Z[2] = {0, 0};
+
+ uint64_t S[2] = {brev8(Y[0] ^ X[0]), brev8(Y[1] ^ X[1])};
+
+ for (int j = 0; j < 128; j++) {
+ if ((S[j / 64] >> (j % 64)) & 1) {
+ Z[0] ^= H[0];
+ Z[1] ^= H[1];
+ }
+ bool reduce = ((H[1] >> 63) & 1);
+ H[1] = H[1] << 1 | H[0] >> 63;
+ H[0] = H[0] << 1;
+ if (reduce) {
+ H[0] ^= 0x87;
+ }
+ }
+
+ vd[i * 2 + 0] = brev8(Z[0]);
+ vd[i * 2 + 1] = brev8(Z[1]);
+ }
+ /* set tail elements to 1s */
+ vext_set_elems_1s(vd, vta, env->vl * 4, total_elems * 4);
+ env->vstart = 0;
+}
+
+void HELPER(vgmul_vv)(void *vd_vptr, void *vs2_vptr, CPURISCVState *env,
+ uint32_t desc)
+{
+ uint64_t *vd = vd_vptr;
+ uint64_t *vs2 = vs2_vptr;
+ uint32_t vta = vext_vta(desc);
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4);
+
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+ uint64_t Y[2] = {brev8(vd[i * 2 + 0]), brev8(vd[i * 2 + 1])};
+ uint64_t H[2] = {brev8(vs2[i * 2 + 0]), brev8(vs2[i * 2 + 1])};
+ uint64_t Z[2] = {0, 0};
+
+ for (int j = 0; j < 128; j++) {
+ if ((Y[j / 64] >> (j % 64)) & 1) {
+ Z[0] ^= H[0];
+ Z[1] ^= H[1];
+ }
+ bool reduce = ((H[1] >> 63) & 1);
+ H[1] = H[1] << 1 | H[0] >> 63;
+ H[0] = H[0] << 1;
+ if (reduce) {
+ H[0] ^= 0x87;
+ }
+ }
+
+ vd[i * 2 + 0] = brev8(Z[0]);
+ vd[i * 2 + 1] = brev8(Z[1]);
+ }
+ /* set tail elements to 1s */
+ vext_set_elems_1s(vd, vta, env->vl * 4, total_elems * 4);
+ env->vstart = 0;
+}
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -XXX,XX +XXX,XX @@ static inline bool vsm3c_check(DisasContext *s, arg_rmrr *a)

GEN_VV_UNMASKED_TRANS(vsm3me_vv, vsm3me_check, ZVKSH_EGS)
GEN_VI_UNMASKED_TRANS(vsm3c_vi, vsm3c_check, ZVKSH_EGS)
+
+/*
+ * Zvkg
+ */
+
+#define ZVKG_EGS 4
+
+static bool vgmul_check(DisasContext *s, arg_rmr *a)
+{
+ int egw_bytes = ZVKG_EGS << s->sew;
+ return s->cfg_ptr->ext_zvkg == true &&
+ vext_check_isa_ill(s) &&
+ require_rvv(s) &&
+ MAXSZ(s) >= egw_bytes &&
+ vext_check_ss(s, a->rd, a->rs2, a->vm) &&
+ s->sew == MO_32;
+}
+
+GEN_V_UNMASKED_TRANS(vgmul_vv, vgmul_check, ZVKG_EGS)
+
+static bool vghsh_check(DisasContext *s, arg_rmrr *a)
+{
+ int egw_bytes = ZVKG_EGS << s->sew;
+ return s->cfg_ptr->ext_zvkg == true &&
+ opivv_check(s, a) &&
+ MAXSZ(s) >= egw_bytes &&
+ s->sew == MO_32;
+}
+
+GEN_VV_UNMASKED_TRANS(vghsh_vv, vghsh_check, ZVKG_EGS)
--
2.41.0
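[Reviewer note, not part of the patch: the shift-and-reduce loop shared by `vghsh_vv` and `vgmul_vv` above is a carry-less multiply in GF(2^128). A hedged Python sketch of that inner loop, operating on the same (lo64, hi64) bit order the helper uses after `brev8`:]

```python
def gf128_mul(y, h):
    """Shift-and-add product of y and h in GF(2^128), mirroring the loop in
    vgmul_vv: y and h are (lo64, hi64) pairs in the helper's post-brev8 bit
    order, and reduction XORs in the polynomial byte 0x87."""
    Y, H, Z = list(y), list(h), [0, 0]
    mask64 = (1 << 64) - 1
    for j in range(128):
        if (Y[j // 64] >> (j % 64)) & 1:   # bit j of y selects this shifted h
            Z[0] ^= H[0]
            Z[1] ^= H[1]
        reduce = (H[1] >> 63) & 1          # top bit about to shift out
        H[1] = ((H[1] << 1) | (H[0] >> 63)) & mask64
        H[0] = (H[0] << 1) & mask64
        if reduce:
            H[0] ^= 0x87                   # fold back modulo the field polynomial
    return Z
```

Multiplying by the element with only bit 0 set returns the other operand unchanged in this representation, which is a quick sanity check against the helper.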
From: Max Chou <max.chou@sifive.com>

Allows sharing of sm4_subword between different targets.

Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Chou <max.chou@sifive.com>
Message-ID: <20230711165917.2629866-14-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
include/crypto/sm4.h | 8 ++++++++
target/arm/tcg/crypto_helper.c | 10 ++--------
2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/include/crypto/sm4.h b/include/crypto/sm4.h
index XXXXXXX..XXXXXXX 100644
--- a/include/crypto/sm4.h
+++ b/include/crypto/sm4.h
@@ -XXX,XX +XXX,XX @@

extern const uint8_t sm4_sbox[256];

+static inline uint32_t sm4_subword(uint32_t word)
+{
+ return sm4_sbox[word & 0xff] |
+ sm4_sbox[(word >> 8) & 0xff] << 8 |
+ sm4_sbox[(word >> 16) & 0xff] << 16 |
+ sm4_sbox[(word >> 24) & 0xff] << 24;
+}
+
#endif
diff --git a/target/arm/tcg/crypto_helper.c b/target/arm/tcg/crypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/crypto_helper.c
+++ b/target/arm/tcg/crypto_helper.c
@@ -XXX,XX +XXX,XX @@ static void do_crypto_sm4e(uint64_t *rd, uint64_t *rn, uint64_t *rm)
CR_ST_WORD(d, (i + 3) % 4) ^
CR_ST_WORD(n, i);

- t = sm4_sbox[t & 0xff] |
- sm4_sbox[(t >> 8) & 0xff] << 8 |
- sm4_sbox[(t >> 16) & 0xff] << 16 |
- sm4_sbox[(t >> 24) & 0xff] << 24;
+ t = sm4_subword(t);

CR_ST_WORD(d, i) ^= t ^ rol32(t, 2) ^ rol32(t, 10) ^ rol32(t, 18) ^
rol32(t, 24);
@@ -XXX,XX +XXX,XX @@ static void do_crypto_sm4ekey(uint64_t *rd, uint64_t *rn, uint64_t *rm)
CR_ST_WORD(d, (i + 3) % 4) ^
CR_ST_WORD(m, i);

- t = sm4_sbox[t & 0xff] |
- sm4_sbox[(t >> 8) & 0xff] << 8 |
- sm4_sbox[(t >> 16) & 0xff] << 16 |
- sm4_sbox[(t >> 24) & 0xff] << 24;
+ t = sm4_subword(t);

CR_ST_WORD(d, i) ^= t ^ rol32(t, 13) ^ rol32(t, 23);
}
--
2.41.0
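[Reviewer note, not part of the patch: `sm4_subword` simply applies an 8-bit S-box to each byte of a 32-bit word. A small Python sketch of that structure, with the S-box passed in as a parameter since reproducing the full 256-entry `sm4_sbox` table here would add noise:]

```python
def subword(word, sbox):
    """Apply an 8-bit S-box to each byte of a 32-bit word, the same byte-wise
    composition sm4_subword performs with sm4_sbox (sbox is any 256-entry table)."""
    return (sbox[word & 0xff]
            | sbox[(word >> 8) & 0xff] << 8
            | sbox[(word >> 16) & 0xff] << 16
            | sbox[(word >> 24) & 0xff] << 24)
```

With the identity table the word passes through unchanged, which makes the byte layout easy to verify.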
From: Max Chou <max.chou@sifive.com>

Adds sm4_ck constant for use in sm4 cryptography across different targets.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Signed-off-by: Max Chou <max.chou@sifive.com>
Message-ID: <20230711165917.2629866-15-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
include/crypto/sm4.h | 1 +
crypto/sm4.c | 10 ++++++++++
2 files changed, 11 insertions(+)

diff --git a/include/crypto/sm4.h b/include/crypto/sm4.h
index XXXXXXX..XXXXXXX 100644
--- a/include/crypto/sm4.h
+++ b/include/crypto/sm4.h
@@ -XXX,XX +XXX,XX @@
#define QEMU_SM4_H

extern const uint8_t sm4_sbox[256];
+extern const uint32_t sm4_ck[32];

static inline uint32_t sm4_subword(uint32_t word)
{
diff --git a/crypto/sm4.c b/crypto/sm4.c
index XXXXXXX..XXXXXXX 100644
--- a/crypto/sm4.c
+++ b/crypto/sm4.c
@@ -XXX,XX +XXX,XX @@ uint8_t const sm4_sbox[] = {
0x79, 0xee, 0x5f, 0x3e, 0xd7, 0xcb, 0x39, 0x48,
};

+uint32_t const sm4_ck[] = {
+ 0x00070e15, 0x1c232a31, 0x383f464d, 0x545b6269,
+ 0x70777e85, 0x8c939aa1, 0xa8afb6bd, 0xc4cbd2d9,
+ 0xe0e7eef5, 0xfc030a11, 0x181f262d, 0x343b4249,
+ 0x50575e65, 0x6c737a81, 0x888f969d, 0xa4abb2b9,
+ 0xc0c7ced5, 0xdce3eaf1, 0xf8ff060d, 0x141b2229,
+ 0x30373e45, 0x4c535a61, 0x686f767d, 0x848b9299,
+ 0xa0a7aeb5, 0xbcc3cad1, 0xd8dfe6ed, 0xf4fb0209,
+ 0x10171e25, 0x2c333a41, 0x484f565d, 0x646b7279
+};
--
2.41.0
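[Reviewer note, not part of the patch: the CK table above is not arbitrary data; per the SM4 specification, byte j of CK[i] is ((4*i + j) * 7) mod 256. A short Python sketch that regenerates the table, handy for auditing the constants in the hunk:]

```python
def sm4_ck():
    """Regenerate the SM4 CK key-schedule constants: byte j of CK[i] is
    ((4*i + j) * 7) % 256, packed big-endian into a 32-bit word."""
    ck = []
    for i in range(32):
        word = 0
        for j in range(4):
            word = (word << 8) | (((4 * i + j) * 7) % 256)
        ck.append(word)
    return ck
```

Comparing the generated list against the literal array is a cheap way to rule out transcription errors.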
1
From: Weiwei Li <liweiwei@iscas.ac.cn>
1
From: Max Chou <max.chou@sifive.com>
2
2
3
- add PTE_PBMT bits: It uses two PTE bits, but otherwise has no effect on QEMU, since QEMU is sequentially consistent and doesn't model PMAs currently
3
This commit adds support for the Zvksed vector-crypto extension, which
4
- add PTE_PBMT bit check for inner PTE
4
consists of the following instructions:
5
5
6
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
6
* vsm4k.vi
7
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
7
* vsm4r.[vv,vs]
8
Reviewed-by: Anup Patel <anup@brainfault.org>
8
9
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
9
Translation functions are defined in
10
Message-Id: <20220204022658.18097-6-liweiwei@iscas.ac.cn>
10
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
11
`target/riscv/vcrypto_helper.c`.
12
13
Signed-off-by: Max Chou <max.chou@sifive.com>
14
Reviewed-by: Frank Chang <frank.chang@sifive.com>
15
[lawrence.hunter@codethink.co.uk: Moved SM4 functions from
16
crypto_helper.c to vcrypto_helper.c]
17
[nazar.kazakov@codethink.co.uk: Added alignment checks, refactored code to
18
use macros, and minor style changes]
19
Signed-off-by: Max Chou <max.chou@sifive.com>
20
Message-ID: <20230711165917.2629866-16-max.chou@sifive.com>
11
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
21
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
12
---
22
---
13
target/riscv/cpu_bits.h | 2 ++
23
target/riscv/cpu_cfg.h | 1 +
14
target/riscv/cpu.c | 1 +
24
target/riscv/helper.h | 4 +
15
target/riscv/cpu_helper.c | 4 +++-
25
target/riscv/insn32.decode | 5 +
16
3 files changed, 6 insertions(+), 1 deletion(-)
26
target/riscv/cpu.c | 5 +-
17
27
target/riscv/vcrypto_helper.c | 127 +++++++++++++++++++++++
18
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
28
target/riscv/insn_trans/trans_rvvk.c.inc | 43 ++++++++
19
index XXXXXXX..XXXXXXX 100644
29
6 files changed, 184 insertions(+), 1 deletion(-)
20
--- a/target/riscv/cpu_bits.h
30
21
+++ b/target/riscv/cpu_bits.h
31
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
22
@@ -XXX,XX +XXX,XX @@ typedef enum {
32
index XXXXXXX..XXXXXXX 100644
23
#define PTE_A 0x040 /* Accessed */
33
--- a/target/riscv/cpu_cfg.h
24
#define PTE_D 0x080 /* Dirty */
34
+++ b/target/riscv/cpu_cfg.h
25
#define PTE_SOFT 0x300 /* Reserved for Software */
35
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
26
+#define PTE_PBMT 0x6000000000000000ULL /* Page-based memory types */
36
bool ext_zvkned;
27
#define PTE_N 0x8000000000000000ULL /* NAPOT translation */
37
bool ext_zvknha;
28
+#define PTE_ATTR (PTE_N | PTE_PBMT) /* All attributes bits */
38
bool ext_zvknhb;
29
39
+ bool ext_zvksed;
30
/* Page table PPN shift amount */
40
bool ext_zvksh;
31
#define PTE_PPN_SHIFT 10
41
bool ext_zmmul;
42
bool ext_zvfbfmin;
43
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/riscv/helper.h
46
+++ b/target/riscv/helper.h
47
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_5(vsm3c_vi, void, ptr, ptr, i32, env, i32)
48
49
DEF_HELPER_5(vghsh_vv, void, ptr, ptr, ptr, env, i32)
50
DEF_HELPER_4(vgmul_vv, void, ptr, ptr, env, i32)
51
+
52
+DEF_HELPER_5(vsm4k_vi, void, ptr, ptr, i32, env, i32)
53
+DEF_HELPER_4(vsm4r_vv, void, ptr, ptr, env, i32)
54
+DEF_HELPER_4(vsm4r_vs, void, ptr, ptr, env, i32)
55
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
56
index XXXXXXX..XXXXXXX 100644
57
--- a/target/riscv/insn32.decode
58
+++ b/target/riscv/insn32.decode
59
@@ -XXX,XX +XXX,XX @@ vsm3c_vi 101011 1 ..... ..... 010 ..... 1110111 @r_vm_1
60
# *** Zvkg vector crypto extension ***
61
vghsh_vv 101100 1 ..... ..... 010 ..... 1110111 @r_vm_1
62
vgmul_vv 101000 1 ..... 10001 010 ..... 1110111 @r2_vm_1
63
+
64
+# *** Zvksed vector crypto extension ***
65
+vsm4k_vi 100001 1 ..... ..... 010 ..... 1110111 @r_vm_1
66
+vsm4r_vv 101000 1 ..... 10000 010 ..... 1110111 @r2_vm_1
67
+vsm4r_vs 101001 1 ..... 10000 010 ..... 1110111 @r2_vm_1
32
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
68
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
33
index XXXXXXX..XXXXXXX 100644
69
index XXXXXXX..XXXXXXX 100644
34
--- a/target/riscv/cpu.c
70
--- a/target/riscv/cpu.c
35
+++ b/target/riscv/cpu.c
71
+++ b/target/riscv/cpu.c
36
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_properties[] = {
72
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
37
73
ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
38
DEFINE_PROP_BOOL("svinval", RISCVCPU, cfg.ext_svinval, false),
74
ISA_EXT_DATA_ENTRY(zvknha, PRIV_VERSION_1_12_0, ext_zvknha),
39
DEFINE_PROP_BOOL("svnapot", RISCVCPU, cfg.ext_svnapot, false),
75
ISA_EXT_DATA_ENTRY(zvknhb, PRIV_VERSION_1_12_0, ext_zvknhb),
40
+ DEFINE_PROP_BOOL("svpbmt", RISCVCPU, cfg.ext_svpbmt, false),
76
+ ISA_EXT_DATA_ENTRY(zvksed, PRIV_VERSION_1_12_0, ext_zvksed),
41
77
ISA_EXT_DATA_ENTRY(zvksh, PRIV_VERSION_1_12_0, ext_zvksh),
42
DEFINE_PROP_BOOL("zba", RISCVCPU, cfg.ext_zba, true),
78
ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
43
DEFINE_PROP_BOOL("zbb", RISCVCPU, cfg.ext_zbb, true),
79
ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
44
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
80
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
45
index XXXXXXX..XXXXXXX 100644
81
* in qemu
46
--- a/target/riscv/cpu_helper.c
82
*/
47
+++ b/target/riscv/cpu_helper.c
83
if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkg || cpu->cfg.ext_zvkned ||
48
@@ -XXX,XX +XXX,XX @@ restart:
84
- cpu->cfg.ext_zvknha || cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
49
if (!(pte & PTE_V)) {
85
+ cpu->cfg.ext_zvknha || cpu->cfg.ext_zvksed || cpu->cfg.ext_zvksh) &&
50
/* Invalid PTE */
86
+ !cpu->cfg.ext_zve32f) {
51
return TRANSLATE_FAIL;
87
error_setg(errp,
52
+ } else if (!cpu->cfg.ext_svpbmt && (pte & PTE_PBMT)) {
88
"Vector crypto extensions require V or Zve* extensions");
53
+ return TRANSLATE_FAIL;
89
return;
54
} else if (!(pte & (PTE_R | PTE_W | PTE_X))) {
90
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
55
/* Inner PTE, continue walking */
91
DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
56
- if (pte & (PTE_D | PTE_A | PTE_U | PTE_N)) {
92
DEFINE_PROP_BOOL("x-zvknha", RISCVCPU, cfg.ext_zvknha, false),
57
+ if (pte & (PTE_D | PTE_A | PTE_U | PTE_ATTR)) {
93
DEFINE_PROP_BOOL("x-zvknhb", RISCVCPU, cfg.ext_zvknhb, false),
58
return TRANSLATE_FAIL;
94
+ DEFINE_PROP_BOOL("x-zvksed", RISCVCPU, cfg.ext_zvksed, false),
59
}
95
DEFINE_PROP_BOOL("x-zvksh", RISCVCPU, cfg.ext_zvksh, false),
60
base = ppn << PGSHIFT;
96
97
DEFINE_PROP_END_OF_LIST(),
98
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
99
index XXXXXXX..XXXXXXX 100644
100
--- a/target/riscv/vcrypto_helper.c
101
+++ b/target/riscv/vcrypto_helper.c
102
@@ -XXX,XX +XXX,XX @@
103
#include "cpu.h"
104
#include "crypto/aes.h"
105
#include "crypto/aes-round.h"
106
+#include "crypto/sm4.h"
107
#include "exec/memop.h"
108
#include "exec/exec-all.h"
109
#include "exec/helper-proto.h"
110
@@ -XXX,XX +XXX,XX @@ void HELPER(vgmul_vv)(void *vd_vptr, void *vs2_vptr, CPURISCVState *env,
111
vext_set_elems_1s(vd, vta, env->vl * 4, total_elems * 4);
112
env->vstart = 0;
113
}
114
+
115
+void HELPER(vsm4k_vi)(void *vd, void *vs2, uint32_t uimm5, CPURISCVState *env,
116
+ uint32_t desc)
117
+{
118
+ const uint32_t egs = 4;
119
+ uint32_t rnd = uimm5 & 0x7;
120
+ uint32_t group_start = env->vstart / egs;
121
+ uint32_t group_end = env->vl / egs;
122
+ uint32_t esz = sizeof(uint32_t);
123
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
124
+
125
+ for (uint32_t i = group_start; i < group_end; ++i) {
126
+ uint32_t vstart = i * egs;
127
+ uint32_t vend = (i + 1) * egs;
128
+ uint32_t rk[4] = {0};
129
+ uint32_t tmp[8] = {0};
130
+
131
+ for (uint32_t j = vstart; j < vend; ++j) {
132
+ rk[j - vstart] = *((uint32_t *)vs2 + H4(j));
133
+ }
134
+
135
+ for (uint32_t j = 0; j < egs; ++j) {
136
+ tmp[j] = rk[j];
137
+ }
138
+
139
+ for (uint32_t j = 0; j < egs; ++j) {
140
+ uint32_t b, s;
141
+ b = tmp[j + 1] ^ tmp[j + 2] ^ tmp[j + 3] ^ sm4_ck[rnd * 4 + j];
142
+
143
+ s = sm4_subword(b);
144
+
145
+ tmp[j + 4] = tmp[j] ^ (s ^ rol32(s, 13) ^ rol32(s, 23));
146
+ }
147
+
148
+ for (uint32_t j = vstart; j < vend; ++j) {
149
+ *((uint32_t *)vd + H4(j)) = tmp[egs + (j - vstart)];
150
+ }
151
+ }
152
+
153
+ env->vstart = 0;
154
+ /* set tail elements to 1s */
155
+ vext_set_elems_1s(vd, vext_vta(desc), env->vl * esz, total_elems * esz);
156
+}
157
+
158
+static void do_sm4_round(uint32_t *rk, uint32_t *buf)
159
+{
160
+ const uint32_t egs = 4;
161
+ uint32_t s, b;
162
+
163
+ for (uint32_t j = egs; j < egs * 2; ++j) {
164
+ b = buf[j - 3] ^ buf[j - 2] ^ buf[j - 1] ^ rk[j - 4];
165
+
166
+ s = sm4_subword(b);
167
+
168
+ buf[j] = buf[j - 4] ^ (s ^ rol32(s, 2) ^ rol32(s, 10) ^ rol32(s, 18) ^
169
+ rol32(s, 24));
170
+ }
171
+}
172
+
173
+void HELPER(vsm4r_vv)(void *vd, void *vs2, CPURISCVState *env, uint32_t desc)
174
+{
175
+ const uint32_t egs = 4;
176
+ uint32_t group_start = env->vstart / egs;
177
+ uint32_t group_end = env->vl / egs;
178
+ uint32_t esz = sizeof(uint32_t);
179
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
180
+
181
+ for (uint32_t i = group_start; i < group_end; ++i) {
182
+ uint32_t vstart = i * egs;
183
+ uint32_t vend = (i + 1) * egs;
184
+ uint32_t rk[4] = {0};
185
+ uint32_t tmp[8] = {0};
186
+
187
+ for (uint32_t j = vstart; j < vend; ++j) {
188
+ rk[j - vstart] = *((uint32_t *)vs2 + H4(j));
189
+ }
190
+
191
+        for (uint32_t j = vstart; j < vend; ++j) {
+            tmp[j - vstart] = *((uint32_t *)vd + H4(j));
+        }
+
+        do_sm4_round(rk, tmp);
+
+        for (uint32_t j = vstart; j < vend; ++j) {
+            *((uint32_t *)vd + H4(j)) = tmp[egs + (j - vstart)];
+        }
+    }
+
+    env->vstart = 0;
+    /* set tail elements to 1s */
+    vext_set_elems_1s(vd, vext_vta(desc), env->vl * esz, total_elems * esz);
+}
+
+void HELPER(vsm4r_vs)(void *vd, void *vs2, CPURISCVState *env, uint32_t desc)
+{
+    const uint32_t egs = 4;
+    uint32_t group_start = env->vstart / egs;
+    uint32_t group_end = env->vl / egs;
+    uint32_t esz = sizeof(uint32_t);
+    uint32_t total_elems = vext_get_total_elems(env, desc, esz);
+
+    for (uint32_t i = group_start; i < group_end; ++i) {
+        uint32_t vstart = i * egs;
+        uint32_t vend = (i + 1) * egs;
+        uint32_t rk[4] = {0};
+        uint32_t tmp[8] = {0};
+
+        for (uint32_t j = 0; j < egs; ++j) {
+            rk[j] = *((uint32_t *)vs2 + H4(j));
+        }
+
+        for (uint32_t j = vstart; j < vend; ++j) {
+            tmp[j - vstart] = *((uint32_t *)vd + H4(j));
+        }
+
+        do_sm4_round(rk, tmp);
+
+        for (uint32_t j = vstart; j < vend; ++j) {
+            *((uint32_t *)vd + H4(j)) = tmp[egs + (j - vstart)];
+        }
+    }
+
+    env->vstart = 0;
+    /* set tail elements to 1s */
+    vext_set_elems_1s(vd, vext_vta(desc), env->vl * esz, total_elems * esz);
+}
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -XXX,XX +XXX,XX @@ static bool vghsh_check(DisasContext *s, arg_rmrr *a)
 }
 
 GEN_VV_UNMASKED_TRANS(vghsh_vv, vghsh_check, ZVKG_EGS)
+
+/*
+ * Zvksed
+ */
+
+#define ZVKSED_EGS 4
+
+static bool zvksed_check(DisasContext *s)
+{
+    int egw_bytes = ZVKSED_EGS << s->sew;
+    return s->cfg_ptr->ext_zvksed == true &&
+           require_rvv(s) &&
+           vext_check_isa_ill(s) &&
+           MAXSZ(s) >= egw_bytes &&
+           s->sew == MO_32;
+}
+
+static bool vsm4k_vi_check(DisasContext *s, arg_rmrr *a)
+{
+    return zvksed_check(s) &&
+           require_align(a->rd, s->lmul) &&
+           require_align(a->rs2, s->lmul);
+}
+
+GEN_VI_UNMASKED_TRANS(vsm4k_vi, vsm4k_vi_check, ZVKSED_EGS)
+
+static bool vsm4r_vv_check(DisasContext *s, arg_rmr *a)
+{
+    return zvksed_check(s) &&
+           require_align(a->rd, s->lmul) &&
+           require_align(a->rs2, s->lmul);
+}
+
+GEN_V_UNMASKED_TRANS(vsm4r_vv, vsm4r_vv_check, ZVKSED_EGS)
+
+static bool vsm4r_vs_check(DisasContext *s, arg_rmr *a)
+{
+    return zvksed_check(s) &&
+           !is_overlapped(a->rd, 1 << MAX(s->lmul, 0), a->rs2, 1) &&
+           require_align(a->rd, s->lmul);
+}
+
+GEN_V_UNMASKED_TRANS(vsm4r_vs, vsm4r_vs_check, ZVKSED_EGS)
--
2.41.0

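The helpers above process the vector in element groups of four 32-bit words (`egs = 4`), so `env->vstart` and `env->vl` are first converted to group indices. Below is a minimal stand-alone sketch of that bound computation, assuming only that `vstart` and `vl` count elements and that `vl` is a multiple of the group size (`EgRange` and `eg_range()` are illustrative names, not QEMU API):

```c
#include <assert.h>
#include <stdint.h>

#define EGS 4 /* element group size used by the SM4 helpers */

typedef struct {
    uint32_t group_start; /* first element group still to process */
    uint32_t group_end;   /* one past the last element group */
} EgRange;

static EgRange eg_range(uint32_t vstart, uint32_t vl)
{
    EgRange r;
    r.group_start = vstart / EGS; /* groups below vstart were done earlier */
    r.group_end = vl / EGS;       /* vl is a multiple of EGS for these insns */
    return r;
}
```

Element group `i` then covers elements `[i * EGS, (i + 1) * EGS)`, matching the `vstart`/`vend` computation inside the helper loop.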
From: Rob Bradford <rbradford@rivosinc.com>

These are WARL fields - zero out the bits for unavailable counters and
special case the TM bit in mcountinhibit, which is hardwired to zero.
This patch achieves this by masking the value written, so that any use
of the field will see the correctly masked bits.

Tested by modifying OpenSBI to write the maximum value to these CSRs:
upon a subsequent read, only the bits for the available PMU counters
are set and the TM bit in mcountinhibit is zero.

Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20230802124906.24197-1-rbradford@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/csr.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static RISCVException write_mcountinhibit(CPURISCVState *env, int csrno,
 {
     int cidx;
     PMUCTRState *counter;
+    RISCVCPU *cpu = env_archcpu(env);
 
-    env->mcountinhibit = val;
+    /* WARL register - disable unavailable counters; TM bit is always 0 */
+    env->mcountinhibit =
+        val & (cpu->pmu_avail_ctrs | COUNTEREN_CY | COUNTEREN_IR);
 
     /* Check if any other counter is also monitoring cycles/instructions */
     for (cidx = 0; cidx < RV_MAX_MHPMCOUNTERS; cidx++) {
@@ -XXX,XX +XXX,XX @@ static RISCVException read_mcounteren(CPURISCVState *env, int csrno,
 static RISCVException write_mcounteren(CPURISCVState *env, int csrno,
                                        target_ulong val)
 {
-    env->mcounteren = val;
+    RISCVCPU *cpu = env_archcpu(env);
+
+    /* WARL register - disable unavailable counters */
+    env->mcounteren = val & (cpu->pmu_avail_ctrs | COUNTEREN_CY | COUNTEREN_TM |
+                             COUNTEREN_IR);
     return RISCV_EXCP_NONE;
 }
--
2.41.0

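The WARL (Write Any values, Read Legal values) behaviour above can be illustrated in isolation. The sketch below mirrors the masking in the patch; the COUNTEREN_* bit positions follow the architected mcounteren layout (CY = bit 0, TM = bit 1, IR = bit 2), and `avail` is a stand-in for QEMU's `cpu->pmu_avail_ctrs` bitmap:

```c
#include <assert.h>
#include <stdint.h>

#define COUNTEREN_CY (1u << 0)
#define COUNTEREN_TM (1u << 1)
#define COUNTEREN_IR (1u << 2)

/* mcountinhibit: TM is hardwired to zero, so it is left out of the mask */
static uint32_t mcountinhibit_warl(uint32_t val, uint32_t avail)
{
    return val & (avail | COUNTEREN_CY | COUNTEREN_IR);
}

/* mcounteren: TM is writable, so it stays in the mask */
static uint32_t mcounteren_warl(uint32_t val, uint32_t avail)
{
    return val & (avail | COUNTEREN_CY | COUNTEREN_TM | COUNTEREN_IR);
}
```

Writing all-ones then reads back only the legal bits, which is exactly what the OpenSBI experiment in the commit message observes.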
From: Jason Chien <jason.chien@sifive.com>

The RVA23 Profiles specification states:
The RVA23 profiles are intended to be used for 64-bit application
processors that will run rich OS stacks from standard binary OS
distributions and with a substantial number of third-party binary user
applications that will be supported over a considerable length of time
in the field.

Chapter 4 of the unprivileged spec introduces the Zihintntl extension,
a mandatory extension in the RVA23 Profiles, whose purpose is to enable
application and operating system portability across different
implementations. Thus the DTS should contain the Zihintntl ISA string
in order to pass it to software.

The unprivileged spec states:
Like any HINTs, these instructions may be freely ignored. Hence, although
they are described in terms of cache-based memory hierarchies, they do not
mandate the provision of caches.

These instructions are encoded with otherwise unused opcodes, e.g.
ADD x0, x0, x2, which QEMU already supports, and QEMU does not emulate
caches. Therefore these instructions can be treated as no-ops, and we
only need to add a new property for the Zihintntl extension.

Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Jason Chien <jason.chien@sifive.com>
Message-ID: <20230726074049.19505-2-jason.chien@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_cfg.h | 1 +
 target/riscv/cpu.c | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
     bool ext_icbom;
     bool ext_icboz;
     bool ext_zicond;
+    bool ext_zihintntl;
     bool ext_zihintpause;
     bool ext_smstateen;
     bool ext_sstc;
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
     ISA_EXT_DATA_ENTRY(zicond, PRIV_VERSION_1_12_0, ext_zicond),
     ISA_EXT_DATA_ENTRY(zicsr, PRIV_VERSION_1_10_0, ext_icsr),
     ISA_EXT_DATA_ENTRY(zifencei, PRIV_VERSION_1_10_0, ext_ifencei),
+    ISA_EXT_DATA_ENTRY(zihintntl, PRIV_VERSION_1_10_0, ext_zihintntl),
     ISA_EXT_DATA_ENTRY(zihintpause, PRIV_VERSION_1_10_0, ext_zihintpause),
     ISA_EXT_DATA_ENTRY(zmmul, PRIV_VERSION_1_12_0, ext_zmmul),
     ISA_EXT_DATA_ENTRY(zawrs, PRIV_VERSION_1_12_0, ext_zawrs),
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
     DEFINE_PROP_BOOL("sscofpmf", RISCVCPU, cfg.ext_sscofpmf, false),
     DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
     DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
+    DEFINE_PROP_BOOL("Zihintntl", RISCVCPU, cfg.ext_zihintntl, true),
     DEFINE_PROP_BOOL("Zihintpause", RISCVCPU, cfg.ext_zihintpause, true),
     DEFINE_PROP_BOOL("Zawrs", RISCVCPU, cfg.ext_zawrs, true),
     DEFINE_PROP_BOOL("Zfa", RISCVCPU, cfg.ext_zfa, true),
--
2.41.0

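As the commit message notes, these hints are ordinary ADDs with rd = x0 (e.g. ADD x0, x0, x2), so a decoder can recognise them with plain R-type field extraction. A stand-alone sketch (not QEMU's decoder; `is_ntl_hint()` is an illustrative name) identifying the four NTL encodings ntl.p1/ntl.pall/ntl.s1/ntl.all, i.e. ADD x0, x0, x2..x5:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static bool is_ntl_hint(uint32_t insn)
{
    uint32_t opcode = insn & 0x7f;          /* OP major opcode = 0110011 */
    uint32_t rd = (insn >> 7) & 0x1f;
    uint32_t funct3 = (insn >> 12) & 0x7;
    uint32_t rs1 = (insn >> 15) & 0x1f;
    uint32_t rs2 = (insn >> 20) & 0x1f;
    uint32_t funct7 = insn >> 25;

    /* ADD x0, x0, x2..x5 encodes ntl.p1 / ntl.pall / ntl.s1 / ntl.all */
    return opcode == 0x33 && funct3 == 0 && funct7 == 0 &&
           rd == 0 && rs1 == 0 && rs2 >= 2 && rs2 <= 5;
}
```

Because rd is x0, an implementation that simply executes the ADD already gets the architecturally permitted no-op behaviour, which is why only a property flag is needed.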
From: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>

Commit a47842d ("riscv: Add support for the Zfa extension") implemented
the Zfa extension. However, it has some typos for fleq.d and fltq.d:
both of them misused the fltq.s helper function.

Fixes: a47842d ("riscv: Add support for the Zfa extension")
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Message-ID: <20230728003906.768-1-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvzfa.c.inc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvzfa.c.inc b/target/riscv/insn_trans/trans_rvzfa.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvzfa.c.inc
+++ b/target/riscv/insn_trans/trans_rvzfa.c.inc
@@ -XXX,XX +XXX,XX @@ bool trans_fleq_d(DisasContext *ctx, arg_fleq_d *a)
     TCGv_i64 src1 = get_fpr_hs(ctx, a->rs1);
     TCGv_i64 src2 = get_fpr_hs(ctx, a->rs2);
 
-    gen_helper_fltq_s(dest, cpu_env, src1, src2);
+    gen_helper_fleq_d(dest, cpu_env, src1, src2);
     gen_set_gpr(ctx, a->rd, dest);
     return true;
 }
@@ -XXX,XX +XXX,XX @@ bool trans_fltq_d(DisasContext *ctx, arg_fltq_d *a)
     TCGv_i64 src1 = get_fpr_hs(ctx, a->rs1);
     TCGv_i64 src2 = get_fpr_hs(ctx, a->rs2);
 
-    gen_helper_fltq_s(dest, cpu_env, src1, src2);
+    gen_helper_fltq_d(dest, cpu_env, src1, src2);
     gen_set_gpr(ctx, a->rd, dest);
     return true;
 }
--
2.41.0

From: Jason Chien <jason.chien@sifive.com>

When writing the upper mtime, we should keep the original lower mtime,
whose value is given by cpu_riscv_read_rtc() instead of
cpu_riscv_read_rtc_raw(). The same logic applies to writes to lower mtime.

Signed-off-by: Jason Chien <jason.chien@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20230728082502.26439-1-jason.chien@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/intc/riscv_aclint.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/intc/riscv_aclint.c b/hw/intc/riscv_aclint.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/riscv_aclint.c
+++ b/hw/intc/riscv_aclint.c
@@ -XXX,XX +XXX,XX @@ static void riscv_aclint_mtimer_write(void *opaque, hwaddr addr,
         return;
     } else if (addr == mtimer->time_base || addr == mtimer->time_base + 4) {
         uint64_t rtc_r = cpu_riscv_read_rtc_raw(mtimer->timebase_freq);
+        uint64_t rtc = cpu_riscv_read_rtc(mtimer);
 
         if (addr == mtimer->time_base) {
             if (size == 4) {
                 /* time_lo for RV32/RV64 */
-                mtimer->time_delta = ((rtc_r & ~0xFFFFFFFFULL) | value) - rtc_r;
+                mtimer->time_delta = ((rtc & ~0xFFFFFFFFULL) | value) - rtc_r;
             } else {
                 /* time for RV64 */
                 mtimer->time_delta = value - rtc_r;
@@ -XXX,XX +XXX,XX @@ static void riscv_aclint_mtimer_write(void *opaque, hwaddr addr,
         } else {
             if (size == 4) {
                 /* time_hi for RV32/RV64 */
-                mtimer->time_delta = (value << 32 | (rtc_r & 0xFFFFFFFF)) - rtc_r;
+                mtimer->time_delta = (value << 32 | (rtc & 0xFFFFFFFF)) - rtc_r;
             } else {
                 qemu_log_mask(LOG_GUEST_ERROR,
                               "aclint-mtimer: invalid time_hi write: %08x",
--
2.41.0

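The fix above can be checked in isolation: a 32-bit write must merge with the other half of the guest-visible time (`rtc`), while the stored delta stays relative to the raw host counter (`rtc_r`). A stand-alone sketch of the corrected computation (function names are illustrative, not QEMU API):

```c
#include <assert.h>
#include <stdint.h>

static uint64_t delta_for_lo_write(uint32_t value, uint64_t rtc, uint64_t rtc_r)
{
    /* keep the guest-visible upper 32 bits, replace the lower 32 bits */
    return ((rtc & ~0xFFFFFFFFULL) | value) - rtc_r;
}

static uint64_t delta_for_hi_write(uint32_t value, uint64_t rtc, uint64_t rtc_r)
{
    /* keep the guest-visible lower 32 bits, replace the upper 32 bits */
    return (((uint64_t)value << 32) | (rtc & 0xFFFFFFFFULL)) - rtc_r;
}
```

Adding the returned delta back onto `rtc_r` yields a guest time whose untouched half comes from `rtc`, which is the behaviour the patch restores.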
From: Jason Chien <jason.chien@sifive.com>

The variables whose values are given by cpu_riscv_read_rtc() should be
named "rtc". The variables whose values are given by
cpu_riscv_read_rtc_raw() should be named "rtc_r".

Signed-off-by: Jason Chien <jason.chien@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20230728082502.26439-2-jason.chien@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/intc/riscv_aclint.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/intc/riscv_aclint.c b/hw/intc/riscv_aclint.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/riscv_aclint.c
+++ b/hw/intc/riscv_aclint.c
@@ -XXX,XX +XXX,XX @@ static void riscv_aclint_mtimer_write_timecmp(RISCVAclintMTimerState *mtimer,
     uint64_t next;
     uint64_t diff;
 
-    uint64_t rtc_r = cpu_riscv_read_rtc(mtimer);
+    uint64_t rtc = cpu_riscv_read_rtc(mtimer);
 
     /* Compute the relative hartid w.r.t the socket */
     hartid = hartid - mtimer->hartid_base;
 
     mtimer->timecmp[hartid] = value;
-    if (mtimer->timecmp[hartid] <= rtc_r) {
+    if (mtimer->timecmp[hartid] <= rtc) {
         /*
          * If we're setting an MTIMECMP value in the "past",
          * immediately raise the timer interrupt
@@ -XXX,XX +XXX,XX @@ static void riscv_aclint_mtimer_write_timecmp(RISCVAclintMTimerState *mtimer,
 
     /* otherwise, set up the future timer interrupt */
     qemu_irq_lower(mtimer->timer_irqs[hartid]);
-    diff = mtimer->timecmp[hartid] - rtc_r;
+    diff = mtimer->timecmp[hartid] - rtc;
     /* back to ns (note args switched in muldiv64) */
     uint64_t ns_diff = muldiv64(diff, NANOSECONDS_PER_SECOND, timebase_freq);
--
2.41.0

From: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>

We should not use types dependent on the host architecture for
target_ucontext. This bug was found when running rv32 applications.

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20230811055438.1945-1-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 linux-user/riscv/signal.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/linux-user/riscv/signal.c b/linux-user/riscv/signal.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/riscv/signal.c
+++ b/linux-user/riscv/signal.c
@@ -XXX,XX +XXX,XX @@ struct target_sigcontext {
 }; /* cf. riscv-linux:arch/riscv/include/uapi/asm/ptrace.h */
 
 struct target_ucontext {
-    unsigned long uc_flags;
-    struct target_ucontext *uc_link;
+    abi_ulong uc_flags;
+    abi_ptr uc_link;
     target_stack_t uc_stack;
     target_sigset_t uc_sigmask;
     uint8_t __unused[1024 / 8 - sizeof(target_sigset_t)];
--
2.41.0

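The layout problem fixed above is easy to reproduce: on a 64-bit host, `unsigned long` and a host pointer are 8 bytes, while an rv32 guest expects 4-byte fields, so the leading members of `target_ucontext` shift everything that follows them. A stand-alone sketch (`abi_ulong32` and the struct names are illustrative; QEMU's `abi_ulong`/`abi_ptr` are sized per guest ABI):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t abi_ulong32; /* stand-in for rv32 abi_ulong / abi_ptr */

struct host_typed_head {      /* what the buggy struct started with */
    unsigned long uc_flags;
    struct host_typed_head *uc_link;
};

struct guest_typed_head {     /* what an rv32 guest actually expects */
    abi_ulong32 uc_flags;
    abi_ulong32 uc_link;
};
```

On an LP64 host the first struct occupies 16 bytes where the guest expects 8, so `uc_stack` and everything after it would be read at the wrong offsets.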
From: Yong-Xuan Wang <yongxuan.wang@sifive.com>

In this patch, we create the APLIC and IMSIC FDT helper functions and
remove M mode AIA devices when using KVM acceleration.

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20230727102439.22554-2-yongxuan.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/riscv/virt.c | 290 +++++++++++++++++++++++-------------------
 1 file changed, 137 insertions(+), 153 deletions(-)

diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -XXX,XX +XXX,XX @@ static uint32_t imsic_num_bits(uint32_t count)
90
target_ulong cause = cs->exception_index & RISCV_EXCP_INT_MASK;
91
- target_ulong deleg = async ? env->mideleg : env->medeleg;
92
+ uint64_t deleg = async ? env->mideleg : env->medeleg;
93
target_ulong tval = 0;
94
target_ulong htval = 0;
95
target_ulong mtval2 = 0;
96
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_do_interrupt(CPUState *cs)
97
cause < TARGET_LONG_BITS && ((deleg >> cause) & 1)) {
98
/* handle the trap in S-mode */
99
if (riscv_has_ext(env, RVH)) {
100
- target_ulong hdeleg = async ? env->hideleg : env->hedeleg;
101
+ uint64_t hdeleg = async ? env->hideleg : env->hedeleg;
102
103
if (riscv_cpu_virt_enabled(env) && ((hdeleg >> cause) & 1)) {
104
/* Trap to VS mode */
105
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/target/riscv/csr.c
108
+++ b/target/riscv/csr.c
109
@@ -XXX,XX +XXX,XX @@ static RISCVException any32(CPURISCVState *env, int csrno)
110
111
}
112
113
+static int aia_any32(CPURISCVState *env, int csrno)
114
+{
115
+ if (!riscv_feature(env, RISCV_FEATURE_AIA)) {
116
+ return RISCV_EXCP_ILLEGAL_INST;
117
+ }
118
+
119
+ return any32(env, csrno);
120
+}
121
+
122
static RISCVException smode(CPURISCVState *env, int csrno)
123
{
124
if (riscv_has_ext(env, RVS)) {
125
@@ -XXX,XX +XXX,XX @@ static RISCVException smode(CPURISCVState *env, int csrno)
126
return RISCV_EXCP_ILLEGAL_INST;
127
}
128
129
+static int smode32(CPURISCVState *env, int csrno)
130
+{
131
+ if (riscv_cpu_mxl(env) != MXL_RV32) {
132
+ return RISCV_EXCP_ILLEGAL_INST;
133
+ }
134
+
135
+ return smode(env, csrno);
136
+}
137
+
138
+static int aia_smode32(CPURISCVState *env, int csrno)
139
+{
140
+ if (!riscv_feature(env, RISCV_FEATURE_AIA)) {
141
+ return RISCV_EXCP_ILLEGAL_INST;
142
+ }
143
+
144
+ return smode32(env, csrno);
145
+}
146
+
147
static RISCVException hmode(CPURISCVState *env, int csrno)
148
{
149
if (riscv_has_ext(env, RVS) &&
150
@@ -XXX,XX +XXX,XX @@ static RISCVException pointer_masking(CPURISCVState *env, int csrno)
151
return RISCV_EXCP_ILLEGAL_INST;
152
}
153
154
+static int aia_hmode32(CPURISCVState *env, int csrno)
155
+{
156
+ if (!riscv_feature(env, RISCV_FEATURE_AIA)) {
157
+ return RISCV_EXCP_ILLEGAL_INST;
158
+ }
159
+
160
+ return hmode32(env, csrno);
161
+}
162
+
163
static RISCVException pmp(CPURISCVState *env, int csrno)
164
{
165
if (riscv_feature(env, RISCV_FEATURE_PMP)) {
166
@@ -XXX,XX +XXX,XX @@ static RISCVException read_timeh(CPURISCVState *env, int csrno,
167
168
/* Machine constants */
169
170
-#define M_MODE_INTERRUPTS (MIP_MSIP | MIP_MTIP | MIP_MEIP)
171
-#define S_MODE_INTERRUPTS (MIP_SSIP | MIP_STIP | MIP_SEIP)
172
-#define VS_MODE_INTERRUPTS (MIP_VSSIP | MIP_VSTIP | MIP_VSEIP)
173
-#define HS_MODE_INTERRUPTS (MIP_SGEIP | VS_MODE_INTERRUPTS)
174
+#define M_MODE_INTERRUPTS ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
175
+#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP))
176
+#define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
177
+#define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
178
179
-static const target_ulong delegable_ints = S_MODE_INTERRUPTS |
180
+static const uint64_t delegable_ints = S_MODE_INTERRUPTS |
181
VS_MODE_INTERRUPTS;
182
-static const target_ulong vs_delegable_ints = VS_MODE_INTERRUPTS;
183
-static const target_ulong all_ints = M_MODE_INTERRUPTS | S_MODE_INTERRUPTS |
184
+static const uint64_t vs_delegable_ints = VS_MODE_INTERRUPTS;
185
+static const uint64_t all_ints = M_MODE_INTERRUPTS | S_MODE_INTERRUPTS |
186
HS_MODE_INTERRUPTS;
187
#define DELEGABLE_EXCPS ((1ULL << (RISCV_EXCP_INST_ADDR_MIS)) | \
188
(1ULL << (RISCV_EXCP_INST_ACCESS_FAULT)) | \
189
@@ -XXX,XX +XXX,XX @@ static RISCVException write_medeleg(CPURISCVState *env, int csrno,
190
return RISCV_EXCP_NONE;
191
}
192
193
-static RISCVException read_mideleg(CPURISCVState *env, int csrno,
194
- target_ulong *val)
195
+static RISCVException rmw_mideleg64(CPURISCVState *env, int csrno,
196
+ uint64_t *ret_val,
197
+ uint64_t new_val, uint64_t wr_mask)
198
{
199
- *val = env->mideleg;
200
- return RISCV_EXCP_NONE;
201
-}
202
+ uint64_t mask = wr_mask & delegable_ints;
203
+
204
+ if (ret_val) {
205
+ *ret_val = env->mideleg;
206
+ }
207
+
208
+ env->mideleg = (env->mideleg & ~mask) | (new_val & mask);
209
210
-static RISCVException write_mideleg(CPURISCVState *env, int csrno,
211
- target_ulong val)
212
-{
213
- env->mideleg = (env->mideleg & ~delegable_ints) | (val & delegable_ints);
214
if (riscv_has_ext(env, RVH)) {
215
env->mideleg |= HS_MODE_INTERRUPTS;
216
}
217
+
218
return RISCV_EXCP_NONE;
219
}
220
221
-static RISCVException read_mie(CPURISCVState *env, int csrno,
222
- target_ulong *val)
223
+static RISCVException rmw_mideleg(CPURISCVState *env, int csrno,
224
+ target_ulong *ret_val,
225
+ target_ulong new_val, target_ulong wr_mask)
226
{
227
- *val = env->mie;
228
- return RISCV_EXCP_NONE;
229
+ uint64_t rval;
230
+ RISCVException ret;
231
+
232
+ ret = rmw_mideleg64(env, csrno, &rval, new_val, wr_mask);
233
+ if (ret_val) {
234
+ *ret_val = rval;
235
+ }
236
+
237
+ return ret;
238
}
239
240
-static RISCVException write_mie(CPURISCVState *env, int csrno,
241
- target_ulong val)
242
+static RISCVException rmw_midelegh(CPURISCVState *env, int csrno,
243
+ target_ulong *ret_val,
244
+ target_ulong new_val,
245
+ target_ulong wr_mask)
246
{
247
- env->mie = (env->mie & ~all_ints) | (val & all_ints);
248
+ uint64_t rval;
249
+ RISCVException ret;
250
+
251
+ ret = rmw_mideleg64(env, csrno, &rval,
252
+ ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
253
+ if (ret_val) {
254
+ *ret_val = rval >> 32;
255
+ }
256
+
257
+ return ret;
258
+}
259
+
260
+static RISCVException rmw_mie64(CPURISCVState *env, int csrno,
261
+ uint64_t *ret_val,
262
+ uint64_t new_val, uint64_t wr_mask)
263
+{
264
+ uint64_t mask = wr_mask & all_ints;
265
+
266
+ if (ret_val) {
267
+ *ret_val = env->mie;
268
+ }
269
+
270
+ env->mie = (env->mie & ~mask) | (new_val & mask);
271
+
272
if (!riscv_has_ext(env, RVH)) {
273
- env->mie &= ~MIP_SGEIP;
274
+ env->mie &= ~((uint64_t)MIP_SGEIP);
275
}
276
+
277
return RISCV_EXCP_NONE;
278
}
279
280
+static RISCVException rmw_mie(CPURISCVState *env, int csrno,
281
+ target_ulong *ret_val,
282
+ target_ulong new_val, target_ulong wr_mask)
283
+{
284
+ uint64_t rval;
285
+ RISCVException ret;
286
+
287
+ ret = rmw_mie64(env, csrno, &rval, new_val, wr_mask);
288
+ if (ret_val) {
289
+ *ret_val = rval;
290
+ }
291
+
292
+ return ret;
293
+}
294
+
295
+static RISCVException rmw_mieh(CPURISCVState *env, int csrno,
296
+ target_ulong *ret_val,
297
+ target_ulong new_val, target_ulong wr_mask)
298
+{
299
+ uint64_t rval;
300
+ RISCVException ret;
301
+
302
+ ret = rmw_mie64(env, csrno, &rval,
303
+ ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
304
+ if (ret_val) {
305
+ *ret_val = rval >> 32;
306
+ }
307
+
308
+ return ret;
309
+}
310
+
311
static RISCVException read_mtvec(CPURISCVState *env, int csrno,
312
target_ulong *val)
313
{
314
@@ -XXX,XX +XXX,XX @@ static RISCVException write_mtval(CPURISCVState *env, int csrno,
315
return RISCV_EXCP_NONE;
316
}
317
318
-static RISCVException rmw_mip(CPURISCVState *env, int csrno,
319
- target_ulong *ret_value,
320
- target_ulong new_value, target_ulong write_mask)
321
+static RISCVException rmw_mip64(CPURISCVState *env, int csrno,
322
+ uint64_t *ret_val,
323
+ uint64_t new_val, uint64_t wr_mask)
324
{
325
RISCVCPU *cpu = env_archcpu(env);
326
/* Allow software control of delegable interrupts not claimed by hardware */
327
- target_ulong mask = write_mask & delegable_ints & ~env->miclaim;
328
- uint32_t gin, old_mip;
329
+ uint64_t old_mip, mask = wr_mask & delegable_ints & ~env->miclaim;
330
+ uint32_t gin;
331
332
if (mask) {
333
- old_mip = riscv_cpu_update_mip(cpu, mask, (new_value & mask));
334
+ old_mip = riscv_cpu_update_mip(cpu, mask, (new_val & mask));
335
} else {
336
old_mip = env->mip;
337
}
338
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_mip(CPURISCVState *env, int csrno,
339
old_mip |= (env->hgeip & ((target_ulong)1 << gin)) ? MIP_VSEIP : 0;
340
}
341
342
- if (ret_value) {
343
- *ret_value = old_mip;
344
+ if (ret_val) {
345
+ *ret_val = old_mip;
346
}
347
348
return RISCV_EXCP_NONE;
349
}
350
351
+static RISCVException rmw_mip(CPURISCVState *env, int csrno,
352
+ target_ulong *ret_val,
353
+ target_ulong new_val, target_ulong wr_mask)
354
+{
355
+ uint64_t rval;
356
+ RISCVException ret;
357
+
358
+ ret = rmw_mip64(env, csrno, &rval, new_val, wr_mask);
359
+ if (ret_val) {
360
+ *ret_val = rval;
361
+ }
362
+
363
+ return ret;
364
+}
365
+
366
+static RISCVException rmw_miph(CPURISCVState *env, int csrno,
367
+ target_ulong *ret_val,
368
+ target_ulong new_val, target_ulong wr_mask)
369
+{
370
+ uint64_t rval;
371
+ RISCVException ret;
372
+
373
+ ret = rmw_mip64(env, csrno, &rval,
374
+ ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
375
+ if (ret_val) {
376
+ *ret_val = rval >> 32;
377
+ }
378
+
379
+ return ret;
380
+}
381
+
382
/* Supervisor Trap Setup */
383
static RISCVException read_sstatus_i128(CPURISCVState *env, int csrno,
384
Int128 *val)
385
@@ -XXX,XX +XXX,XX @@ static RISCVException write_sstatus(CPURISCVState *env, int csrno,
386
return write_mstatus(env, CSR_MSTATUS, newval);
387
}
388
389
-static RISCVException read_vsie(CPURISCVState *env, int csrno,
390
- target_ulong *val)
391
+static RISCVException rmw_vsie64(CPURISCVState *env, int csrno,
392
+ uint64_t *ret_val,
393
+ uint64_t new_val, uint64_t wr_mask)
394
{
395
- /* Shift the VS bits to their S bit location in vsie */
396
- *val = (env->mie & env->hideleg & VS_MODE_INTERRUPTS) >> 1;
397
- return RISCV_EXCP_NONE;
398
+ RISCVException ret;
399
+ uint64_t rval, vsbits, mask = env->hideleg & VS_MODE_INTERRUPTS;
400
+
401
+ /* Bring VS-level bits to correct position */
402
+ vsbits = new_val & (VS_MODE_INTERRUPTS >> 1);
403
+ new_val &= ~(VS_MODE_INTERRUPTS >> 1);
404
+ new_val |= vsbits << 1;
405
+ vsbits = wr_mask & (VS_MODE_INTERRUPTS >> 1);
406
+ wr_mask &= ~(VS_MODE_INTERRUPTS >> 1);
407
+ wr_mask |= vsbits << 1;
408
+
409
+ ret = rmw_mie64(env, csrno, &rval, new_val, wr_mask & mask);
410
+ if (ret_val) {
411
+ rval &= mask;
412
+ vsbits = rval & VS_MODE_INTERRUPTS;
413
+ rval &= ~VS_MODE_INTERRUPTS;
414
+ *ret_val = rval | (vsbits >> 1);
415
+ }
416
+
417
+ return ret;
418
}
419
420
-static RISCVException read_sie(CPURISCVState *env, int csrno,
421
- target_ulong *val)
422
+static RISCVException rmw_vsie(CPURISCVState *env, int csrno,
423
+ target_ulong *ret_val,
424
+ target_ulong new_val, target_ulong wr_mask)
425
{
426
- if (riscv_cpu_virt_enabled(env)) {
427
- read_vsie(env, CSR_VSIE, val);
428
- } else {
429
- *val = env->mie & env->mideleg;
430
+ uint64_t rval;
431
+ RISCVException ret;
432
+
433
+ ret = rmw_vsie64(env, csrno, &rval, new_val, wr_mask);
434
+ if (ret_val) {
435
+ *ret_val = rval;
436
}
437
- return RISCV_EXCP_NONE;
438
+
439
+ return ret;
440
}
441
442
-static RISCVException write_vsie(CPURISCVState *env, int csrno,
443
- target_ulong val)
444
+static RISCVException rmw_vsieh(CPURISCVState *env, int csrno,
445
+ target_ulong *ret_val,
446
+ target_ulong new_val, target_ulong wr_mask)
447
{
448
- /* Shift the S bits to their VS bit location in mie */
449
- target_ulong newval = (env->mie & ~VS_MODE_INTERRUPTS) |
450
- ((val << 1) & env->hideleg & VS_MODE_INTERRUPTS);
451
- return write_mie(env, CSR_MIE, newval);
452
+ uint64_t rval;
453
+ RISCVException ret;
454
+
455
+ ret = rmw_vsie64(env, csrno, &rval,
456
+ ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
457
+ if (ret_val) {
458
+ *ret_val = rval >> 32;
459
+ }
460
+
461
+ return ret;
462
}
463
464
-static int write_sie(CPURISCVState *env, int csrno, target_ulong val)
465
+static RISCVException rmw_sie64(CPURISCVState *env, int csrno,
466
+ uint64_t *ret_val,
467
+ uint64_t new_val, uint64_t wr_mask)
468
{
469
+ RISCVException ret;
470
+ uint64_t mask = env->mideleg & S_MODE_INTERRUPTS;
471
+
472
if (riscv_cpu_virt_enabled(env)) {
473
- write_vsie(env, CSR_VSIE, val);
474
+ ret = rmw_vsie64(env, CSR_VSIE, ret_val, new_val, wr_mask);
475
} else {
476
- target_ulong newval = (env->mie & ~S_MODE_INTERRUPTS) |
477
- (val & S_MODE_INTERRUPTS);
478
- write_mie(env, CSR_MIE, newval);
479
+ ret = rmw_mie64(env, csrno, ret_val, new_val, wr_mask & mask);
480
}
481
482
- return RISCV_EXCP_NONE;
483
+ if (ret_val) {
484
+ *ret_val &= mask;
485
+ }
486
+
487
+ return ret;
488
+}
489
+
490
+static RISCVException rmw_sie(CPURISCVState *env, int csrno,
491
+ target_ulong *ret_val,
492
+ target_ulong new_val, target_ulong wr_mask)
493
+{
494
+ uint64_t rval;
495
+ RISCVException ret;
496
+
497
+ ret = rmw_sie64(env, csrno, &rval, new_val, wr_mask);
498
+ if (ret_val) {
499
+ *ret_val = rval;
500
+ }
501
+
502
+ return ret;
503
+}
504
+
505
+static RISCVException rmw_sieh(CPURISCVState *env, int csrno,
506
+ target_ulong *ret_val,
507
+ target_ulong new_val, target_ulong wr_mask)
508
+{
509
+ uint64_t rval;
510
+ RISCVException ret;
511
+
512
+ ret = rmw_sie64(env, csrno, &rval,
513
+ ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
514
+ if (ret_val) {
515
+ *ret_val = rval >> 32;
516
+ }
517
+
518
+ return ret;
519
}
520
521
static RISCVException read_stvec(CPURISCVState *env, int csrno,
522
@@ -XXX,XX +XXX,XX @@ static RISCVException write_stval(CPURISCVState *env, int csrno,
523
return RISCV_EXCP_NONE;
524
}
525
526
+static RISCVException rmw_vsip64(CPURISCVState *env, int csrno,
527
+ uint64_t *ret_val,
528
+ uint64_t new_val, uint64_t wr_mask)
529
+{
530
+ RISCVException ret;
531
+ uint64_t rval, vsbits, mask = env->hideleg & vsip_writable_mask;
532
+
533
+ /* Bring VS-level bits to correct position */
534
+ vsbits = new_val & (VS_MODE_INTERRUPTS >> 1);
535
+ new_val &= ~(VS_MODE_INTERRUPTS >> 1);
536
+ new_val |= vsbits << 1;
537
+ vsbits = wr_mask & (VS_MODE_INTERRUPTS >> 1);
538
+ wr_mask &= ~(VS_MODE_INTERRUPTS >> 1);
539
+ wr_mask |= vsbits << 1;
540
+
541
+ ret = rmw_mip64(env, csrno, &rval, new_val, wr_mask & mask);
542
+ if (ret_val) {
543
+ rval &= mask;
544
+ vsbits = rval & VS_MODE_INTERRUPTS;
545
+ rval &= ~VS_MODE_INTERRUPTS;
546
+ *ret_val = rval | (vsbits >> 1);
547
+ }
548
+
549
+ return ret;
550
+}
551
+
552
static RISCVException rmw_vsip(CPURISCVState *env, int csrno,
553
- target_ulong *ret_value,
554
- target_ulong new_value, target_ulong write_mask)
555
+ target_ulong *ret_val,
556
+ target_ulong new_val, target_ulong wr_mask)
557
{
558
- /* Shift the S bits to their VS bit location in mip */
559
- int ret = rmw_mip(env, csrno, ret_value, new_value << 1,
560
- (write_mask << 1) & vsip_writable_mask & env->hideleg);
561
+ uint64_t rval;
562
+ RISCVException ret;
563
564
- if (ret_value) {
565
- *ret_value &= VS_MODE_INTERRUPTS;
566
- /* Shift the VS bits to their S bit location in vsip */
567
- *ret_value >>= 1;
568
+ ret = rmw_vsip64(env, csrno, &rval, new_val, wr_mask);
569
+ if (ret_val) {
570
+ *ret_val = rval;
571
}
572
+
573
return ret;
21
return ret;
574
}
22
}
575
23
576
-static RISCVException rmw_sip(CPURISCVState *env, int csrno,
24
-static void create_fdt_imsic(RISCVVirtState *s, const MemMapEntry *memmap,
577
- target_ulong *ret_value,
25
- uint32_t *phandle, uint32_t *intc_phandles,
578
- target_ulong new_value, target_ulong write_mask)
26
- uint32_t *msi_m_phandle, uint32_t *msi_s_phandle)
579
+static RISCVException rmw_vsiph(CPURISCVState *env, int csrno,
27
+static void create_fdt_one_imsic(RISCVVirtState *s, hwaddr base_addr,
580
+ target_ulong *ret_val,
28
+ uint32_t *intc_phandles, uint32_t msi_phandle,
581
+ target_ulong new_val, target_ulong wr_mask)
29
+ bool m_mode, uint32_t imsic_guest_bits)
582
{
30
{
583
- int ret;
31
int cpu, socket;
584
+ uint64_t rval;
32
char *imsic_name;
585
+ RISCVException ret;
33
MachineState *ms = MACHINE(s);
586
+
34
int socket_count = riscv_socket_count(ms);
587
+ ret = rmw_vsip64(env, csrno, &rval,
35
- uint32_t imsic_max_hart_per_socket, imsic_guest_bits;
588
+ ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
36
+ uint32_t imsic_max_hart_per_socket;
589
+ if (ret_val) {
37
uint32_t *imsic_cells, *imsic_regs, imsic_addr, imsic_size;
590
+ *ret_val = rval >> 32;
38
39
- *msi_m_phandle = (*phandle)++;
40
- *msi_s_phandle = (*phandle)++;
41
imsic_cells = g_new0(uint32_t, ms->smp.cpus * 2);
42
imsic_regs = g_new0(uint32_t, socket_count * 4);
43
44
- /* M-level IMSIC node */
45
for (cpu = 0; cpu < ms->smp.cpus; cpu++) {
46
imsic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
47
- imsic_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_M_EXT);
48
+ imsic_cells[cpu * 2 + 1] = cpu_to_be32(m_mode ? IRQ_M_EXT : IRQ_S_EXT);
49
}
50
- imsic_max_hart_per_socket = 0;
51
- for (socket = 0; socket < socket_count; socket++) {
52
- imsic_addr = memmap[VIRT_IMSIC_M].base +
53
- socket * VIRT_IMSIC_GROUP_MAX_SIZE;
54
- imsic_size = IMSIC_HART_SIZE(0) * s->soc[socket].num_harts;
55
- imsic_regs[socket * 4 + 0] = 0;
56
- imsic_regs[socket * 4 + 1] = cpu_to_be32(imsic_addr);
57
- imsic_regs[socket * 4 + 2] = 0;
58
- imsic_regs[socket * 4 + 3] = cpu_to_be32(imsic_size);
59
- if (imsic_max_hart_per_socket < s->soc[socket].num_harts) {
60
- imsic_max_hart_per_socket = s->soc[socket].num_harts;
61
- }
62
- }
63
- imsic_name = g_strdup_printf("/soc/imsics@%lx",
64
- (unsigned long)memmap[VIRT_IMSIC_M].base);
65
- qemu_fdt_add_subnode(ms->fdt, imsic_name);
66
- qemu_fdt_setprop_string(ms->fdt, imsic_name, "compatible",
67
- "riscv,imsics");
68
- qemu_fdt_setprop_cell(ms->fdt, imsic_name, "#interrupt-cells",
69
- FDT_IMSIC_INT_CELLS);
70
- qemu_fdt_setprop(ms->fdt, imsic_name, "interrupt-controller",
71
- NULL, 0);
72
- qemu_fdt_setprop(ms->fdt, imsic_name, "msi-controller",
73
- NULL, 0);
74
- qemu_fdt_setprop(ms->fdt, imsic_name, "interrupts-extended",
75
- imsic_cells, ms->smp.cpus * sizeof(uint32_t) * 2);
76
- qemu_fdt_setprop(ms->fdt, imsic_name, "reg", imsic_regs,
77
- socket_count * sizeof(uint32_t) * 4);
78
- qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,num-ids",
79
- VIRT_IRQCHIP_NUM_MSIS);
80
- if (socket_count > 1) {
81
- qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,hart-index-bits",
82
- imsic_num_bits(imsic_max_hart_per_socket));
83
- qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,group-index-bits",
84
- imsic_num_bits(socket_count));
85
- qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,group-index-shift",
86
- IMSIC_MMIO_GROUP_MIN_SHIFT);
87
- }
88
- qemu_fdt_setprop_cell(ms->fdt, imsic_name, "phandle", *msi_m_phandle);
89
-
90
- g_free(imsic_name);
91
92
- /* S-level IMSIC node */
93
- for (cpu = 0; cpu < ms->smp.cpus; cpu++) {
94
- imsic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
95
- imsic_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_S_EXT);
96
- }
97
- imsic_guest_bits = imsic_num_bits(s->aia_guests + 1);
98
imsic_max_hart_per_socket = 0;
99
for (socket = 0; socket < socket_count; socket++) {
100
- imsic_addr = memmap[VIRT_IMSIC_S].base +
101
- socket * VIRT_IMSIC_GROUP_MAX_SIZE;
102
+ imsic_addr = base_addr + socket * VIRT_IMSIC_GROUP_MAX_SIZE;
103
imsic_size = IMSIC_HART_SIZE(imsic_guest_bits) *
104
s->soc[socket].num_harts;
105
imsic_regs[socket * 4 + 0] = 0;
106
@@ -XXX,XX +XXX,XX @@ static void create_fdt_imsic(RISCVVirtState *s, const MemMapEntry *memmap,
107
imsic_max_hart_per_socket = s->soc[socket].num_harts;
108
}
109
}
110
- imsic_name = g_strdup_printf("/soc/imsics@%lx",
111
- (unsigned long)memmap[VIRT_IMSIC_S].base);
112
+
113
+ imsic_name = g_strdup_printf("/soc/imsics@%lx", (unsigned long)base_addr);
114
qemu_fdt_add_subnode(ms->fdt, imsic_name);
115
- qemu_fdt_setprop_string(ms->fdt, imsic_name, "compatible",
116
- "riscv,imsics");
117
+ qemu_fdt_setprop_string(ms->fdt, imsic_name, "compatible", "riscv,imsics");
118
qemu_fdt_setprop_cell(ms->fdt, imsic_name, "#interrupt-cells",
119
- FDT_IMSIC_INT_CELLS);
120
- qemu_fdt_setprop(ms->fdt, imsic_name, "interrupt-controller",
121
- NULL, 0);
122
- qemu_fdt_setprop(ms->fdt, imsic_name, "msi-controller",
123
- NULL, 0);
124
+ FDT_IMSIC_INT_CELLS);
125
+ qemu_fdt_setprop(ms->fdt, imsic_name, "interrupt-controller", NULL, 0);
126
+ qemu_fdt_setprop(ms->fdt, imsic_name, "msi-controller", NULL, 0);
127
qemu_fdt_setprop(ms->fdt, imsic_name, "interrupts-extended",
128
- imsic_cells, ms->smp.cpus * sizeof(uint32_t) * 2);
129
+ imsic_cells, ms->smp.cpus * sizeof(uint32_t) * 2);
130
qemu_fdt_setprop(ms->fdt, imsic_name, "reg", imsic_regs,
131
- socket_count * sizeof(uint32_t) * 4);
132
+ socket_count * sizeof(uint32_t) * 4);
133
qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,num-ids",
134
- VIRT_IRQCHIP_NUM_MSIS);
135
+ VIRT_IRQCHIP_NUM_MSIS);
136
+
137
if (imsic_guest_bits) {
138
qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,guest-index-bits",
139
- imsic_guest_bits);
140
+ imsic_guest_bits);
141
}
142
+
143
if (socket_count > 1) {
144
qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,hart-index-bits",
145
- imsic_num_bits(imsic_max_hart_per_socket));
146
+ imsic_num_bits(imsic_max_hart_per_socket));
147
qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,group-index-bits",
148
- imsic_num_bits(socket_count));
149
+ imsic_num_bits(socket_count));
150
qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,group-index-shift",
151
- IMSIC_MMIO_GROUP_MIN_SHIFT);
152
+ IMSIC_MMIO_GROUP_MIN_SHIFT);
153
}
154
- qemu_fdt_setprop_cell(ms->fdt, imsic_name, "phandle", *msi_s_phandle);
155
- g_free(imsic_name);
156
+ qemu_fdt_setprop_cell(ms->fdt, imsic_name, "phandle", msi_phandle);
157
158
+ g_free(imsic_name);
159
g_free(imsic_regs);
160
g_free(imsic_cells);
161
}
162
163
-static void create_fdt_socket_aplic(RISCVVirtState *s,
164
- const MemMapEntry *memmap, int socket,
165
- uint32_t msi_m_phandle,
166
- uint32_t msi_s_phandle,
167
- uint32_t *phandle,
168
- uint32_t *intc_phandles,
169
- uint32_t *aplic_phandles)
170
+static void create_fdt_imsic(RISCVVirtState *s, const MemMapEntry *memmap,
171
+ uint32_t *phandle, uint32_t *intc_phandles,
172
+ uint32_t *msi_m_phandle, uint32_t *msi_s_phandle)
173
+{
174
+ *msi_m_phandle = (*phandle)++;
175
+ *msi_s_phandle = (*phandle)++;
176
+
177
+ if (!kvm_enabled()) {
178
+ /* M-level IMSIC node */
179
+ create_fdt_one_imsic(s, memmap[VIRT_IMSIC_M].base, intc_phandles,
180
+ *msi_m_phandle, true, 0);
591
+ }
181
+ }
592
+
182
+
593
+ return ret;
183
+ /* S-level IMSIC node */
184
+ create_fdt_one_imsic(s, memmap[VIRT_IMSIC_S].base, intc_phandles,
185
+ *msi_s_phandle, false,
186
+ imsic_num_bits(s->aia_guests + 1));
187
+
594
+}
188
+}
595
+
189
+
596
+static RISCVException rmw_sip64(CPURISCVState *env, int csrno,
190
+static void create_fdt_one_aplic(RISCVVirtState *s, int socket,
597
+ uint64_t *ret_val,
191
+ unsigned long aplic_addr, uint32_t aplic_size,
598
+ uint64_t new_val, uint64_t wr_mask)
192
+ uint32_t msi_phandle,
193
+ uint32_t *intc_phandles,
194
+ uint32_t aplic_phandle,
195
+ uint32_t aplic_child_phandle,
196
+ bool m_mode)
197
{
198
int cpu;
199
char *aplic_name;
200
uint32_t *aplic_cells;
201
- unsigned long aplic_addr;
202
MachineState *ms = MACHINE(s);
203
- uint32_t aplic_m_phandle, aplic_s_phandle;
204
205
- aplic_m_phandle = (*phandle)++;
206
- aplic_s_phandle = (*phandle)++;
207
aplic_cells = g_new0(uint32_t, s->soc[socket].num_harts * 2);
208
209
- /* M-level APLIC node */
210
for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
211
aplic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
212
- aplic_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_M_EXT);
213
+ aplic_cells[cpu * 2 + 1] = cpu_to_be32(m_mode ? IRQ_M_EXT : IRQ_S_EXT);
214
}
215
- aplic_addr = memmap[VIRT_APLIC_M].base +
216
- (memmap[VIRT_APLIC_M].size * socket);
217
+
218
aplic_name = g_strdup_printf("/soc/aplic@%lx", aplic_addr);
219
qemu_fdt_add_subnode(ms->fdt, aplic_name);
220
qemu_fdt_setprop_string(ms->fdt, aplic_name, "compatible", "riscv,aplic");
221
qemu_fdt_setprop_cell(ms->fdt, aplic_name,
222
- "#interrupt-cells", FDT_APLIC_INT_CELLS);
223
+ "#interrupt-cells", FDT_APLIC_INT_CELLS);
224
qemu_fdt_setprop(ms->fdt, aplic_name, "interrupt-controller", NULL, 0);
225
+
226
if (s->aia_type == VIRT_AIA_TYPE_APLIC) {
227
qemu_fdt_setprop(ms->fdt, aplic_name, "interrupts-extended",
228
- aplic_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 2);
229
+ aplic_cells,
230
+ s->soc[socket].num_harts * sizeof(uint32_t) * 2);
231
} else {
232
- qemu_fdt_setprop_cell(ms->fdt, aplic_name, "msi-parent",
233
- msi_m_phandle);
234
+ qemu_fdt_setprop_cell(ms->fdt, aplic_name, "msi-parent", msi_phandle);
235
}
236
+
237
qemu_fdt_setprop_cells(ms->fdt, aplic_name, "reg",
238
- 0x0, aplic_addr, 0x0, memmap[VIRT_APLIC_M].size);
239
+ 0x0, aplic_addr, 0x0, aplic_size);
240
qemu_fdt_setprop_cell(ms->fdt, aplic_name, "riscv,num-sources",
241
- VIRT_IRQCHIP_NUM_SOURCES);
242
- qemu_fdt_setprop_cell(ms->fdt, aplic_name, "riscv,children",
243
- aplic_s_phandle);
244
- qemu_fdt_setprop_cells(ms->fdt, aplic_name, "riscv,delegate",
245
- aplic_s_phandle, 0x1, VIRT_IRQCHIP_NUM_SOURCES);
246
+ VIRT_IRQCHIP_NUM_SOURCES);
247
+
248
+ if (aplic_child_phandle) {
249
+ qemu_fdt_setprop_cell(ms->fdt, aplic_name, "riscv,children",
250
+ aplic_child_phandle);
251
+ qemu_fdt_setprop_cells(ms->fdt, aplic_name, "riscv,delegate",
252
+ aplic_child_phandle, 0x1,
253
+ VIRT_IRQCHIP_NUM_SOURCES);
254
+ }
255
+
256
riscv_socket_fdt_write_id(ms, aplic_name, socket);
257
- qemu_fdt_setprop_cell(ms->fdt, aplic_name, "phandle", aplic_m_phandle);
258
+ qemu_fdt_setprop_cell(ms->fdt, aplic_name, "phandle", aplic_phandle);
259
+
260
g_free(aplic_name);
261
+ g_free(aplic_cells);
262
+}
263
264
- /* S-level APLIC node */
265
- for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
266
- aplic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
267
- aplic_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_S_EXT);
268
+static void create_fdt_socket_aplic(RISCVVirtState *s,
269
+ const MemMapEntry *memmap, int socket,
270
+ uint32_t msi_m_phandle,
271
+ uint32_t msi_s_phandle,
272
+ uint32_t *phandle,
273
+ uint32_t *intc_phandles,
274
+ uint32_t *aplic_phandles)
599
+{
275
+{
600
+ RISCVException ret;
276
+ char *aplic_name;
601
+ uint64_t mask = env->mideleg & sip_writable_mask;
277
+ unsigned long aplic_addr;
602
278
+ MachineState *ms = MACHINE(s);
603
if (riscv_cpu_virt_enabled(env)) {
279
+ uint32_t aplic_m_phandle, aplic_s_phandle;
604
- ret = rmw_vsip(env, CSR_VSIP, ret_value, new_value, write_mask);
280
+
605
+ ret = rmw_vsip64(env, CSR_VSIP, ret_val, new_val, wr_mask);
281
+ aplic_m_phandle = (*phandle)++;
606
} else {
282
+ aplic_s_phandle = (*phandle)++;
607
- ret = rmw_mip(env, csrno, ret_value, new_value,
283
+
608
- write_mask & env->mideleg & sip_writable_mask);
284
+ if (!kvm_enabled()) {
609
+ ret = rmw_mip64(env, csrno, ret_val, new_val, wr_mask & mask);
285
+ /* M-level APLIC node */
610
}
286
+ aplic_addr = memmap[VIRT_APLIC_M].base +
611
287
+ (memmap[VIRT_APLIC_M].size * socket);
612
- if (ret_value) {
288
+ create_fdt_one_aplic(s, socket, aplic_addr, memmap[VIRT_APLIC_M].size,
613
- *ret_value &= env->mideleg & S_MODE_INTERRUPTS;
289
+ msi_m_phandle, intc_phandles,
614
+ if (ret_val) {
290
+ aplic_m_phandle, aplic_s_phandle,
615
+ *ret_val &= env->mideleg & S_MODE_INTERRUPTS;
291
+ true);
616
+ }
292
}
617
+
293
+
618
+ return ret;
294
+ /* S-level APLIC node */
619
+}
295
aplic_addr = memmap[VIRT_APLIC_S].base +
620
+
296
(memmap[VIRT_APLIC_S].size * socket);
621
+static RISCVException rmw_sip(CPURISCVState *env, int csrno,
297
+ create_fdt_one_aplic(s, socket, aplic_addr, memmap[VIRT_APLIC_S].size,
622
+ target_ulong *ret_val,
298
+ msi_s_phandle, intc_phandles,
623
+ target_ulong new_val, target_ulong wr_mask)
299
+ aplic_s_phandle, 0,
624
+{
300
+ false);
625
+ uint64_t rval;
301
+
626
+ RISCVException ret;
302
aplic_name = g_strdup_printf("/soc/aplic@%lx", aplic_addr);
627
+
303
- qemu_fdt_add_subnode(ms->fdt, aplic_name);
628
+ ret = rmw_sip64(env, csrno, &rval, new_val, wr_mask);
304
- qemu_fdt_setprop_string(ms->fdt, aplic_name, "compatible", "riscv,aplic");
629
+ if (ret_val) {
305
- qemu_fdt_setprop_cell(ms->fdt, aplic_name,
630
+ *ret_val = rval;
306
- "#interrupt-cells", FDT_APLIC_INT_CELLS);
631
}
307
- qemu_fdt_setprop(ms->fdt, aplic_name, "interrupt-controller", NULL, 0);
632
+
308
- if (s->aia_type == VIRT_AIA_TYPE_APLIC) {
633
+ return ret;
309
- qemu_fdt_setprop(ms->fdt, aplic_name, "interrupts-extended",
634
+}
310
- aplic_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 2);
635
+
311
- } else {
636
+static RISCVException rmw_siph(CPURISCVState *env, int csrno,
312
- qemu_fdt_setprop_cell(ms->fdt, aplic_name, "msi-parent",
637
+ target_ulong *ret_val,
313
- msi_s_phandle);
638
+ target_ulong new_val, target_ulong wr_mask)
314
- }
639
+{
315
- qemu_fdt_setprop_cells(ms->fdt, aplic_name, "reg",
640
+ uint64_t rval;
316
- 0x0, aplic_addr, 0x0, memmap[VIRT_APLIC_S].size);
641
+ RISCVException ret;
317
- qemu_fdt_setprop_cell(ms->fdt, aplic_name, "riscv,num-sources",
642
+
318
- VIRT_IRQCHIP_NUM_SOURCES);
643
+ ret = rmw_sip64(env, csrno, &rval,
319
- riscv_socket_fdt_write_id(ms, aplic_name, socket);
644
+ ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
320
- qemu_fdt_setprop_cell(ms->fdt, aplic_name, "phandle", aplic_s_phandle);
645
+ if (ret_val) {
321
646
+ *ret_val = rval >> 32;
322
if (!socket) {
647
+ }
323
platform_bus_add_all_fdt_nodes(ms->fdt, aplic_name,
648
+
324
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aplic(RISCVVirtState *s,
649
return ret;
325
326
g_free(aplic_name);
327
328
- g_free(aplic_cells);
329
aplic_phandles[socket] = aplic_s_phandle;
650
}
330
}
651
331
652
@@ -XXX,XX +XXX,XX @@ static RISCVException write_hedeleg(CPURISCVState *env, int csrno,
332
@@ -XXX,XX +XXX,XX @@ static DeviceState *virt_create_aia(RISCVVirtAIAType aia_type, int aia_guests,
653
return RISCV_EXCP_NONE;
int i;
hwaddr addr;
uint32_t guest_bits;
- DeviceState *aplic_m;
- bool msimode = (aia_type == VIRT_AIA_TYPE_APLIC_IMSIC) ? true : false;
+ DeviceState *aplic_s = NULL;
+ DeviceState *aplic_m = NULL;
+ bool msimode = aia_type == VIRT_AIA_TYPE_APLIC_IMSIC;
if (msimode) {
- /* Per-socket M-level IMSICs */
- addr = memmap[VIRT_IMSIC_M].base + socket * VIRT_IMSIC_GROUP_MAX_SIZE;
- for (i = 0; i < hart_count; i++) {
- riscv_imsic_create(addr + i * IMSIC_HART_SIZE(0),
- base_hartid + i, true, 1,
- VIRT_IRQCHIP_NUM_MSIS);
+ if (!kvm_enabled()) {
+ /* Per-socket M-level IMSICs */
+ addr = memmap[VIRT_IMSIC_M].base +
+ socket * VIRT_IMSIC_GROUP_MAX_SIZE;
+ for (i = 0; i < hart_count; i++) {
+ riscv_imsic_create(addr + i * IMSIC_HART_SIZE(0),
+ base_hartid + i, true, 1,
+ VIRT_IRQCHIP_NUM_MSIS);
+ }
}
/* Per-socket S-level IMSICs */
@@ -XXX,XX +XXX,XX @@ static DeviceState *virt_create_aia(RISCVVirtAIAType aia_type, int aia_guests,
}
}
- /* Per-socket M-level APLIC */
- aplic_m = riscv_aplic_create(
- memmap[VIRT_APLIC_M].base + socket * memmap[VIRT_APLIC_M].size,
- memmap[VIRT_APLIC_M].size,
- (msimode) ? 0 : base_hartid,
- (msimode) ? 0 : hart_count,
- VIRT_IRQCHIP_NUM_SOURCES,
- VIRT_IRQCHIP_NUM_PRIO_BITS,
- msimode, true, NULL);
-
- if (aplic_m) {
- /* Per-socket S-level APLIC */
- riscv_aplic_create(
- memmap[VIRT_APLIC_S].base + socket * memmap[VIRT_APLIC_S].size,
- memmap[VIRT_APLIC_S].size,
- (msimode) ? 0 : base_hartid,
- (msimode) ? 0 : hart_count,
- VIRT_IRQCHIP_NUM_SOURCES,
- VIRT_IRQCHIP_NUM_PRIO_BITS,
- msimode, false, aplic_m);
+ if (!kvm_enabled()) {
+ /* Per-socket M-level APLIC */
+ aplic_m = riscv_aplic_create(memmap[VIRT_APLIC_M].base +
+ socket * memmap[VIRT_APLIC_M].size,
+ memmap[VIRT_APLIC_M].size,
+ (msimode) ? 0 : base_hartid,
+ (msimode) ? 0 : hart_count,
+ VIRT_IRQCHIP_NUM_SOURCES,
+ VIRT_IRQCHIP_NUM_PRIO_BITS,
+ msimode, true, NULL);
}
- return aplic_m;
+ /* Per-socket S-level APLIC */
+ aplic_s = riscv_aplic_create(memmap[VIRT_APLIC_S].base +
+ socket * memmap[VIRT_APLIC_S].size,
+ memmap[VIRT_APLIC_S].size,
+ (msimode) ? 0 : base_hartid,
+ (msimode) ? 0 : hart_count,
+ VIRT_IRQCHIP_NUM_SOURCES,
+ VIRT_IRQCHIP_NUM_PRIO_BITS,
+ msimode, false, aplic_m);
+
+ return kvm_enabled() ? aplic_s : aplic_m;
}
}
-static RISCVException read_hideleg(CPURISCVState *env, int csrno,
static void create_platform_bus(RISCVVirtState *s, DeviceState *irqchip)
- target_ulong *val)
+static RISCVException rmw_hideleg64(CPURISCVState *env, int csrno,
+ uint64_t *ret_val,
+ uint64_t new_val, uint64_t wr_mask)
{
- *val = env->hideleg;
+ uint64_t mask = wr_mask & vs_delegable_ints;
+
+ if (ret_val) {
+ *ret_val = env->hideleg & vs_delegable_ints;
+ }
+
+ env->hideleg = (env->hideleg & ~mask) | (new_val & mask);
return RISCV_EXCP_NONE;
}
-static RISCVException write_hideleg(CPURISCVState *env, int csrno,
- target_ulong val)
+static RISCVException rmw_hideleg(CPURISCVState *env, int csrno,
+ target_ulong *ret_val,
+ target_ulong new_val, target_ulong wr_mask)
{
- env->hideleg = val & vs_delegable_ints;
- return RISCV_EXCP_NONE;
+ uint64_t rval;
+ RISCVException ret;
+
+ ret = rmw_hideleg64(env, csrno, &rval, new_val, wr_mask);
+ if (ret_val) {
+ *ret_val = rval;
+ }
+
+ return ret;
+}
+
+static RISCVException rmw_hidelegh(CPURISCVState *env, int csrno,
+ target_ulong *ret_val,
+ target_ulong new_val, target_ulong wr_mask)
+{
+ uint64_t rval;
+ RISCVException ret;
+
+ ret = rmw_hideleg64(env, csrno, &rval,
+ ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
+ if (ret_val) {
+ *ret_val = rval >> 32;
+ }
+
+ return ret;
+}
+
+static RISCVException rmw_hvip64(CPURISCVState *env, int csrno,
+ uint64_t *ret_val,
+ uint64_t new_val, uint64_t wr_mask)
+{
+ RISCVException ret;
+
+ ret = rmw_mip64(env, csrno, ret_val, new_val,
+ wr_mask & hvip_writable_mask);
+ if (ret_val) {
+ *ret_val &= VS_MODE_INTERRUPTS;
+ }
+
+ return ret;
}
static RISCVException rmw_hvip(CPURISCVState *env, int csrno,
- target_ulong *ret_value,
- target_ulong new_value, target_ulong write_mask)
+ target_ulong *ret_val,
+ target_ulong new_val, target_ulong wr_mask)
{
- int ret = rmw_mip(env, csrno, ret_value, new_value,
- write_mask & hvip_writable_mask);
+ uint64_t rval;
+ RISCVException ret;
- if (ret_value) {
- *ret_value &= VS_MODE_INTERRUPTS;
+ ret = rmw_hvip64(env, csrno, &rval, new_val, wr_mask);
+ if (ret_val) {
+ *ret_val = rval;
+ }
+
+ return ret;
+}
+
+static RISCVException rmw_hviph(CPURISCVState *env, int csrno,
+ target_ulong *ret_val,
+ target_ulong new_val, target_ulong wr_mask)
+{
+ uint64_t rval;
+ RISCVException ret;
+
+ ret = rmw_hvip64(env, csrno, &rval,
+ ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
+ if (ret_val) {
+ *ret_val = rval >> 32;
}
+
return ret;
}
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_hip(CPURISCVState *env, int csrno,
return ret;
}
-static RISCVException read_hie(CPURISCVState *env, int csrno,
- target_ulong *val)
+static RISCVException rmw_hie(CPURISCVState *env, int csrno,
+ target_ulong *ret_val,
+ target_ulong new_val, target_ulong wr_mask)
{
- *val = env->mie & HS_MODE_INTERRUPTS;
- return RISCV_EXCP_NONE;
-}
+ uint64_t rval;
+ RISCVException ret;
-static RISCVException write_hie(CPURISCVState *env, int csrno,
- target_ulong val)
-{
- target_ulong newval = (env->mie & ~HS_MODE_INTERRUPTS) | (val & HS_MODE_INTERRUPTS);
- return write_mie(env, CSR_MIE, newval);
+ ret = rmw_mie64(env, csrno, &rval, new_val, wr_mask & HS_MODE_INTERRUPTS);
+ if (ret_val) {
+ *ret_val = rval & HS_MODE_INTERRUPTS;
+ }
+
+ return ret;
}
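rmw_hie above treats hie as a masked alias of mie: the write mask is filtered through HS_MODE_INTERRUPTS on the way in, and the returned value on the way out. The masking idea in isolation, as a standalone sketch (the mask value here is illustrative only, not necessarily QEMU's HS_MODE_INTERRUPTS):

```c
#include <stdint.h>

/* Illustrative HS-level interrupt mask; the real HS_MODE_INTERRUPTS
 * value lives in QEMU's headers. */
#define HS_MASK 0x1222u

static uint64_t mie_reg; /* stand-in for env->mie */

/* An aliased CSR view: only bits inside the fixed mask are visible,
 * and only those bits are writable, as in rmw_hie above. */
static void rmw_alias(uint64_t *ret_val, uint64_t new_val, uint64_t wr_mask)
{
    uint64_t m = wr_mask & HS_MASK;

    if (ret_val) {
        *ret_val = mie_reg & HS_MASK;
    }
    mie_reg = (mie_reg & ~m) | (new_val & m);
}
```

Bits of mie outside the mask can neither be observed nor modified through the alias.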
static RISCVException read_hcounteren(CPURISCVState *env, int csrno,
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
read_mstatus_i128 },
[CSR_MISA] = { "misa", any, read_misa, write_misa, NULL,
read_misa_i128 },
- [CSR_MIDELEG] = { "mideleg", any, read_mideleg, write_mideleg },
+ [CSR_MIDELEG] = { "mideleg", any, NULL, NULL, rmw_mideleg },
[CSR_MEDELEG] = { "medeleg", any, read_medeleg, write_medeleg },
- [CSR_MIE] = { "mie", any, read_mie, write_mie },
+ [CSR_MIE] = { "mie", any, NULL, NULL, rmw_mie },
[CSR_MTVEC] = { "mtvec", any, read_mtvec, write_mtvec },
[CSR_MCOUNTEREN] = { "mcounteren", any, read_mcounteren, write_mcounteren },
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_MTVAL] = { "mtval", any, read_mtval, write_mtval },
[CSR_MIP] = { "mip", any, NULL, NULL, rmw_mip },
+ /* Machine-Level High-Half CSRs (AIA) */
+ [CSR_MIDELEGH] = { "midelegh", aia_any32, NULL, NULL, rmw_midelegh },
+ [CSR_MIEH] = { "mieh", aia_any32, NULL, NULL, rmw_mieh },
+ [CSR_MIPH] = { "miph", aia_any32, NULL, NULL, rmw_miph },
+
/* Supervisor Trap Setup */
[CSR_SSTATUS] = { "sstatus", smode, read_sstatus, write_sstatus, NULL,
read_sstatus_i128 },
- [CSR_SIE] = { "sie", smode, read_sie, write_sie },
+ [CSR_SIE] = { "sie", smode, NULL, NULL, rmw_sie },
[CSR_STVEC] = { "stvec", smode, read_stvec, write_stvec },
[CSR_SCOUNTEREN] = { "scounteren", smode, read_scounteren, write_scounteren },
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
/* Supervisor Protection and Translation */
[CSR_SATP] = { "satp", smode, read_satp, write_satp },
+ /* Supervisor-Level High-Half CSRs (AIA) */
+ [CSR_SIEH] = { "sieh", aia_smode32, NULL, NULL, rmw_sieh },
+ [CSR_SIPH] = { "siph", aia_smode32, NULL, NULL, rmw_siph },
+
[CSR_HSTATUS] = { "hstatus", hmode, read_hstatus, write_hstatus },
[CSR_HEDELEG] = { "hedeleg", hmode, read_hedeleg, write_hedeleg },
- [CSR_HIDELEG] = { "hideleg", hmode, read_hideleg, write_hideleg },
+ [CSR_HIDELEG] = { "hideleg", hmode, NULL, NULL, rmw_hideleg },
[CSR_HVIP] = { "hvip", hmode, NULL, NULL, rmw_hvip },
[CSR_HIP] = { "hip", hmode, NULL, NULL, rmw_hip },
- [CSR_HIE] = { "hie", hmode, read_hie, write_hie },
+ [CSR_HIE] = { "hie", hmode, NULL, NULL, rmw_hie },
[CSR_HCOUNTEREN] = { "hcounteren", hmode, read_hcounteren, write_hcounteren },
[CSR_HGEIE] = { "hgeie", hmode, read_hgeie, write_hgeie },
[CSR_HTVAL] = { "htval", hmode, read_htval, write_htval },
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_VSSTATUS] = { "vsstatus", hmode, read_vsstatus, write_vsstatus },
[CSR_VSIP] = { "vsip", hmode, NULL, NULL, rmw_vsip },
- [CSR_VSIE] = { "vsie", hmode, read_vsie, write_vsie },
+ [CSR_VSIE] = { "vsie", hmode, NULL, NULL, rmw_vsie },
[CSR_VSTVEC] = { "vstvec", hmode, read_vstvec, write_vstvec },
[CSR_VSSCRATCH] = { "vsscratch", hmode, read_vsscratch, write_vsscratch },
[CSR_VSEPC] = { "vsepc", hmode, read_vsepc, write_vsepc },
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_MTVAL2] = { "mtval2", hmode, read_mtval2, write_mtval2 },
[CSR_MTINST] = { "mtinst", hmode, read_mtinst, write_mtinst },
+ /* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */
+ [CSR_HIDELEGH] = { "hidelegh", aia_hmode32, NULL, NULL, rmw_hidelegh },
+ [CSR_HVIPH] = { "hviph", aia_hmode32, NULL, NULL, rmw_hviph },
+ [CSR_VSIEH] = { "vsieh", aia_hmode32, NULL, NULL, rmw_vsieh },
+ [CSR_VSIPH] = { "vsiph", aia_hmode32, NULL, NULL, rmw_vsiph },
+
/* Physical Memory Protection */
[CSR_MSECCFG] = { "mseccfg", epmp, read_mseccfg, write_mseccfg },
[CSR_PMPCFG0] = { "pmpcfg0", pmp, read_pmpcfg, write_pmpcfg },
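The table changes above replace read/write pairs with a single rmw callback in the fifth slot of each csr_ops entry. A standalone miniature of how such a dispatch layer can prefer the rmw op and otherwise compose a read-modify-write from the read/write pair (names simplified; QEMU's riscv_csrrw also threads CPU state and predicate checks through this path):

```c
#include <stddef.h>
#include <stdint.h>

typedef int (*read_fn)(uint32_t *val);
typedef int (*write_fn)(uint32_t val);
typedef int (*rmw_fn)(uint32_t *old, uint32_t new_val, uint32_t wr_mask);

struct csr_op {
    read_fn read;
    write_fn write;
    rmw_fn rmw;   /* the "fifth slot": one callback instead of a pair */
};

static int csrrw(const struct csr_op *op, uint32_t *old,
                 uint32_t new_val, uint32_t wr_mask)
{
    if (op->rmw) {
        return op->rmw(old, new_val, wr_mask);
    }

    /* Fallback: compose rmw from the read/write pair. */
    uint32_t v = 0;
    int ret = op->read(&v);
    if (ret) {
        return ret;
    }
    if (wr_mask) {
        ret = op->write((v & ~wr_mask) | (new_val & wr_mask));
        if (ret) {
            return ret;
        }
    }
    if (old) {
        *old = v;
    }
    return 0;
}

/* A trivial backing register to exercise the fallback path. */
static uint32_t demo_reg;
static int demo_read(uint32_t *v) { *v = demo_reg; return 0; }
static int demo_write(uint32_t v) { demo_reg = v; return 0; }
```

A zero write mask turns the access into a pure read, which is why entries with only a rmw callback still serve plain CSR reads.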
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/machine.c
+++ b/target/riscv/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_hyper = {
.fields = (VMStateField[]) {
VMSTATE_UINTTL(env.hstatus, RISCVCPU),
VMSTATE_UINTTL(env.hedeleg, RISCVCPU),
- VMSTATE_UINTTL(env.hideleg, RISCVCPU),
+ VMSTATE_UINT64(env.hideleg, RISCVCPU),
VMSTATE_UINTTL(env.hcounteren, RISCVCPU),
VMSTATE_UINTTL(env.htval, RISCVCPU),
VMSTATE_UINTTL(env.htinst, RISCVCPU),
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_riscv_cpu = {
VMSTATE_UINTTL(env.resetvec, RISCVCPU),
VMSTATE_UINTTL(env.mhartid, RISCVCPU),
VMSTATE_UINT64(env.mstatus, RISCVCPU),
- VMSTATE_UINTTL(env.mip, RISCVCPU),
- VMSTATE_UINT32(env.miclaim, RISCVCPU),
- VMSTATE_UINTTL(env.mie, RISCVCPU),
- VMSTATE_UINTTL(env.mideleg, RISCVCPU),
+ VMSTATE_UINT64(env.mip, RISCVCPU),
+ VMSTATE_UINT64(env.miclaim, RISCVCPU),
+ VMSTATE_UINT64(env.mie, RISCVCPU),
+ VMSTATE_UINT64(env.mideleg, RISCVCPU),
VMSTATE_UINTTL(env.satp, RISCVCPU),
VMSTATE_UINTTL(env.stval, RISCVCPU),
VMSTATE_UINTTL(env.medeleg, RISCVCPU),
--
2.34.1
--
2.41.0
From: Guo Ren <ren_guo@c-sky.com>

The highest bits of the PTE are now used for svpbmt, ref: [1], [2], so we
need to ignore them. They cannot be a part of the ppn.

1: The RISC-V Instruction Set Manual, Volume II: Privileged Architecture
   4.4 Sv39: Page-Based 39-bit Virtual-Memory System
   4.5 Sv48: Page-Based 48-bit Virtual-Memory System

2: https://github.com/riscv/virtual-memory/blob/main/specs/663-Svpbmt-diff.pdf

Signed-off-by: Guo Ren <ren_guo@c-sky.com>
Reviewed-by: Liu Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Cc: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220204022658.18097-2-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/cpu.h | 15 +++++++++++++++
target/riscv/cpu_bits.h | 3 +++
target/riscv/cpu_helper.c | 13 ++++++++++++-
3 files changed, 30 insertions(+), 1 deletion(-)

From: Yong-Xuan Wang <yongxuan.wang@sifive.com>

We check the in-kernel irqchip support when using KVM acceleration.

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20230727102439.22554-3-yongxuan.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/kvm.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
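The svpbmt change masks the attribute bits out of the PTE before extracting the PPN, and reports a translation failure when those bits are set without the extension. A standalone sketch of that check (PTE_PPN_MASK and PTE_PPN_SHIFT values copied from the patch; TRANSLATE_FAIL replaced by -1):

```c
#include <stdint.h>

#define PTE_PPN_SHIFT 10
#define PTE_PPN_MASK  0x3FFFFFFFFFFC00ULL /* PPN field for RV64, from the patch */

/* With Svpbmt/Svnapot the bits above the PPN field carry attributes and
 * must be masked off; without those extensions, any such bit set makes
 * the PTE invalid. */
static int pte_to_ppn(uint64_t pte, int have_attr_ext, uint64_t *ppn)
{
    if (have_attr_ext) {
        *ppn = (pte & PTE_PPN_MASK) >> PTE_PPN_SHIFT;
        return 0;
    }
    *ppn = pte >> PTE_PPN_SHIFT;
    if ((pte & ~PTE_PPN_MASK) >> PTE_PPN_SHIFT) {
        return -1; /* TRANSLATE_FAIL in the real code */
    }
    return 0;
}
```

Note that the low permission/flag bits are below PTE_PPN_SHIFT, so they never reach the fault check.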
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
15
diff --git a/target/riscv/kvm.c b/target/riscv/kvm.c
26
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
27
--- a/target/riscv/cpu.h
17
--- a/target/riscv/kvm.c
28
+++ b/target/riscv/cpu.h
18
+++ b/target/riscv/kvm.c
29
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
19
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init(MachineState *ms, KVMState *s)
30
bool ext_counters;
20
31
bool ext_ifencei;
21
int kvm_arch_irqchip_create(KVMState *s)
32
bool ext_icsr;
22
{
33
+ bool ext_svnapot;
23
- return 0;
34
+ bool ext_svpbmt;
24
+ if (kvm_kernel_irqchip_split()) {
35
bool ext_zfh;
25
+ error_report("-machine kernel_irqchip=split is not supported on RISC-V.");
36
bool ext_zfhmin;
26
+ exit(1);
37
bool ext_zve32f;
27
+ }
38
@@ -XXX,XX +XXX,XX @@ static inline int riscv_cpu_xlen(CPURISCVState *env)
28
+
39
return 16 << env->xl;
29
+ /*
30
+ * We can create the VAIA using the newer device control API.
31
+ */
32
+ return kvm_check_extension(s, KVM_CAP_DEVICE_CTRL);
40
}
33
}
41
34
42
+#ifdef TARGET_RISCV32
35
int kvm_arch_process_async_events(CPUState *cs)
43
+#define riscv_cpu_sxl(env) ((void)(env), MXL_RV32)
44
+#else
45
+static inline RISCVMXL riscv_cpu_sxl(CPURISCVState *env)
46
+{
47
+#ifdef CONFIG_USER_ONLY
48
+ return env->misa_mxl;
49
+#else
50
+ return get_field(env->mstatus, MSTATUS64_SXL);
51
+#endif
52
+}
53
+#endif
54
+
55
/*
56
* Encode LMUL to lmul as follows:
57
* LMUL vlmul lmul
58
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/riscv/cpu_bits.h
61
+++ b/target/riscv/cpu_bits.h
62
@@ -XXX,XX +XXX,XX @@ typedef enum {
63
/* Page table PPN shift amount */
64
#define PTE_PPN_SHIFT 10
65
66
+/* Page table PPN mask */
67
+#define PTE_PPN_MASK 0x3FFFFFFFFFFC00ULL
68
+
69
/* Leaf page shift amount */
70
#define PGSHIFT 12
71
72
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/target/riscv/cpu_helper.c
75
+++ b/target/riscv/cpu_helper.c
76
@@ -XXX,XX +XXX,XX @@ static int get_physical_address(CPURISCVState *env, hwaddr *physical,
77
MemTxAttrs attrs = MEMTXATTRS_UNSPECIFIED;
78
int mode = mmu_idx & TB_FLAGS_PRIV_MMU_MASK;
79
bool use_background = false;
80
+ hwaddr ppn;
81
+ RISCVCPU *cpu = env_archcpu(env);
82
83
/*
84
* Check if we should use the background registers for the two
85
@@ -XXX,XX +XXX,XX @@ restart:
86
return TRANSLATE_FAIL;
87
}
88
89
- hwaddr ppn = pte >> PTE_PPN_SHIFT;
90
+ if (riscv_cpu_sxl(env) == MXL_RV32) {
91
+ ppn = pte >> PTE_PPN_SHIFT;
92
+ } else if (cpu->cfg.ext_svpbmt || cpu->cfg.ext_svnapot) {
93
+ ppn = (pte & (target_ulong)PTE_PPN_MASK) >> PTE_PPN_SHIFT;
94
+ } else {
95
+ ppn = pte >> PTE_PPN_SHIFT;
96
+ if ((pte & ~(target_ulong)PTE_PPN_MASK) >> PTE_PPN_SHIFT) {
97
+ return TRANSLATE_FAIL;
98
+ }
99
+ }
100
101
if (!(pte & PTE_V)) {
102
/* Invalid PTE */
103
--
2.34.1
--
2.41.0
From: Anup Patel <anup.patel@wdc.com>

The AIA specification defines [m|s|vs]iselect and [m|s|vs]ireg CSRs
which allow indirect access to interrupt priority arrays and per-HART
IMSIC registers. This patch implements the AIA xiselect and xireg CSRs.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-15-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/cpu.h | 7 ++
target/riscv/csr.c | 177 +++++++++++++++++++++++++++++++++++++++++
target/riscv/machine.c | 3 +
3 files changed, 187 insertions(+)

From: Yong-Xuan Wang <yongxuan.wang@sifive.com>

We create a vAIA chip by using KVM_DEV_TYPE_RISCV_AIA and then set up
the chip with the KVM_DEV_RISCV_AIA_GRP_* APIs.
We also extend the KVM accelerator to specify the KVM AIA mode. The
"riscv-aia" parameter is passed along with --accel on the QEMU command
line.
1) "riscv-aia=emul": the IMSIC is emulated by the hypervisor
2) "riscv-aia=hwaccel": use the hardware guest IMSIC
3) "riscv-aia=auto": use the hardware guest IMSICs whenever available,
   otherwise fall back to software emulation.

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20230727102439.22554-4-yongxuan.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/kvm_riscv.h | 4 +
target/riscv/kvm.c | 186 +++++++++++++++++++++++++++++++++++++++
2 files changed, 190 insertions(+)
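Several of the KVM AIA configuration values in the patch below (socket_bits, guest_bits, hart_bits) are derived as "number of bits needed to represent n", via the find_last_bit(&n, BITS_PER_LONG) + 1 idiom. The same computation in standalone form (hypothetical helper name, with the guest_num == 0 special case folded in):

```c
#include <stdint.h>

/* bits_for(n): smallest bit width that can hold the value n.
 * Equivalent to find_last_bit(&n, BITS_PER_LONG) + 1 for n > 0,
 * and 0 for n == 0 (mirroring the guest_bits special case). */
static unsigned bits_for(uint64_t n)
{
    unsigned b = 0;

    while (b < 64 && (n >> b)) {
        b++;
    }
    return b;
}
```

For example, a machine with up to 4 harts per socket needs a 3-bit hart index field, since hart id 4 itself must be representable.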
17
22
18
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
23
diff --git a/target/riscv/kvm_riscv.h b/target/riscv/kvm_riscv.h
19
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
20
--- a/target/riscv/cpu.h
25
--- a/target/riscv/kvm_riscv.h
21
+++ b/target/riscv/cpu.h
26
+++ b/target/riscv/kvm_riscv.h
22
@@ -XXX,XX +XXX,XX @@ struct CPURISCVState {
27
@@ -XXX,XX +XXX,XX @@
23
uint8_t miprio[64];
28
void kvm_riscv_init_user_properties(Object *cpu_obj);
24
uint8_t siprio[64];
29
void kvm_riscv_reset_vcpu(RISCVCPU *cpu);
25
30
void kvm_riscv_set_irq(RISCVCPU *cpu, int irq, int level);
26
+ /* AIA CSRs */
31
+void kvm_riscv_aia_create(MachineState *machine, uint64_t group_shift,
27
+ target_ulong miselect;
32
+ uint64_t aia_irq_num, uint64_t aia_msi_num,
28
+ target_ulong siselect;
33
+ uint64_t aplic_base, uint64_t imsic_base,
29
+
34
+ uint64_t guest_num);
30
/* Hypervisor CSRs */
35
31
target_ulong hstatus;
36
#endif
32
target_ulong hedeleg;
37
diff --git a/target/riscv/kvm.c b/target/riscv/kvm.c
33
@@ -XXX,XX +XXX,XX @@ struct CPURISCVState {
34
target_ulong vstval;
35
target_ulong vsatp;
36
37
+ /* AIA VS-mode CSRs */
38
+ target_ulong vsiselect;
39
+
40
target_ulong mtval2;
41
target_ulong mtinst;
42
43
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
44
index XXXXXXX..XXXXXXX 100644
38
index XXXXXXX..XXXXXXX 100644
45
--- a/target/riscv/csr.c
39
--- a/target/riscv/kvm.c
46
+++ b/target/riscv/csr.c
40
+++ b/target/riscv/kvm.c
47
@@ -XXX,XX +XXX,XX @@ static int read_mtopi(CPURISCVState *env, int csrno, target_ulong *val)
41
@@ -XXX,XX +XXX,XX @@
48
return RISCV_EXCP_NONE;
42
#include "exec/address-spaces.h"
43
#include "hw/boards.h"
44
#include "hw/irq.h"
45
+#include "hw/intc/riscv_imsic.h"
46
#include "qemu/log.h"
47
#include "hw/loader.h"
48
#include "kvm_riscv.h"
49
@@ -XXX,XX +XXX,XX @@
50
#include "chardev/char-fe.h"
51
#include "migration/migration.h"
52
#include "sysemu/runstate.h"
53
+#include "hw/riscv/numa.h"
54
55
static uint64_t kvm_riscv_reg_id(CPURISCVState *env, uint64_t type,
56
uint64_t idx)
57
@@ -XXX,XX +XXX,XX @@ bool kvm_arch_cpu_check_are_resettable(void)
58
return true;
49
}
59
}
50
60
51
+static int aia_xlate_vs_csrno(CPURISCVState *env, int csrno)
61
+static int aia_mode;
52
+{
62
+
53
+ if (!riscv_cpu_virt_enabled(env)) {
63
+static const char *kvm_aia_mode_str(uint64_t mode)
54
+ return csrno;
64
+{
55
+ }
65
+ switch (mode) {
56
+
66
+ case KVM_DEV_RISCV_AIA_MODE_EMUL:
57
+ switch (csrno) {
67
+ return "emul";
58
+ case CSR_SISELECT:
68
+ case KVM_DEV_RISCV_AIA_MODE_HWACCEL:
59
+ return CSR_VSISELECT;
69
+ return "hwaccel";
60
+ case CSR_SIREG:
70
+ case KVM_DEV_RISCV_AIA_MODE_AUTO:
61
+ return CSR_VSIREG;
62
+ default:
71
+ default:
63
+ return csrno;
72
+ return "auto";
64
+ };
73
+ };
65
+}
74
+}
66
+
75
+
67
+static int rmw_xiselect(CPURISCVState *env, int csrno, target_ulong *val,
76
+static char *riscv_get_kvm_aia(Object *obj, Error **errp)
68
+ target_ulong new_val, target_ulong wr_mask)
77
+{
69
+{
78
+ return g_strdup(kvm_aia_mode_str(aia_mode));
70
+ target_ulong *iselect;
79
+}
71
+
80
+
72
+ /* Translate CSR number for VS-mode */
81
+static void riscv_set_kvm_aia(Object *obj, const char *val, Error **errp)
73
+ csrno = aia_xlate_vs_csrno(env, csrno);
82
+{
74
+
83
+ if (!strcmp(val, "emul")) {
75
+ /* Find the iselect CSR based on CSR number */
84
+ aia_mode = KVM_DEV_RISCV_AIA_MODE_EMUL;
76
+ switch (csrno) {
85
+ } else if (!strcmp(val, "hwaccel")) {
77
+ case CSR_MISELECT:
86
+ aia_mode = KVM_DEV_RISCV_AIA_MODE_HWACCEL;
78
+ iselect = &env->miselect;
87
+ } else if (!strcmp(val, "auto")) {
79
+ break;
88
+ aia_mode = KVM_DEV_RISCV_AIA_MODE_AUTO;
80
+ case CSR_SISELECT:
89
+ } else {
81
+ iselect = &env->siselect;
90
+ error_setg(errp, "Invalid KVM AIA mode");
82
+ break;
91
+ error_append_hint(errp, "Valid values are emul, hwaccel, and auto.\n");
83
+ case CSR_VSISELECT:
92
+ }
84
+ iselect = &env->vsiselect;
93
+}
85
+ break;
94
+
86
+ default:
95
void kvm_arch_accel_class_init(ObjectClass *oc)
87
+ return RISCV_EXCP_ILLEGAL_INST;
96
{
88
+ };
97
+ object_class_property_add_str(oc, "riscv-aia", riscv_get_kvm_aia,
89
+
98
+ riscv_set_kvm_aia);
90
+ if (val) {
99
+ object_class_property_set_description(oc, "riscv-aia",
91
+ *val = *iselect;
100
+ "Set KVM AIA mode. Valid values are "
92
+ }
101
+ "emul, hwaccel, and auto. Default "
93
+
102
+ "is auto.");
94
+ wr_mask &= ISELECT_MASK;
103
+ object_property_set_default_str(object_class_property_find(oc, "riscv-aia"),
95
+ if (wr_mask) {
104
+ "auto");
96
+ *iselect = (*iselect & ~wr_mask) | (new_val & wr_mask);
105
+}
97
+ }
106
+
98
+
107
+void kvm_riscv_aia_create(MachineState *machine, uint64_t group_shift,
99
+ return RISCV_EXCP_NONE;
108
+ uint64_t aia_irq_num, uint64_t aia_msi_num,
100
+}
109
+ uint64_t aplic_base, uint64_t imsic_base,
101
+
110
+ uint64_t guest_num)
102
+static int rmw_iprio(target_ulong xlen,
111
+{
103
+ target_ulong iselect, uint8_t *iprio,
112
+ int ret, i;
104
+ target_ulong *val, target_ulong new_val,
113
+ int aia_fd = -1;
105
+ target_ulong wr_mask, int ext_irq_no)
114
+ uint64_t default_aia_mode;
106
+{
115
+ uint64_t socket_count = riscv_socket_count(machine);
107
+ int i, firq, nirqs;
116
+ uint64_t max_hart_per_socket = 0;
108
+ target_ulong old_val;
117
+ uint64_t socket, base_hart, hart_count, socket_imsic_base, imsic_addr;
109
+
118
+ uint64_t socket_bits, hart_bits, guest_bits;
110
+ if (iselect < ISELECT_IPRIO0 || ISELECT_IPRIO15 < iselect) {
119
+
111
+ return -EINVAL;
120
+ aia_fd = kvm_create_device(kvm_state, KVM_DEV_TYPE_RISCV_AIA, false);
112
+ }
121
+
113
+ if (xlen != 32 && iselect & 0x1) {
122
+ if (aia_fd < 0) {
114
+ return -EINVAL;
123
+ error_report("Unable to create in-kernel irqchip");
115
+ }
124
+ exit(1);
116
+
125
+ }
117
+ nirqs = 4 * (xlen / 32);
126
+
118
+ firq = ((iselect - ISELECT_IPRIO0) / (xlen / 32)) * (nirqs);
127
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
119
+
128
+ KVM_DEV_RISCV_AIA_CONFIG_MODE,
120
+ old_val = 0;
129
+ &default_aia_mode, false, NULL);
121
+ for (i = 0; i < nirqs; i++) {
130
+ if (ret < 0) {
122
+ old_val |= ((target_ulong)iprio[firq + i]) << (IPRIO_IRQ_BITS * i);
131
+ error_report("KVM AIA: failed to get current KVM AIA mode");
123
+ }
132
+ exit(1);
124
+
133
+ }
125
+ if (val) {
134
+ qemu_log("KVM AIA: default mode is %s\n",
126
+ *val = old_val;
135
+ kvm_aia_mode_str(default_aia_mode));
127
+ }
136
+
128
+
137
+ if (default_aia_mode != aia_mode) {
129
+ if (wr_mask) {
138
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
130
+ new_val = (old_val & ~wr_mask) | (new_val & wr_mask);
139
+ KVM_DEV_RISCV_AIA_CONFIG_MODE,
131
+ for (i = 0; i < nirqs; i++) {
140
+ &aia_mode, true, NULL);
132
+ /*
141
+ if (ret < 0)
133
+ * M-level and S-level external IRQ priority always read-only
142
+ warn_report("KVM AIA: failed to set KVM AIA mode");
134
+ * zero. This means default priority order is always preferred
143
+ else
135
+ * for M-level and S-level external IRQs.
144
+ qemu_log("KVM AIA: set current mode to %s\n",
136
+ */
145
+ kvm_aia_mode_str(aia_mode));
137
+ if ((firq + i) == ext_irq_no) {
146
+ }
138
+ continue;
147
+
148
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
149
+ KVM_DEV_RISCV_AIA_CONFIG_SRCS,
150
+ &aia_irq_num, true, NULL);
151
+ if (ret < 0) {
152
+ error_report("KVM AIA: failed to set number of input irq lines");
153
+ exit(1);
154
+ }
155
+
156
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
157
+ KVM_DEV_RISCV_AIA_CONFIG_IDS,
158
+ &aia_msi_num, true, NULL);
159
+ if (ret < 0) {
160
+ error_report("KVM AIA: failed to set number of msi");
161
+ exit(1);
162
+ }
163
+
164
+ socket_bits = find_last_bit(&socket_count, BITS_PER_LONG) + 1;
165
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
166
+ KVM_DEV_RISCV_AIA_CONFIG_GROUP_BITS,
167
+ &socket_bits, true, NULL);
168
+ if (ret < 0) {
169
+ error_report("KVM AIA: failed to set group_bits");
170
+ exit(1);
171
+ }
172
+
173
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
174
+ KVM_DEV_RISCV_AIA_CONFIG_GROUP_SHIFT,
175
+ &group_shift, true, NULL);
176
+ if (ret < 0) {
177
+ error_report("KVM AIA: failed to set group_shift");
178
+ exit(1);
179
+ }
180
+
181
+ guest_bits = guest_num == 0 ? 0 :
182
+ find_last_bit(&guest_num, BITS_PER_LONG) + 1;
183
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
184
+ KVM_DEV_RISCV_AIA_CONFIG_GUEST_BITS,
185
+ &guest_bits, true, NULL);
186
+ if (ret < 0) {
187
+ error_report("KVM AIA: failed to set guest_bits");
188
+ exit(1);
189
+ }
190
+
191
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_ADDR,
192
+ KVM_DEV_RISCV_AIA_ADDR_APLIC,
193
+ &aplic_base, true, NULL);
194
+ if (ret < 0) {
195
+ error_report("KVM AIA: failed to set the base address of APLIC");
196
+ exit(1);
197
+ }
198
+
199
+ for (socket = 0; socket < socket_count; socket++) {
200
+ socket_imsic_base = imsic_base + socket * (1U << group_shift);
201
+ hart_count = riscv_socket_hart_count(machine, socket);
202
+ base_hart = riscv_socket_first_hartid(machine, socket);
203
+
204
+ if (max_hart_per_socket < hart_count) {
205
+ max_hart_per_socket = hart_count;
206
+ }
207
+
208
+ for (i = 0; i < hart_count; i++) {
209
+ imsic_addr = socket_imsic_base + i * IMSIC_HART_SIZE(guest_bits);
210
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_ADDR,
211
+ KVM_DEV_RISCV_AIA_ADDR_IMSIC(i + base_hart),
212
+ &imsic_addr, true, NULL);
213
+ if (ret < 0) {
214
+ error_report("KVM AIA: failed to set the IMSIC address for hart %d", i);
215
+ exit(1);
139
+ }
216
+ }
140
+ iprio[firq + i] = (new_val >> (IPRIO_IRQ_BITS * i)) & 0xff;
141
+ }
217
+ }
142
+ }
218
+ }
143
+
219
+
144
+ return 0;
220
+ hart_bits = find_last_bit(&max_hart_per_socket, BITS_PER_LONG) + 1;
145
+}
221
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
146
+
222
+ KVM_DEV_RISCV_AIA_CONFIG_HART_BITS,
147
+static int rmw_xireg(CPURISCVState *env, int csrno, target_ulong *val,
223
+ &hart_bits, true, NULL);
148
+ target_ulong new_val, target_ulong wr_mask)
224
+ if (ret < 0) {
149
+{
225
+ error_report("KVM AIA: failed to set hart_bits");
150
+ bool virt;
226
+ exit(1);
151
+ uint8_t *iprio;
227
+ }
152
+ int ret = -EINVAL;
228
+
153
+ target_ulong priv, isel, vgein;
229
+ if (kvm_has_gsi_routing()) {
154
+
230
+ for (uint64_t idx = 0; idx < aia_irq_num + 1; ++idx) {
155
+ /* Translate CSR number for VS-mode */
231
+ /* KVM AIA only has one APLIC instance */
156
+ csrno = aia_xlate_vs_csrno(env, csrno);
232
+ kvm_irqchip_add_irq_route(kvm_state, idx, 0, idx);
157
+
158
+ /* Decode register details from CSR number */
159
+ virt = false;
160
+ switch (csrno) {
161
+ case CSR_MIREG:
162
+ iprio = env->miprio;
163
+ isel = env->miselect;
164
+ priv = PRV_M;
165
+ break;
166
+ case CSR_SIREG:
167
+ iprio = env->siprio;
168
+ isel = env->siselect;
169
+ priv = PRV_S;
170
+ break;
171
+ case CSR_VSIREG:
172
+ iprio = env->hviprio;
173
+ isel = env->vsiselect;
174
+ priv = PRV_S;
175
+ virt = true;
176
+ break;
177
+ default:
178
+ goto done;
179
+ };
180
+
181
+ /* Find the selected guest interrupt file */
182
+ vgein = (virt) ? get_field(env->hstatus, HSTATUS_VGEIN) : 0;
183
+
184
+ if (ISELECT_IPRIO0 <= isel && isel <= ISELECT_IPRIO15) {
185
+ /* Local interrupt priority registers not available for VS-mode */
186
+ if (!virt) {
187
+ ret = rmw_iprio(riscv_cpu_mxl_bits(env),
188
+ isel, iprio, val, new_val, wr_mask,
189
+ (priv == PRV_M) ? IRQ_M_EXT : IRQ_S_EXT);
190
+ }
233
+ }
191
+ } else if (ISELECT_IMSIC_FIRST <= isel && isel <= ISELECT_IMSIC_LAST) {
234
+ kvm_gsi_routing_allowed = true;
192
+ /* IMSIC registers only available when machine implements it. */
235
+ kvm_irqchip_commit_routes(kvm_state);
193
+ if (env->aia_ireg_rmw_fn[priv]) {
236
+ }
194
+ /* Selected guest interrupt file should not be zero */
237
+
195
+ if (virt && (!vgein || env->geilen < vgein)) {
238
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CTRL,
196
+ goto done;
239
+ KVM_DEV_RISCV_AIA_CTRL_INIT,
197
+ }
240
+ NULL, true, NULL);
198
+ /* Call machine specific IMSIC register emulation */
241
+ if (ret < 0) {
199
+                                         val, new_val, wr_mask);
+        }
+    }
+
+done:
+    if (ret) {
+        return (riscv_cpu_virt_enabled(env) && virt) ?
+               RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
+    }
+    return RISCV_EXCP_NONE;
+}
+
 static RISCVException read_mtvec(CPURISCVState *env, int csrno,
                                  target_ulong *val)
 {
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_MTVAL] = { "mtval", any, read_mtval, write_mtval },
     [CSR_MIP] = { "mip", any, NULL, NULL, rmw_mip },
 
+    /* Machine-Level Window to Indirectly Accessed Registers (AIA) */
+    [CSR_MISELECT] = { "miselect", aia_any, NULL, NULL, rmw_xiselect },
+    [CSR_MIREG] = { "mireg", aia_any, NULL, NULL, rmw_xireg },
+
     /* Machine-Level Interrupts (AIA) */
     [CSR_MTOPI] = { "mtopi", aia_any, read_mtopi },
 
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     /* Supervisor Protection and Translation */
     [CSR_SATP] = { "satp", smode, read_satp, write_satp },
 
+    /* Supervisor-Level Window to Indirectly Accessed Registers (AIA) */
+    [CSR_SISELECT] = { "siselect", aia_smode, NULL, NULL, rmw_xiselect },
+    [CSR_SIREG] = { "sireg", aia_smode, NULL, NULL, rmw_xireg },
+
     /* Supervisor-Level Interrupts (AIA) */
     [CSR_STOPI] = { "stopi", aia_smode, read_stopi },
 
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_HVIPRIO1] = { "hviprio1", aia_hmode, read_hviprio1, write_hviprio1 },
     [CSR_HVIPRIO2] = { "hviprio2", aia_hmode, read_hviprio2, write_hviprio2 },
 
+    /*
+     * VS-Level Window to Indirectly Accessed Registers (H-extension with AIA)
+     */
+    [CSR_VSISELECT] = { "vsiselect", aia_hmode, NULL, NULL, rmw_xiselect },
+    [CSR_VSIREG] = { "vsireg", aia_hmode, NULL, NULL, rmw_xireg },
+
     /* VS-Level Interrupts (H-extension with AIA) */
     [CSR_VSTOPI] = { "vstopi", aia_hmode, read_vstopi },
 
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/machine.c
+++ b/target/riscv/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_hyper = {
     VMSTATE_UINTTL(env.vscause, RISCVCPU),
     VMSTATE_UINTTL(env.vstval, RISCVCPU),
     VMSTATE_UINTTL(env.vsatp, RISCVCPU),
+    VMSTATE_UINTTL(env.vsiselect, RISCVCPU),
 
     VMSTATE_UINTTL(env.mtval2, RISCVCPU),
     VMSTATE_UINTTL(env.mtinst, RISCVCPU),
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_riscv_cpu = {
     VMSTATE_UINTTL(env.mepc, RISCVCPU),
     VMSTATE_UINTTL(env.mcause, RISCVCPU),
     VMSTATE_UINTTL(env.mtval, RISCVCPU),
+    VMSTATE_UINTTL(env.miselect, RISCVCPU),
+    VMSTATE_UINTTL(env.siselect, RISCVCPU),
     VMSTATE_UINTTL(env.scounteren, RISCVCPU),
     VMSTATE_UINTTL(env.mcounteren, RISCVCPU),
     VMSTATE_UINTTL(env.sscratch, RISCVCPU),
-- 
2.34.1

+        error_report("KVM AIA: initialized fail");
+        exit(1);
+    }
+
+    kvm_msi_via_irqfd_allowed = kvm_irqfds_enabled();
 }
-- 
2.41.0
diff view generated by jsdifflib
From: Anup Patel <anup.patel@wdc.com>

The AIA specification introduces new [m|s|vs]topi CSRs for
reporting pending local IRQ number and associated IRQ priority.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-14-anup@brainfault.org
[ Changed by AF:
 - Fixup indentation
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/csr.c | 156 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 156 insertions(+)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static int smode32(CPURISCVState *env, int csrno)
     return smode(env, csrno);
 }
 
+static int aia_smode(CPURISCVState *env, int csrno)
+{
+    if (!riscv_feature(env, RISCV_FEATURE_AIA)) {
+        return RISCV_EXCP_ILLEGAL_INST;
+    }
+
+    return smode(env, csrno);
+}
+
 static int aia_smode32(CPURISCVState *env, int csrno)
 {
     if (!riscv_feature(env, RISCV_FEATURE_AIA)) {
@@ -XXX,XX +XXX,XX @@ static RISCVException read_timeh(CPURISCVState *env, int csrno,
 #define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
 #define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
 
+#define VSTOPI_NUM_SRCS 5
+
 static const uint64_t delegable_ints = S_MODE_INTERRUPTS |
                                        VS_MODE_INTERRUPTS;
 static const uint64_t vs_delegable_ints = VS_MODE_INTERRUPTS;
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_mieh(CPURISCVState *env, int csrno,
     return ret;
 }
 
+static int read_mtopi(CPURISCVState *env, int csrno, target_ulong *val)
+{
+    int irq;
+    uint8_t iprio;
+
+    irq = riscv_cpu_mirq_pending(env);
+    if (irq <= 0 || irq > 63) {
+        *val = 0;
+    } else {
+        iprio = env->miprio[irq];
+        if (!iprio) {
+            if (riscv_cpu_default_priority(irq) > IPRIO_DEFAULT_M) {
+                iprio = IPRIO_MMAXIPRIO;
+            }
+        }
+        *val = (irq & TOPI_IID_MASK) << TOPI_IID_SHIFT;
+        *val |= iprio;
+    }
+
+    return RISCV_EXCP_NONE;
+}
+
 static RISCVException read_mtvec(CPURISCVState *env, int csrno,
                                  target_ulong *val)
 {
@@ -XXX,XX +XXX,XX @@ static RISCVException write_satp(CPURISCVState *env, int csrno,
     return RISCV_EXCP_NONE;
 }
 
+static int read_vstopi(CPURISCVState *env, int csrno, target_ulong *val)
+{
+    int irq, ret;
+    target_ulong topei;
+    uint64_t vseip, vsgein;
+    uint32_t iid, iprio, hviid, hviprio, gein;
+    uint32_t s, scount = 0, siid[VSTOPI_NUM_SRCS], siprio[VSTOPI_NUM_SRCS];
+
+    gein = get_field(env->hstatus, HSTATUS_VGEIN);
+    hviid = get_field(env->hvictl, HVICTL_IID);
+    hviprio = get_field(env->hvictl, HVICTL_IPRIO);
+
+    if (gein) {
+        vsgein = (env->hgeip & (1ULL << gein)) ? MIP_VSEIP : 0;
+        vseip = env->mie & (env->mip | vsgein) & MIP_VSEIP;
+        if (gein <= env->geilen && vseip) {
+            siid[scount] = IRQ_S_EXT;
+            siprio[scount] = IPRIO_MMAXIPRIO + 1;
+            if (env->aia_ireg_rmw_fn[PRV_S]) {
+                /*
+                 * Call machine specific IMSIC register emulation for
+                 * reading TOPEI.
+                 */
+                ret = env->aia_ireg_rmw_fn[PRV_S](
+                        env->aia_ireg_rmw_fn_arg[PRV_S],
+                        AIA_MAKE_IREG(ISELECT_IMSIC_TOPEI, PRV_S, true, gein,
+                                      riscv_cpu_mxl_bits(env)),
+                        &topei, 0, 0);
+                if (!ret && topei) {
+                    siprio[scount] = topei & IMSIC_TOPEI_IPRIO_MASK;
+                }
+            }
+            scount++;
+        }
+    } else {
+        if (hviid == IRQ_S_EXT && hviprio) {
+            siid[scount] = IRQ_S_EXT;
+            siprio[scount] = hviprio;
+            scount++;
+        }
+    }
+
+    if (env->hvictl & HVICTL_VTI) {
+        if (hviid != IRQ_S_EXT) {
+            siid[scount] = hviid;
+            siprio[scount] = hviprio;
+            scount++;
+        }
+    } else {
+        irq = riscv_cpu_vsirq_pending(env);
+        if (irq != IRQ_S_EXT && 0 < irq && irq <= 63) {
+            siid[scount] = irq;
+            siprio[scount] = env->hviprio[irq];
+            scount++;
+        }
+    }
+
+    iid = 0;
+    iprio = UINT_MAX;
+    for (s = 0; s < scount; s++) {
+        if (siprio[s] < iprio) {
+            iid = siid[s];
+            iprio = siprio[s];
+        }
+    }
+
+    if (iid) {
+        if (env->hvictl & HVICTL_IPRIOM) {
+            if (iprio > IPRIO_MMAXIPRIO) {
+                iprio = IPRIO_MMAXIPRIO;
+            }
+            if (!iprio) {
+                if (riscv_cpu_default_priority(iid) > IPRIO_DEFAULT_S) {
+                    iprio = IPRIO_MMAXIPRIO;
+                }
+            }
+        } else {
+            iprio = 1;
+        }
+    } else {
+        iprio = 0;
+    }
+
+    *val = (iid & TOPI_IID_MASK) << TOPI_IID_SHIFT;
+    *val |= iprio;
+    return RISCV_EXCP_NONE;
+}
+
+static int read_stopi(CPURISCVState *env, int csrno, target_ulong *val)
+{
+    int irq;
+    uint8_t iprio;
+
+    if (riscv_cpu_virt_enabled(env)) {
+        return read_vstopi(env, CSR_VSTOPI, val);
+    }
+
+    irq = riscv_cpu_sirq_pending(env);
+    if (irq <= 0 || irq > 63) {
+        *val = 0;
+    } else {
+        iprio = env->siprio[irq];
+        if (!iprio) {
+            if (riscv_cpu_default_priority(irq) > IPRIO_DEFAULT_S) {
+                iprio = IPRIO_MMAXIPRIO;
+            }
+        }
+        *val = (irq & TOPI_IID_MASK) << TOPI_IID_SHIFT;
+        *val |= iprio;
+    }
+
+    return RISCV_EXCP_NONE;
+}
+
 /* Hypervisor Extensions */
 static RISCVException read_hstatus(CPURISCVState *env, int csrno,
                                    target_ulong *val)
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_MTVAL] = { "mtval", any, read_mtval, write_mtval },
     [CSR_MIP] = { "mip", any, NULL, NULL, rmw_mip },
 
+    /* Machine-Level Interrupts (AIA) */
+    [CSR_MTOPI] = { "mtopi", aia_any, read_mtopi },
+
     /* Virtual Interrupts for Supervisor Level (AIA) */
     [CSR_MVIEN] = { "mvien", aia_any, read_zero, write_ignore },
     [CSR_MVIP] = { "mvip", aia_any, read_zero, write_ignore },
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     /* Supervisor Protection and Translation */
     [CSR_SATP] = { "satp", smode, read_satp, write_satp },
 
+    /* Supervisor-Level Interrupts (AIA) */
+    [CSR_STOPI] = { "stopi", aia_smode, read_stopi },
+
     /* Supervisor-Level High-Half CSRs (AIA) */
     [CSR_SIEH] = { "sieh", aia_smode32, NULL, NULL, rmw_sieh },
     [CSR_SIPH] = { "siph", aia_smode32, NULL, NULL, rmw_siph },
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_HVIPRIO1] = { "hviprio1", aia_hmode, read_hviprio1, write_hviprio1 },
     [CSR_HVIPRIO2] = { "hviprio2", aia_hmode, read_hviprio2, write_hviprio2 },
 
+    /* VS-Level Interrupts (H-extension with AIA) */
+    [CSR_VSTOPI] = { "vstopi", aia_hmode, read_vstopi },
+
     /* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */
     [CSR_HIDELEGH] = { "hidelegh", aia_hmode32, NULL, NULL, rmw_hidelegh },
     [CSR_HVIENH] = { "hvienh", aia_hmode32, read_zero, write_ignore },
-- 
2.34.1

From: Yong-Xuan Wang <yongxuan.wang@sifive.com>

KVM AIA can't emulate the APLIC alone. When the "aia=aplic" parameter is
passed, the APLIC device is emulated by QEMU. For "aia=aplic-imsic",
remove the MMIO operations of the APLIC when using KVM AIA and send
wired interrupt signals via the KVM_IRQ_LINE API.
After KVM AIA is enabled, MSI messages are delivered by the
KVM_SIGNAL_MSI API when the IMSICs receive MMIO write requests.

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20230727102439.22554-5-yongxuan.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/intc/riscv_aplic.c | 56 ++++++++++++++++++++++++++++++-------------
 hw/intc/riscv_imsic.c | 25 +++++++++++++++----
 2 files changed, 61 insertions(+), 20 deletions(-)

diff --git a/hw/intc/riscv_aplic.c b/hw/intc/riscv_aplic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/riscv_aplic.c
+++ b/hw/intc/riscv_aplic.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/irq.h"
 #include "target/riscv/cpu.h"
 #include "sysemu/sysemu.h"
+#include "sysemu/kvm.h"
 #include "migration/vmstate.h"
 
 #define APLIC_MAX_IDC (1UL << 14)
@@ -XXX,XX +XXX,XX @@
 
 #define APLIC_IDC_CLAIMI 0x1c
 
+/*
+ * KVM AIA only supports APLIC MSI, fallback to QEMU emulation if we want to use
+ * APLIC Wired.
+ */
+static bool is_kvm_aia(bool msimode)
+{
+    return kvm_irqchip_in_kernel() && msimode;
+}
+
 static uint32_t riscv_aplic_read_input_word(RISCVAPLICState *aplic,
                                             uint32_t word)
 {
@@ -XXX,XX +XXX,XX @@ static uint32_t riscv_aplic_idc_claimi(RISCVAPLICState *aplic, uint32_t idc)
     return topi;
 }
 
+static void riscv_kvm_aplic_request(void *opaque, int irq, int level)
+{
+    kvm_set_irq(kvm_state, irq, !!level);
+}
+
 static void riscv_aplic_request(void *opaque, int irq, int level)
 {
     bool update = false;
@@ -XXX,XX +XXX,XX @@ static void riscv_aplic_realize(DeviceState *dev, Error **errp)
     uint32_t i;
     RISCVAPLICState *aplic = RISCV_APLIC(dev);
 
-    aplic->bitfield_words = (aplic->num_irqs + 31) >> 5;
-    aplic->sourcecfg = g_new0(uint32_t, aplic->num_irqs);
-    aplic->state = g_new0(uint32_t, aplic->num_irqs);
-    aplic->target = g_new0(uint32_t, aplic->num_irqs);
-    if (!aplic->msimode) {
-        for (i = 0; i < aplic->num_irqs; i++) {
-            aplic->target[i] = 1;
+    if (!is_kvm_aia(aplic->msimode)) {
+        aplic->bitfield_words = (aplic->num_irqs + 31) >> 5;
+        aplic->sourcecfg = g_new0(uint32_t, aplic->num_irqs);
+        aplic->state = g_new0(uint32_t, aplic->num_irqs);
+        aplic->target = g_new0(uint32_t, aplic->num_irqs);
+        if (!aplic->msimode) {
+            for (i = 0; i < aplic->num_irqs; i++) {
+                aplic->target[i] = 1;
+            }
         }
-    }
-    aplic->idelivery = g_new0(uint32_t, aplic->num_harts);
-    aplic->iforce = g_new0(uint32_t, aplic->num_harts);
-    aplic->ithreshold = g_new0(uint32_t, aplic->num_harts);
+        aplic->idelivery = g_new0(uint32_t, aplic->num_harts);
+        aplic->iforce = g_new0(uint32_t, aplic->num_harts);
+        aplic->ithreshold = g_new0(uint32_t, aplic->num_harts);
 
-    memory_region_init_io(&aplic->mmio, OBJECT(dev), &riscv_aplic_ops, aplic,
-                          TYPE_RISCV_APLIC, aplic->aperture_size);
-    sysbus_init_mmio(SYS_BUS_DEVICE(dev), &aplic->mmio);
+        memory_region_init_io(&aplic->mmio, OBJECT(dev), &riscv_aplic_ops,
+                              aplic, TYPE_RISCV_APLIC, aplic->aperture_size);
+        sysbus_init_mmio(SYS_BUS_DEVICE(dev), &aplic->mmio);
+    }
 
     /*
      * Only root APLICs have hardware IRQ lines. All non-root APLICs
      * have IRQ lines delegated by their parent APLIC.
      */
     if (!aplic->parent) {
-        qdev_init_gpio_in(dev, riscv_aplic_request, aplic->num_irqs);
+        if (is_kvm_aia(aplic->msimode)) {
+            qdev_init_gpio_in(dev, riscv_kvm_aplic_request, aplic->num_irqs);
+        } else {
+            qdev_init_gpio_in(dev, riscv_aplic_request, aplic->num_irqs);
+        }
     }
 
     /* Create output IRQ lines for non-MSI mode */
@@ -XXX,XX +XXX,XX @@ DeviceState *riscv_aplic_create(hwaddr addr, hwaddr size,
     qdev_prop_set_bit(dev, "mmode", mmode);
 
     sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
-    sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, addr);
+
+    if (!is_kvm_aia(msimode)) {
+        sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, addr);
+    }
 
     if (parent) {
         riscv_aplic_add_child(parent, dev);
diff --git a/hw/intc/riscv_imsic.c b/hw/intc/riscv_imsic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/riscv_imsic.c
+++ b/hw/intc/riscv_imsic.c
@@ -XXX,XX +XXX,XX @@
 #include "target/riscv/cpu.h"
 #include "target/riscv/cpu_bits.h"
 #include "sysemu/sysemu.h"
+#include "sysemu/kvm.h"
 #include "migration/vmstate.h"
 
 #define IMSIC_MMIO_PAGE_LE 0x00
@@ -XXX,XX +XXX,XX @@ static void riscv_imsic_write(void *opaque, hwaddr addr, uint64_t value,
         goto err;
     }
 
+#if defined(CONFIG_KVM)
+    if (kvm_irqchip_in_kernel()) {
+        struct kvm_msi msi;
+
+        msi.address_lo = extract64(imsic->mmio.addr + addr, 0, 32);
+        msi.address_hi = extract64(imsic->mmio.addr + addr, 32, 32);
+        msi.data = le32_to_cpu(value);
+
+        kvm_vm_ioctl(kvm_state, KVM_SIGNAL_MSI, &msi);
+
+        return;
+    }
+#endif
+
     /* Writes only supported for MSI little-endian registers */
     page = addr >> IMSIC_MMIO_PAGE_SHIFT;
     if ((addr & (IMSIC_MMIO_PAGE_SZ - 1)) == IMSIC_MMIO_PAGE_LE) {
@@ -XXX,XX +XXX,XX @@ static void riscv_imsic_realize(DeviceState *dev, Error **errp)
     CPUState *cpu = cpu_by_arch_id(imsic->hartid);
     CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
 
-    imsic->num_eistate = imsic->num_pages * imsic->num_irqs;
-    imsic->eidelivery = g_new0(uint32_t, imsic->num_pages);
-    imsic->eithreshold = g_new0(uint32_t, imsic->num_pages);
-    imsic->eistate = g_new0(uint32_t, imsic->num_eistate);
+    if (!kvm_irqchip_in_kernel()) {
+        imsic->num_eistate = imsic->num_pages * imsic->num_irqs;
+        imsic->eidelivery = g_new0(uint32_t, imsic->num_pages);
+        imsic->eithreshold = g_new0(uint32_t, imsic->num_pages);
+        imsic->eistate = g_new0(uint32_t, imsic->num_eistate);
+    }
 
     memory_region_init_io(&imsic->mmio, OBJECT(dev), &riscv_imsic_ops,
                           imsic, TYPE_RISCV_IMSIC,
-- 
2.41.0
From: Anup Patel <anup.patel@wdc.com>

The AIA hvictl and hviprioX CSRs allow hypervisor to control
interrupts visible at VS-level. This patch implements AIA hvictl
and hviprioX CSRs.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-12-anup@brainfault.org
[ Changes by AF:
 - Fix possible uninitialised variable error in rmw_sie()
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.h     |   2 +
 target/riscv/csr.c     | 128 ++++++++++++++++++++++++++++++++++++++++-
 target/riscv/machine.c |   2 +
 3 files changed, 131 insertions(+), 1 deletion(-)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@ struct CPURISCVState {
     uint64_t htimedelta;
 
     /* Hypervisor controlled virtual interrupt priorities */
+    target_ulong hvictl;
     uint8_t hviprio[64];
 
     /* Upper 64-bits of 128-bit CSRs */
@@ -XXX,XX +XXX,XX @@ static inline RISCVMXL riscv_cpu_mxl(CPURISCVState *env)
     return env->misa_mxl;
 }
+#define riscv_cpu_mxl_bits(env) (1UL << (4 + riscv_cpu_mxl(env)))
 
 #if defined(TARGET_RISCV32)
 #define cpu_recompute_xl(env) ((void)(env), MXL_RV32)
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static RISCVException pointer_masking(CPURISCVState *env, int csrno)
     return RISCV_EXCP_ILLEGAL_INST;
 }
 
+static int aia_hmode(CPURISCVState *env, int csrno)
+{
+    if (!riscv_feature(env, RISCV_FEATURE_AIA)) {
+        return RISCV_EXCP_ILLEGAL_INST;
+    }
+
+    return hmode(env, csrno);
+}
+
 static int aia_hmode32(CPURISCVState *env, int csrno)
 {
     if (!riscv_feature(env, RISCV_FEATURE_AIA)) {
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_sie64(CPURISCVState *env, int csrno,
     uint64_t mask = env->mideleg & S_MODE_INTERRUPTS;
 
     if (riscv_cpu_virt_enabled(env)) {
+        if (env->hvictl & HVICTL_VTI) {
+            return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
+        }
         ret = rmw_vsie64(env, CSR_VSIE, ret_val, new_val, wr_mask);
     } else {
         ret = rmw_mie64(env, csrno, ret_val, new_val, wr_mask & mask);
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_sie(CPURISCVState *env, int csrno,
     RISCVException ret;
 
     ret = rmw_sie64(env, csrno, &rval, new_val, wr_mask);
-    if (ret_val) {
+    if (ret == RISCV_EXCP_NONE && ret_val) {
         *ret_val = rval;
     }
 
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_sip64(CPURISCVState *env, int csrno,
     uint64_t mask = env->mideleg & sip_writable_mask;
 
     if (riscv_cpu_virt_enabled(env)) {
+        if (env->hvictl & HVICTL_VTI) {
+            return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
+        }
         ret = rmw_vsip64(env, CSR_VSIP, ret_val, new_val, wr_mask);
     } else {
         ret = rmw_mip64(env, csrno, ret_val, new_val, wr_mask & mask);
@@ -XXX,XX +XXX,XX @@ static RISCVException write_htimedeltah(CPURISCVState *env, int csrno,
     return RISCV_EXCP_NONE;
 }
 
+static int read_hvictl(CPURISCVState *env, int csrno, target_ulong *val)
+{
+    *val = env->hvictl;
+    return RISCV_EXCP_NONE;
+}
+
+static int write_hvictl(CPURISCVState *env, int csrno, target_ulong val)
+{
+    env->hvictl = val & HVICTL_VALID_MASK;
+    return RISCV_EXCP_NONE;
+}
+
+static int read_hvipriox(CPURISCVState *env, int first_index,
+                         uint8_t *iprio, target_ulong *val)
+{
+    int i, irq, rdzero, num_irqs = 4 * (riscv_cpu_mxl_bits(env) / 32);
+
+    /* First index has to be a multiple of number of irqs per register */
+    if (first_index % num_irqs) {
+        return (riscv_cpu_virt_enabled(env)) ?
+               RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
+    }
+
+    /* Fill-up return value */
+    *val = 0;
+    for (i = 0; i < num_irqs; i++) {
+        if (riscv_cpu_hviprio_index2irq(first_index + i, &irq, &rdzero)) {
+            continue;
+        }
+        if (rdzero) {
+            continue;
+        }
+        *val |= ((target_ulong)iprio[irq]) << (i * 8);
+    }
+
+    return RISCV_EXCP_NONE;
+}
+
+static int write_hvipriox(CPURISCVState *env, int first_index,
+                          uint8_t *iprio, target_ulong val)
+{
+    int i, irq, rdzero, num_irqs = 4 * (riscv_cpu_mxl_bits(env) / 32);
+
+    /* First index has to be a multiple of number of irqs per register */
+    if (first_index % num_irqs) {
+        return (riscv_cpu_virt_enabled(env)) ?
+               RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
+    }
+
+    /* Fill-up priority array */
+    for (i = 0; i < num_irqs; i++) {
+        if (riscv_cpu_hviprio_index2irq(first_index + i, &irq, &rdzero)) {
+            continue;
+        }
+        if (rdzero) {
+            iprio[irq] = 0;
+        } else {
+            iprio[irq] = (val >> (i * 8)) & 0xff;
+        }
+    }
+
+    return RISCV_EXCP_NONE;
+}
+
+static int read_hviprio1(CPURISCVState *env, int csrno, target_ulong *val)
+{
+    return read_hvipriox(env, 0, env->hviprio, val);
+}
+
+static int write_hviprio1(CPURISCVState *env, int csrno, target_ulong val)
+{
+    return write_hvipriox(env, 0, env->hviprio, val);
+}
+
+static int read_hviprio1h(CPURISCVState *env, int csrno, target_ulong *val)
+{
+    return read_hvipriox(env, 4, env->hviprio, val);
+}
+
+static int write_hviprio1h(CPURISCVState *env, int csrno, target_ulong val)
+{
+    return write_hvipriox(env, 4, env->hviprio, val);
+}
+
+static int read_hviprio2(CPURISCVState *env, int csrno, target_ulong *val)
+{
+    return read_hvipriox(env, 8, env->hviprio, val);
+}
+
+static int write_hviprio2(CPURISCVState *env, int csrno, target_ulong val)
+{
+    return write_hvipriox(env, 8, env->hviprio, val);
+}
+
+static int read_hviprio2h(CPURISCVState *env, int csrno, target_ulong *val)
+{
+    return read_hvipriox(env, 12, env->hviprio, val);
+}
+
+static int write_hviprio2h(CPURISCVState *env, int csrno, target_ulong val)
+{
+    return write_hvipriox(env, 12, env->hviprio, val);
+}
+
 /* Virtual CSR Registers */
 static RISCVException read_vsstatus(CPURISCVState *env, int csrno,
                                     target_ulong *val)
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_MTVAL2] = { "mtval2", hmode, read_mtval2, write_mtval2 },
     [CSR_MTINST] = { "mtinst", hmode, read_mtinst, write_mtinst },
 
+    /* Virtual Interrupts and Interrupt Priorities (H-extension with AIA) */
+    [CSR_HVICTL] = { "hvictl", aia_hmode, read_hvictl, write_hvictl },
+    [CSR_HVIPRIO1] = { "hviprio1", aia_hmode, read_hviprio1, write_hviprio1 },
+    [CSR_HVIPRIO2] = { "hviprio2", aia_hmode, read_hviprio2, write_hviprio2 },
+
     /* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */
     [CSR_HIDELEGH] = { "hidelegh", aia_hmode32, NULL, NULL, rmw_hidelegh },
     [CSR_HVIPH] = { "hviph", aia_hmode32, NULL, NULL, rmw_hviph },
+    [CSR_HVIPRIO1H] = { "hviprio1h", aia_hmode32, read_hviprio1h, write_hviprio1h },
+    [CSR_HVIPRIO2H] = { "hviprio2h", aia_hmode32, read_hviprio2h, write_hviprio2h },
     [CSR_VSIEH] = { "vsieh", aia_hmode32, NULL, NULL, rmw_vsieh },
     [CSR_VSIPH] = { "vsiph", aia_hmode32, NULL, NULL, rmw_vsiph },
 
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/machine.c
+++ b/target/riscv/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_hyper = {
     VMSTATE_UINTTL(env.hgeie, RISCVCPU),
     VMSTATE_UINTTL(env.hgeip, RISCVCPU),
     VMSTATE_UINT64(env.htimedelta, RISCVCPU),
+
+    VMSTATE_UINTTL(env.hvictl, RISCVCPU),
     VMSTATE_UINT8_ARRAY(env.hviprio, RISCVCPU, 64),
 
     VMSTATE_UINT64(env.vsstatus, RISCVCPU),
-- 
2.34.1

From: Yong-Xuan Wang <yongxuan.wang@sifive.com>

Select KVM AIA when the host kernel has in-kernel AIA chip support.
Since KVM AIA only has one APLIC instance, we map the QEMU APLIC
devices to KVM APLIC.

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20230727102439.22554-6-yongxuan.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/riscv/virt.c | 94 +++++++++++++++++++++++++++++++++----------------
 1 file changed, 63 insertions(+), 31 deletions(-)

diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/riscv/virt.h"
 #include "hw/riscv/boot.h"
 #include "hw/riscv/numa.h"
+#include "kvm_riscv.h"
 #include "hw/intc/riscv_aclint.h"
 #include "hw/intc/riscv_aplic.h"
 #include "hw/intc/riscv_imsic.h"
@@ -XXX,XX +XXX,XX @@
 #error "Can't accommodate all IMSIC groups in address space"
 #endif
 
+/* KVM AIA only supports APLIC MSI. APLIC Wired is always emulated by QEMU. */
+static bool virt_use_kvm_aia(RISCVVirtState *s)
+{
+    return kvm_irqchip_in_kernel() && s->aia_type == VIRT_AIA_TYPE_APLIC_IMSIC;
+}
+
 static const MemMapEntry virt_memmap[] = {
     [VIRT_DEBUG] = { 0x0, 0x100 },
     [VIRT_MROM] = { 0x1000, 0xf000 },
@@ -XXX,XX +XXX,XX @@ static void create_fdt_one_aplic(RISCVVirtState *s, int socket,
                                  uint32_t *intc_phandles,
                                  uint32_t aplic_phandle,
                                  uint32_t aplic_child_phandle,
-                                 bool m_mode)
+                                 bool m_mode, int num_harts)
 {
     int cpu;
     char *aplic_name;
     uint32_t *aplic_cells;
     MachineState *ms = MACHINE(s);
 
-    aplic_cells = g_new0(uint32_t, s->soc[socket].num_harts * 2);
+    aplic_cells = g_new0(uint32_t, num_harts * 2);
 
-    for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
+    for (cpu = 0; cpu < num_harts; cpu++) {
         aplic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
         aplic_cells[cpu * 2 + 1] = cpu_to_be32(m_mode ? IRQ_M_EXT : IRQ_S_EXT);
     }
@@ -XXX,XX +XXX,XX @@ static void create_fdt_one_aplic(RISCVVirtState *s, int socket,
 
     if (s->aia_type == VIRT_AIA_TYPE_APLIC) {
         qemu_fdt_setprop(ms->fdt, aplic_name, "interrupts-extended",
-                         aplic_cells,
-                         s->soc[socket].num_harts * sizeof(uint32_t) * 2);
+                         aplic_cells, num_harts * sizeof(uint32_t) * 2);
     } else {
         qemu_fdt_setprop_cell(ms->fdt, aplic_name, "msi-parent", msi_phandle);
     }
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aplic(RISCVVirtState *s,
                                     uint32_t msi_s_phandle,
                                     uint32_t *phandle,
                                     uint32_t *intc_phandles,
-                                    uint32_t *aplic_phandles)
+                                    uint32_t *aplic_phandles,
+                                    int num_harts)
 {
     char *aplic_name;
     unsigned long aplic_addr;
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aplic(RISCVVirtState *s,
     create_fdt_one_aplic(s, socket, aplic_addr, memmap[VIRT_APLIC_M].size,
                          msi_m_phandle, intc_phandles,
                          aplic_m_phandle, aplic_s_phandle,
-                         true);
+                         true, num_harts);
     }
 
     /* S-level APLIC node */
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aplic(RISCVVirtState *s,
     create_fdt_one_aplic(s, socket, aplic_addr, memmap[VIRT_APLIC_S].size,
                          msi_s_phandle, intc_phandles,
                          aplic_s_phandle, 0,
-                         false);
+                         false, num_harts);
 
     aplic_name = g_strdup_printf("/soc/aplic@%lx", aplic_addr);
 
@@ -XXX,XX +XXX,XX @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
         *msi_pcie_phandle = msi_s_phandle;
     }
 
-    phandle_pos = ms->smp.cpus;
-    for (socket = (socket_count - 1); socket >= 0; socket--) {
-        phandle_pos -= s->soc[socket].num_harts;
-
-        if (s->aia_type == VIRT_AIA_TYPE_NONE) {
-            create_fdt_socket_plic(s, memmap, socket, phandle,
-                                   &intc_phandles[phandle_pos], xplic_phandles);
-        } else {
-            create_fdt_socket_aplic(s, memmap, socket,
-                                    msi_m_phandle, msi_s_phandle, phandle,
-                                    &intc_phandles[phandle_pos], xplic_phandles);
+    /* KVM AIA only has one APLIC instance */
+    if (virt_use_kvm_aia(s)) {
+        create_fdt_socket_aplic(s, memmap, 0,
+                                msi_m_phandle, msi_s_phandle, phandle,
+                                &intc_phandles[0], xplic_phandles,
+                                ms->smp.cpus);
+    } else {
+        phandle_pos = ms->smp.cpus;
+        for (socket = (socket_count - 1); socket >= 0; socket--) {
+            phandle_pos -= s->soc[socket].num_harts;
+
+            if (s->aia_type == VIRT_AIA_TYPE_NONE) {
+                create_fdt_socket_plic(s, memmap, socket, phandle,
+                                       &intc_phandles[phandle_pos],
+                                       xplic_phandles);
+            } else {
+                create_fdt_socket_aplic(s, memmap, socket,
+                                        msi_m_phandle, msi_s_phandle, phandle,
+                                        &intc_phandles[phandle_pos],
+                                        xplic_phandles,
+                                        s->soc[socket].num_harts);
+            }
         }
     }
 
     g_free(intc_phandles);
 
-    for (socket = 0; socket < socket_count; socket++) {
-        if (socket == 0) {
-            *irq_mmio_phandle = xplic_phandles[socket];
-            *irq_virtio_phandle = xplic_phandles[socket];
-            *irq_pcie_phandle = xplic_phandles[socket];
-        }
-        if (socket == 1) {
-            *irq_virtio_phandle = xplic_phandles[socket];
-            *irq_pcie_phandle = xplic_phandles[socket];
-        }
-        if (socket == 2) {
-            *irq_pcie_phandle = xplic_phandles[socket];
+    if (virt_use_kvm_aia(s)) {
+        *irq_mmio_phandle = xplic_phandles[0];
+        *irq_virtio_phandle = xplic_phandles[0];
+        *irq_pcie_phandle = xplic_phandles[0];
+    } else {
+        for (socket = 0; socket < socket_count; socket++) {
+            if (socket == 0) {
+                *irq_mmio_phandle = xplic_phandles[socket];
+                *irq_virtio_phandle = xplic_phandles[socket];
+                *irq_pcie_phandle = xplic_phandles[socket];
+            }
+            if (socket == 1) {
+                *irq_virtio_phandle = xplic_phandles[socket];
+                *irq_pcie_phandle = xplic_phandles[socket];
+            }
+            if (socket == 2) {
+                *irq_pcie_phandle = xplic_phandles[socket];
+            }
         }
     }
 
@@ -XXX,XX +XXX,XX @@ static void virt_machine_init(MachineState *machine)
         }
     }
 
+    if (virt_use_kvm_aia(s)) {
+        kvm_riscv_aia_create(machine, IMSIC_MMIO_GROUP_MIN_SHIFT,
+                             VIRT_IRQCHIP_NUM_SOURCES, VIRT_IRQCHIP_NUM_MSIS,
+                             memmap[VIRT_APLIC_S].base,
+                             memmap[VIRT_IMSIC_S].base,
+                             s->aia_guests);
+    }
+
     if (riscv_is_32bit(&s->soc[0])) {
 #if HOST_LONG_BITS == 64
         /* limit RAM size in a 32-bit system */
-- 
2.41.0
From: Anup Patel <anup.patel@wdc.com>

We should use the AIA INTC compatible string in the CPU INTC
DT nodes when the CPUs support AIA feature. This will allow
Linux INTC driver to use AIA local interrupt CSRs.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-17-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/riscv/virt.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_cpus(RISCVVirtState *s, int socket,
         qemu_fdt_add_subnode(mc->fdt, intc_name);
         qemu_fdt_setprop_cell(mc->fdt, intc_name, "phandle",
                               intc_phandles[cpu]);
-        qemu_fdt_setprop_string(mc->fdt, intc_name, "compatible",
-                                "riscv,cpu-intc");
+        if (riscv_feature(&s->soc[socket].harts[cpu].env,
+                          RISCV_FEATURE_AIA)) {
+            static const char * const compat[2] = {
+                "riscv,cpu-intc-aia", "riscv,cpu-intc"
+            };
+            qemu_fdt_setprop_string_array(mc->fdt, intc_name, "compatible",
+                                          (char **)&compat, ARRAY_SIZE(compat));
+        } else {
+            qemu_fdt_setprop_string(mc->fdt, intc_name, "compatible",
+                                    "riscv,cpu-intc");
+        }
         qemu_fdt_setprop(mc->fdt, intc_name, "interrupt-controller", NULL, 0);
         qemu_fdt_setprop_cell(mc->fdt, intc_name, "#interrupt-cells", 1);
 
-- 
2.34.1

From: Conor Dooley <conor.dooley@microchip.com>

On a dtb dumped from the virt machine, dt-validate complains:
soc: pmu: {'riscv,event-to-mhpmcounters': [[1, 1, 524281], [2, 2, 524284], [65561, 65561, 524280], [65563, 65563, 524280], [65569, 65569, 524280]], 'compatible': ['riscv,pmu']} should not be valid under {'type': 'object'}
	from schema $id: http://devicetree.org/schemas/simple-bus.yaml#
That's pretty cryptic, but running the dtb back through dtc produces
something a lot more reasonable:
Warning (simple_bus_reg): /soc/pmu: missing or empty reg/ranges property

Moving the riscv,pmu node out of the soc bus solves the problem.

Signed-off-by: Conor Dooley <conor.dooley@microchip.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20230727-groom-decline-2c57ce42841c@spud>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/riscv/virt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -XXX,XX +XXX,XX @@ static void create_fdt_pmu(RISCVVirtState *s)
     MachineState *ms = MACHINE(s);
     RISCVCPU hart = s->soc[0].harts[0];
 
-    pmu_name = g_strdup_printf("/soc/pmu");
+    pmu_name = g_strdup_printf("/pmu");
     qemu_fdt_add_subnode(ms->fdt, pmu_name);
     qemu_fdt_setprop_string(ms->fdt, pmu_name, "compatible", "riscv,pmu");
     riscv_pmu_generate_fdt_node(ms->fdt, hart.cfg.pmu_num, pmu_name);
 
-- 
2.41.0
diff view generated by jsdifflib
From: Weiwei Li <liweiwei@iscas.ac.cn>

The Svadu specification updated the name of the *envcfg bit from
HADE to ADUE.

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20230816141916.66898-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_bits.h   |  8 ++++----
 target/riscv/cpu.c        |  4 ++--
 target/riscv/cpu_helper.c |  6 +++---
 target/riscv/csr.c        | 12 ++++++------
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_bits.h
+++ b/target/riscv/cpu_bits.h
@@ -XXX,XX +XXX,XX @@ typedef enum RISCVException {
 #define MENVCFG_CBIE  (3UL << 4)
 #define MENVCFG_CBCFE BIT(6)
 #define MENVCFG_CBZE  BIT(7)
-#define MENVCFG_HADE  (1ULL << 61)
+#define MENVCFG_ADUE  (1ULL << 61)
 #define MENVCFG_PBMTE (1ULL << 62)
 #define MENVCFG_STCE  (1ULL << 63)
 
 /* For RV32 */
-#define MENVCFGH_HADE  BIT(29)
+#define MENVCFGH_ADUE  BIT(29)
 #define MENVCFGH_PBMTE BIT(30)
 #define MENVCFGH_STCE  BIT(31)
 
@@ -XXX,XX +XXX,XX @@ typedef enum RISCVException {
 #define HENVCFG_CBIE  MENVCFG_CBIE
 #define HENVCFG_CBCFE MENVCFG_CBCFE
 #define HENVCFG_CBZE  MENVCFG_CBZE
-#define HENVCFG_HADE  MENVCFG_HADE
+#define HENVCFG_ADUE  MENVCFG_ADUE
 #define HENVCFG_PBMTE MENVCFG_PBMTE
 #define HENVCFG_STCE  MENVCFG_STCE
 
 /* For RV32 */
-#define HENVCFGH_HADE  MENVCFGH_HADE
+#define HENVCFGH_ADUE  MENVCFGH_ADUE
 #define HENVCFGH_PBMTE MENVCFGH_PBMTE
 #define HENVCFGH_STCE  MENVCFGH_STCE
 
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset_hold(Object *obj)
     env->two_stage_lookup = false;
 
     env->menvcfg = (cpu->cfg.ext_svpbmt ? MENVCFG_PBMTE : 0) |
-                   (cpu->cfg.ext_svadu ? MENVCFG_HADE : 0);
+                   (cpu->cfg.ext_svadu ? MENVCFG_ADUE : 0);
     env->henvcfg = (cpu->cfg.ext_svpbmt ? HENVCFG_PBMTE : 0) |
-                   (cpu->cfg.ext_svadu ? HENVCFG_HADE : 0);
+                   (cpu->cfg.ext_svadu ? HENVCFG_ADUE : 0);
 
     /* Initialized default priorities of local interrupts. */
     for (i = 0; i < ARRAY_SIZE(env->miprio); i++) {
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -XXX,XX +XXX,XX @@ static int get_physical_address(CPURISCVState *env, hwaddr *physical,
     }
 
     bool pbmte = env->menvcfg & MENVCFG_PBMTE;
-    bool hade = env->menvcfg & MENVCFG_HADE;
+    bool adue = env->menvcfg & MENVCFG_ADUE;
 
     if (first_stage && two_stage && env->virt_enabled) {
         pbmte = pbmte && (env->henvcfg & HENVCFG_PBMTE);
-        hade = hade && (env->henvcfg & HENVCFG_HADE);
+        adue = adue && (env->henvcfg & HENVCFG_ADUE);
     }
 
     int ptshift = (levels - 1) * ptidxbits;
@@ -XXX,XX +XXX,XX @@ restart:
 
         /* Page table updates need to be atomic with MTTCG enabled */
         if (updated_pte != pte && !is_debug) {
-            if (!hade) {
+            if (!adue) {
                 return TRANSLATE_FAIL;
             }
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static RISCVException write_menvcfg(CPURISCVState *env, int csrno,
     if (riscv_cpu_mxl(env) == MXL_RV64) {
         mask |= (cfg->ext_svpbmt ? MENVCFG_PBMTE : 0) |
                 (cfg->ext_sstc ? MENVCFG_STCE : 0) |
-                (cfg->ext_svadu ? MENVCFG_HADE : 0);
+                (cfg->ext_svadu ? MENVCFG_ADUE : 0);
     }
     env->menvcfg = (env->menvcfg & ~mask) | (val & mask);
 
@@ -XXX,XX +XXX,XX @@ static RISCVException write_menvcfgh(CPURISCVState *env, int csrno,
     const RISCVCPUConfig *cfg = riscv_cpu_cfg(env);
     uint64_t mask = (cfg->ext_svpbmt ? MENVCFG_PBMTE : 0) |
                     (cfg->ext_sstc ? MENVCFG_STCE : 0) |
-                    (cfg->ext_svadu ? MENVCFG_HADE : 0);
+                    (cfg->ext_svadu ? MENVCFG_ADUE : 0);
     uint64_t valh = (uint64_t)val << 32;
 
     env->menvcfg = (env->menvcfg & ~mask) | (valh & mask);
@@ -XXX,XX +XXX,XX @@ static RISCVException read_henvcfg(CPURISCVState *env, int csrno,
      * henvcfg.stce is read_only 0 when menvcfg.stce = 0
      * henvcfg.hade is read_only 0 when menvcfg.hade = 0
      */
-    *val = env->henvcfg & (~(HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_HADE) |
+    *val = env->henvcfg & (~(HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_ADUE) |
                           env->menvcfg);
     return RISCV_EXCP_NONE;
 }
@@ -XXX,XX +XXX,XX @@ static RISCVException write_henvcfg(CPURISCVState *env, int csrno,
     }
 
     if (riscv_cpu_mxl(env) == MXL_RV64) {
-        mask |= env->menvcfg & (HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_HADE);
+        mask |= env->menvcfg & (HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_ADUE);
     }
 
     env->henvcfg = (env->henvcfg & ~mask) | (val & mask);
@@ -XXX,XX +XXX,XX @@ static RISCVException read_henvcfgh(CPURISCVState *env, int csrno,
         return ret;
     }
 
-    *val = (env->henvcfg & (~(HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_HADE) |
+    *val = (env->henvcfg & (~(HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_ADUE) |
                             env->menvcfg)) >> 32;
     return RISCV_EXCP_NONE;
 }
@@ -XXX,XX +XXX,XX @@ static RISCVException write_henvcfgh(CPURISCVState *env, int csrno,
                                     target_ulong val)
 {
     uint64_t mask = env->menvcfg & (HENVCFG_PBMTE | HENVCFG_STCE |
-                                    HENVCFG_HADE);
+                                    HENVCFG_ADUE);
     uint64_t valh = (uint64_t)val << 32;
     RISCVException ret;
--
2.41.0
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

In the same emulated RISC-V host, the 'host' KVM CPU takes 4 times
longer to boot than the 'rv64' KVM CPU.

The reason is an unintended behavior of riscv_cpu_satp_mode_finalize()
when satp_mode.supported = 0, i.e. when cpu_init() does not set
satp_mode_max_supported(). satp_mode_max_from_map(map) does:

31 - __builtin_clz(map)

This means that, if satp_mode.supported = 0, satp_mode_supported_max
will be '31 - 32'. But this is C, so satp_mode_supported_max will gladly
be set to UINT_MAX (4294967295). After that, if the user didn't set a
satp_mode, set_satp_mode_default_map(cpu) will make

cfg.satp_mode.map = cfg.satp_mode.supported

So satp_mode.map = 0. And then satp_mode_map_max will be set to
satp_mode_max_from_map(cpu->cfg.satp_mode.map), i.e. also UINT_MAX. The
guard "satp_mode_map_max > satp_mode_supported_max" doesn't protect us
here since both are UINT_MAX.

And finally we have 2 loops:

for (int i = satp_mode_map_max - 1; i >= 0; --i) {

Which are, in fact, 2 loops from UINT_MAX - 1 to -1. This is where the
extra delay when booting the 'host' CPU is coming from.

Commit 43d1de32f8 already set a precedence for satp_mode.supported = 0
in a different manner. We're doing the same here. If supported == 0,
interpret it as 'the CPU wants the OS to handle satp mode alone' and skip
satp_mode_finalize().

We'll also put a guard in satp_mode_max_from_map() to assert out if map
is 0 since the function is not ready to deal with it.

Cc: Alexandre Ghiti <alexghiti@rivosinc.com>
Fixes: 6f23aaeb9b ("riscv: Allow user to set the satp mode")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20230817152903.694926-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static uint8_t satp_mode_from_str(const char *satp_mode_str)
 
 uint8_t satp_mode_max_from_map(uint32_t map)
 {
+    /*
+     * 'map = 0' will make us return (31 - 32), which C will
+     * happily overflow to UINT_MAX. There's no good result to
+     * return if 'map = 0' (e.g. returning 0 will be ambiguous
+     * with the result for 'map = 1').
+     *
+     * Assert out if map = 0. Callers will have to deal with
+     * it outside of this function.
+     */
+    g_assert(map > 0);
+
     /* map here has at least one bit set, so no problem with clz */
     return 31 - __builtin_clz(map);
 }
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
 static void riscv_cpu_satp_mode_finalize(RISCVCPU *cpu, Error **errp)
 {
     bool rv32 = riscv_cpu_mxl(&cpu->env) == MXL_RV32;
-    uint8_t satp_mode_map_max;
-    uint8_t satp_mode_supported_max =
-        satp_mode_max_from_map(cpu->cfg.satp_mode.supported);
+    uint8_t satp_mode_map_max, satp_mode_supported_max;
+
+    /* The CPU wants the OS to decide which satp mode to use */
+    if (cpu->cfg.satp_mode.supported == 0) {
+        return;
+    }
+
+    satp_mode_supported_max =
+        satp_mode_max_from_map(cpu->cfg.satp_mode.supported);
 
     if (cpu->cfg.satp_mode.map == 0) {
         if (cpu->cfg.satp_mode.init == 0) {
--
2.41.0
From: Vineet Gupta <vineetg@rivosinc.com>

zicond is now codegen supported in both llvm and gcc.

This change allows seamless enabling/testing of zicond in downstream
projects. e.g. currently riscv-gnu-toolchain parses elf attributes
to create a cmdline for qemu but falls short of enabling it because of
the "x-" prefix.

Signed-off-by: Vineet Gupta <vineetg@rivosinc.com>
Message-ID: <20230808181715.436395-1-vineetg@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
     DEFINE_PROP_BOOL("zcf", RISCVCPU, cfg.ext_zcf, false),
     DEFINE_PROP_BOOL("zcmp", RISCVCPU, cfg.ext_zcmp, false),
     DEFINE_PROP_BOOL("zcmt", RISCVCPU, cfg.ext_zcmt, false),
+    DEFINE_PROP_BOOL("zicond", RISCVCPU, cfg.ext_zicond, false),
 
     /* Vendor-specific custom extensions */
     DEFINE_PROP_BOOL("xtheadba", RISCVCPU, cfg.ext_xtheadba, false),
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
     DEFINE_PROP_BOOL("xventanacondops", RISCVCPU, cfg.ext_XVentanaCondOps, false),
 
     /* These are experimental so mark with 'x-' */
-    DEFINE_PROP_BOOL("x-zicond", RISCVCPU, cfg.ext_zicond, false),
 
     /* ePMP 0.9.3 */
     DEFINE_PROP_BOOL("x-epmp", RISCVCPU, cfg.epmp, false),
--
2.41.0
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

A build with --enable-debug and without KVM will fail as follows:

/usr/bin/ld: libqemu-riscv64-softmmu.fa.p/hw_riscv_virt.c.o: in function `virt_machine_init':
./qemu/build/../hw/riscv/virt.c:1465: undefined reference to `kvm_riscv_aia_create'

This happens because the code block with "if virt_use_kvm_aia(s)" isn't
being ignored by the debug build, resulting in an undefined reference to
a KVM only function.

Adding a 'kvm_enabled()' conditional together with virt_use_kvm_aia()
will make the compiler crop the kvm_riscv_aia_create() call entirely
from a non-KVM build. Note that adding the 'kvm_enabled()' conditional
inside virt_use_kvm_aia() won't fix the build because this function
would need to be inlined multiple times to make the compiler zero out
the entire block.

While we're at it, use kvm_enabled() in all instances where
virt_use_kvm_aia() is checked to allow the compiler to elide these other
kvm-only instances as well.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Fixes: dbdb99948e ("target/riscv: select KVM AIA in riscv virt machine")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20230830133503.711138-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/riscv/virt.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -XXX,XX +XXX,XX @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
     }
 
     /* KVM AIA only has one APLIC instance */
-    if (virt_use_kvm_aia(s)) {
+    if (kvm_enabled() && virt_use_kvm_aia(s)) {
         create_fdt_socket_aplic(s, memmap, 0,
                                 msi_m_phandle, msi_s_phandle, phandle,
                                 &intc_phandles[0], xplic_phandles,
@@ -XXX,XX +XXX,XX @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
 
     g_free(intc_phandles);
 
-    if (virt_use_kvm_aia(s)) {
+    if (kvm_enabled() && virt_use_kvm_aia(s)) {
         *irq_mmio_phandle = xplic_phandles[0];
         *irq_virtio_phandle = xplic_phandles[0];
         *irq_pcie_phandle = xplic_phandles[0];
@@ -XXX,XX +XXX,XX @@ static void virt_machine_init(MachineState *machine)
         }
     }
 
-    if (virt_use_kvm_aia(s)) {
+    if (kvm_enabled() && virt_use_kvm_aia(s)) {
         kvm_riscv_aia_create(machine, IMSIC_MMIO_GROUP_MIN_SHIFT,
                              VIRT_IRQCHIP_NUM_SOURCES, VIRT_IRQCHIP_NUM_MSIS,
                              memmap[VIRT_APLIC_S].base,
--
2.41.0
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

Commit 6df0b37e2ab breaks a --enable-debug build in a non-KVM
environment with the following error:

/usr/bin/ld: libqemu-riscv64-softmmu.fa.p/hw_intc_riscv_aplic.c.o: in function `riscv_kvm_aplic_request':
./qemu/build/../hw/intc/riscv_aplic.c:486: undefined reference to `kvm_set_irq'
collect2: error: ld returned 1 exit status

This happens because the debug build will poke into the
'if (is_kvm_aia(aplic->msimode))' block and fail to find a reference to
the KVM only function riscv_kvm_aplic_request().

There are multiple solutions to fix this. We'll go with the same
solution from the previous patch, i.e. add a kvm_enabled() conditional
to filter out the block. But there's a catch: riscv_kvm_aplic_request()
is a local function that would end up being unused if the compiler crops
the block, and this won't work. Quoting Richard Henderson's explanation
in [1]:

"(...) the compiler won't eliminate entire unused functions with -O0"

We'll solve it by moving riscv_kvm_aplic_request() to kvm.c and adding
its declaration in kvm_riscv.h, where all other KVM specific public
functions are already declared. Other archs handle KVM specific code in
this manner and we expect to do the same from now on.

[1] https://lore.kernel.org/qemu-riscv/d2f1ad02-eb03-138f-9d08-db676deeed05@linaro.org/

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20230830133503.711138-3-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/kvm_riscv.h | 1 +
 hw/intc/riscv_aplic.c    | 8 ++------
 target/riscv/kvm.c       | 5 +++++
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/target/riscv/kvm_riscv.h b/target/riscv/kvm_riscv.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/kvm_riscv.h
+++ b/target/riscv/kvm_riscv.h
@@ -XXX,XX +XXX,XX @@ void kvm_riscv_aia_create(MachineState *machine, uint64_t group_shift,
                           uint64_t aia_irq_num, uint64_t aia_msi_num,
                           uint64_t aplic_base, uint64_t imsic_base,
                           uint64_t guest_num);
+void riscv_kvm_aplic_request(void *opaque, int irq, int level);
 
 #endif
diff --git a/hw/intc/riscv_aplic.c b/hw/intc/riscv_aplic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/riscv_aplic.c
+++ b/hw/intc/riscv_aplic.c
@@ -XXX,XX +XXX,XX @@
 #include "target/riscv/cpu.h"
 #include "sysemu/sysemu.h"
 #include "sysemu/kvm.h"
+#include "kvm_riscv.h"
 #include "migration/vmstate.h"
 
 #define APLIC_MAX_IDC (1UL << 14)
@@ -XXX,XX +XXX,XX @@ static uint32_t riscv_aplic_idc_claimi(RISCVAPLICState *aplic, uint32_t idc)
     return topi;
 }
 
-static void riscv_kvm_aplic_request(void *opaque, int irq, int level)
-{
-    kvm_set_irq(kvm_state, irq, !!level);
-}
-
 static void riscv_aplic_request(void *opaque, int irq, int level)
 {
     bool update = false;
@@ -XXX,XX +XXX,XX @@ static void riscv_aplic_realize(DeviceState *dev, Error **errp)
      * have IRQ lines delegated by their parent APLIC.
      */
     if (!aplic->parent) {
-        if (is_kvm_aia(aplic->msimode)) {
+        if (kvm_enabled() && is_kvm_aia(aplic->msimode)) {
             qdev_init_gpio_in(dev, riscv_kvm_aplic_request, aplic->num_irqs);
         } else {
             qdev_init_gpio_in(dev, riscv_aplic_request, aplic->num_irqs);
diff --git a/target/riscv/kvm.c b/target/riscv/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/kvm.c
+++ b/target/riscv/kvm.c
@@ -XXX,XX +XXX,XX @@
 #include "sysemu/runstate.h"
 #include "hw/riscv/numa.h"
 
+void riscv_kvm_aplic_request(void *opaque, int irq, int level)
+{
+    kvm_set_irq(kvm_state, irq, !!level);
+}
+
 static uint64_t kvm_riscv_reg_id(CPURISCVState *env, uint64_t type,
                                  uint64_t idx)
 {
--
2.41.0
From: Robbin Ehn <rehn@rivosinc.com>

This patch adds the new extensions in
linux 6.5 to the hwprobe syscall.

And fixes RVC check to OR with correct value.
The previous variable contains 0 therefore it
did work.

Signed-off-by: Robbin Ehn <rehn@rivosinc.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <bc82203b72d7efb30f1b4a8f9eb3d94699799dc8.camel@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 linux-user/syscall.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -XXX,XX +XXX,XX @@ static int do_getdents64(abi_long dirfd, abi_long arg2, abi_long count)
 #define RISCV_HWPROBE_KEY_IMA_EXT_0 4
 #define RISCV_HWPROBE_IMA_FD (1 << 0)
 #define RISCV_HWPROBE_IMA_C (1 << 1)
+#define RISCV_HWPROBE_IMA_V (1 << 2)
+#define RISCV_HWPROBE_EXT_ZBA (1 << 3)
+#define RISCV_HWPROBE_EXT_ZBB (1 << 4)
+#define RISCV_HWPROBE_EXT_ZBS (1 << 5)

 #define RISCV_HWPROBE_KEY_CPUPERF_0 5
 #define RISCV_HWPROBE_MISALIGNED_UNKNOWN (0 << 0)
@@ -XXX,XX +XXX,XX @@ static void risc_hwprobe_fill_pairs(CPURISCVState *env,
                  riscv_has_ext(env, RVD) ?
                  RISCV_HWPROBE_IMA_FD : 0;
         value |= riscv_has_ext(env, RVC) ?
-                 RISCV_HWPROBE_IMA_C : pair->value;
+                 RISCV_HWPROBE_IMA_C : 0;
+        value |= riscv_has_ext(env, RVV) ?
+                 RISCV_HWPROBE_IMA_V : 0;
+        value |= cfg->ext_zba ?
+                 RISCV_HWPROBE_EXT_ZBA : 0;
+        value |= cfg->ext_zbb ?
+                 RISCV_HWPROBE_EXT_ZBB : 0;
+        value |= cfg->ext_zbs ?
+                 RISCV_HWPROBE_EXT_ZBS : 0;
         __put_user(value, &pair->value);
         break;
     case RISCV_HWPROBE_KEY_CPUPERF_0:
--
2.41.0

From: Ard Biesheuvel <ardb@kernel.org>

Use the accelerated SubBytes/ShiftRows/AddRoundKey AES helper to
implement the first half of the key schedule derivation. This does not
actually involve shifting rows, so clone the same value into all four
columns of the AES vector to counter that operation.

Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Philippe Mathieu-Daudé <philmd@linaro.org>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20230831154118.138727-1-ardb@kernel.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/crypto_helper.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/target/riscv/crypto_helper.c b/target/riscv/crypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/crypto_helper.c
+++ b/target/riscv/crypto_helper.c
@@ -XXX,XX +XXX,XX @@ target_ulong HELPER(aes64ks1i)(target_ulong rs1, target_ulong rnum)

     uint8_t enc_rnum = rnum;
     uint32_t temp = (RS1 >> 32) & 0xFFFFFFFF;
-    uint8_t rcon_ = 0;
-    target_ulong result;
+    AESState t, rc = {};

     if (enc_rnum != 0xA) {
         temp = ror32(temp, 8); /* Rotate right by 8 */
-        rcon_ = round_consts[enc_rnum];
+        rc.w[0] = rc.w[1] = round_consts[enc_rnum];
     }

-    temp = ((uint32_t)AES_sbox[(temp >> 24) & 0xFF] << 24) |
-           ((uint32_t)AES_sbox[(temp >> 16) & 0xFF] << 16) |
-           ((uint32_t)AES_sbox[(temp >> 8) & 0xFF] << 8) |
-           ((uint32_t)AES_sbox[(temp >> 0) & 0xFF] << 0);
+    t.w[0] = t.w[1] = t.w[2] = t.w[3] = temp;
+    aesenc_SB_SR_AK(&t, &t, &rc, false);

-    temp ^= rcon_;
-
-    result = ((uint64_t)temp << 32) | temp;
-
-    return result;
+    return t.d[0];
 }

 target_ulong HELPER(aes64im)(target_ulong rs1)
--
2.41.0

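The trick the patch relies on is that replicating one 32-bit word into all four columns makes ShiftRows a no-op — every row becomes constant, so rotating rows changes nothing and only SubBytes/AddRoundKey take effect. A minimal sketch of just the rotation and replication steps; the `AESStateSketch` union is a hypothetical stand-in for QEMU's `AESState`, and the actual S-box work happens inside `aesenc_SB_SR_AK()`, which is not reproduced here:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical cut-down of QEMU's AESState from crypto/aes-round.h. */
typedef union {
    uint32_t w[4];
    uint64_t d[2];
} AESStateSketch;

/* Rotate a 32-bit word right by n bits, as the key schedule requires. */
static uint32_t ror32(uint32_t x, unsigned n)
{
    return (x >> n) | (x << (32 - n));
}

/* Cloning one word into all four columns makes every row of the 4x4
 * AES state constant, so the ShiftRows step of the combined helper
 * leaves the state unchanged. */
static AESStateSketch replicate_word(uint32_t temp)
{
    AESStateSketch t;
    t.w[0] = t.w[1] = t.w[2] = t.w[3] = temp;
    return t;
}
```

Because both 32-bit halves of each 64-bit lane hold the same word, `t.d[0]` equals the `((uint64_t)temp << 32) | temp` pattern the old code computed by hand, on either host endianness.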
From: Akihiko Odaki <akihiko.odaki@daynix.com>

riscv_trigger_init() had been called on reset events that can happen
several times for a CPU and it allocated timers for itrigger. If old
timers were present, they were simply overwritten by the new timers,
resulting in a memory leak.

Divide riscv_trigger_init() into two functions, namely
riscv_trigger_realize() and riscv_trigger_reset() and call them in
appropriate timing. The timer allocation will happen only once for a
CPU in riscv_trigger_realize().

Fixes: 5a4ae64cac ("target/riscv: Add itrigger support when icount is enabled")
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20230818034059.9146-1-akihiko.odaki@daynix.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/debug.h |  3 ++-
 target/riscv/cpu.c   |  8 +++++++-
 target/riscv/debug.c | 15 ++++++++++++---
 3 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/target/riscv/debug.h b/target/riscv/debug.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/debug.h
+++ b/target/riscv/debug.h
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_debug_excp_handler(CPUState *cs);
 bool riscv_cpu_debug_check_breakpoint(CPUState *cs);
 bool riscv_cpu_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp);

-void riscv_trigger_init(CPURISCVState *env);
+void riscv_trigger_realize(CPURISCVState *env);
+void riscv_trigger_reset_hold(CPURISCVState *env);

 bool riscv_itrigger_enabled(CPURISCVState *env);
 void riscv_itrigger_update_priv(CPURISCVState *env);
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset_hold(Object *obj)

 #ifndef CONFIG_USER_ONLY
     if (cpu->cfg.debug) {
-        riscv_trigger_init(env);
+        riscv_trigger_reset_hold(env);
     }

     if (kvm_enabled()) {
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)

     riscv_cpu_register_gdb_regs_for_features(cs);

+#ifndef CONFIG_USER_ONLY
+    if (cpu->cfg.debug) {
+        riscv_trigger_realize(&cpu->env);
+    }
+#endif
+
     qemu_init_vcpu(cs);
     cpu_reset(cs);

diff --git a/target/riscv/debug.c b/target/riscv/debug.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/debug.c
+++ b/target/riscv/debug.c
@@ -XXX,XX +XXX,XX @@ bool riscv_cpu_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp)
     return false;
 }

-void riscv_trigger_init(CPURISCVState *env)
+void riscv_trigger_realize(CPURISCVState *env)
+{
+    int i;
+
+    for (i = 0; i < RV_MAX_TRIGGERS; i++) {
+        env->itrigger_timer[i] = timer_new_ns(QEMU_CLOCK_VIRTUAL,
+                                              riscv_itrigger_timer_cb, env);
+    }
+}
+
+void riscv_trigger_reset_hold(CPURISCVState *env)
 {
     target_ulong tdata1 = build_tdata1(env, TRIGGER_TYPE_AD_MATCH, 0, 0);
     int i;
@@ -XXX,XX +XXX,XX @@ void riscv_trigger_init(CPURISCVState *env)
         env->tdata3[i] = 0;
         env->cpu_breakpoint[i] = NULL;
         env->cpu_watchpoint[i] = NULL;
-        env->itrigger_timer[i] = timer_new_ns(QEMU_CLOCK_VIRTUAL,
-                                              riscv_itrigger_timer_cb, env);
+        timer_del(env->itrigger_timer[i]);
     }
 }
--
2.41.0

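The realize/reset split above follows a general QOM pattern: allocation belongs in realize, which runs once per device, while reset may run many times and should only quiesce existing resources. A cut-down sketch of that ownership rule under stated assumptions — `trigger_realize()`, `trigger_reset_hold()`, the counter, and the `malloc()` stand-in for `timer_new_ns()` are all illustrative, not QEMU APIs:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define MAX_TRIGGERS 2

/* Hypothetical cut-down state: one timer pointer per trigger. */
static void *itrigger_timer[MAX_TRIGGERS];
static int timers_allocated;

/* realize: runs exactly once per CPU, so allocation lives here. */
static void trigger_realize(void)
{
    for (int i = 0; i < MAX_TRIGGERS; i++) {
        itrigger_timer[i] = malloc(16);   /* stands in for timer_new_ns() */
        timers_allocated++;
    }
}

/* reset: may run many times; it only quiesces the existing timers
 * (the real code calls timer_del()) and never allocates new ones,
 * which is exactly what closes the leak described above. */
static void trigger_reset_hold(void)
{
    for (int i = 0; i < MAX_TRIGGERS; i++) {
        assert(itrigger_timer[i] != NULL);  /* timer_del() in the real code */
    }
}
```

The bug being fixed is visible in the counter: with allocation inside the reset path, every reset would have bumped `timers_allocated` and orphaned the previous pointers.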
From: Leon Schuermann <leons@opentitan.org>

When the rule-lock bypass (RLB) bit is set in the mseccfg CSR, the PMP
configuration lock bits must not apply. While this behavior is
implemented for the pmpcfgX CSRs, this bit is not respected for
changes to the pmpaddrX CSRs. This patch ensures that pmpaddrX CSR
writes work even on locked regions when the global rule-lock bypass is
enabled.

Signed-off-by: Leon Schuermann <leons@opentitan.org>
Reviewed-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20230829215046.1430463-1-leon@is.currently.online>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/pmp.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -XXX,XX +XXX,XX @@ static inline uint8_t pmp_get_a_field(uint8_t cfg)
  */
 static inline int pmp_is_locked(CPURISCVState *env, uint32_t pmp_index)
 {
+    /* mseccfg.RLB is set */
+    if (MSECCFG_RLB_ISSET(env)) {
+        return 0;
+    }

     if (env->pmp_state.pmp[pmp_index].cfg_reg & PMP_LOCK) {
         return 1;
--
2.41.0

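The four added lines make the global RLB bit short-circuit the per-entry lock check, so every caller of `pmp_is_locked()` (including the pmpaddrX write path) inherits the bypass. A minimal sketch of the corrected check; `struct pmp_sketch` is a hypothetical cut-down of the state the real function consults via `env`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PMP_LOCK 0x80

/* Hypothetical cut-down state: mseccfg.RLB flag plus one cfg byte
 * per PMP entry, mirroring what pmp_is_locked() reads. */
struct pmp_sketch {
    bool mseccfg_rlb;
    uint8_t cfg[4];
};

/* With RLB set, the per-entry lock bit is bypassed entirely;
 * otherwise the entry's PMP_LOCK bit decides. */
static int pmp_is_locked(const struct pmp_sketch *s, unsigned idx)
{
    if (s->mseccfg_rlb) {
        return 0;
    }
    return (s->cfg[idx] & PMP_LOCK) ? 1 : 0;
}
```

Putting the bypass at the top of the predicate, rather than at individual call sites, is what extends the already-working pmpcfgX behavior to pmpaddrX without touching the write paths themselves.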
From: Tommy Wu <tommy.wu@sifive.com>

According to the new spec, when vsiselect has a reserved value, attempts
from M-mode or HS-mode to access vsireg, or from VS-mode to access
sireg, should preferably raise an illegal instruction exception.

Signed-off-by: Tommy Wu <tommy.wu@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-ID: <20230816061647.600672-1-tommy.wu@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/csr.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static int rmw_iprio(target_ulong xlen,
 static int rmw_xireg(CPURISCVState *env, int csrno, target_ulong *val,
                      target_ulong new_val, target_ulong wr_mask)
 {
-    bool virt;
+    bool virt, isel_reserved;
     uint8_t *iprio;
     int ret = -EINVAL;
     target_ulong priv, isel, vgein;
@@ -XXX,XX +XXX,XX @@ static int rmw_xireg(CPURISCVState *env, int csrno, target_ulong *val,

     /* Decode register details from CSR number */
     virt = false;
+    isel_reserved = false;
     switch (csrno) {
     case CSR_MIREG:
         iprio = env->miprio;
@@ -XXX,XX +XXX,XX @@ static int rmw_xireg(CPURISCVState *env, int csrno, target_ulong *val,
                                   riscv_cpu_mxl_bits(env)),
                                   val, new_val, wr_mask);
         }
+    } else {
+        isel_reserved = true;
     }

 done:
     if (ret) {
-        return (env->virt_enabled && virt) ?
+        return (env->virt_enabled && virt && !isel_reserved) ?
                RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
     }
     return RISCV_EXCP_NONE;
--
2.41.0

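The one-line predicate change decides which exception an invalid indirect-CSR access raises: with a reserved vsiselect value, the access must fall through to an illegal-instruction exception even when it arrived through the virtualized alias. A tiny truth-table sketch of that predicate, with hypothetical names standing in for the `env` fields and exception codes:

```c
#include <assert.h>
#include <stdbool.h>

enum excp { EXCP_ILLEGAL_INST, EXCP_VIRT_INSTRUCTION_FAULT };

/* A reserved *siselect value must raise illegal-instruction even for a
 * virtualized access, hence the added "&& !isel_reserved" term. */
static enum excp xireg_fault(bool virt_enabled, bool virt, bool isel_reserved)
{
    return (virt_enabled && virt && !isel_reserved) ?
           EXCP_VIRT_INSTRUCTION_FAULT : EXCP_ILLEGAL_INST;
}
```

Only the virtualized-access-with-valid-selector case keeps the virtual-instruction fault; every other combination, including the newly tracked reserved-selector case, yields illegal-instruction.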
From: Nikita Shubin <n.shubin@yadro.com>

As per ISA:

"For CSRRWI, if rd=x0, then the instruction shall not read the CSR and
shall not cause any of the side effects that might occur on a CSR read."

trans_csrrwi() and trans_csrrw() call do_csrw() if rd=x0, do_csrw() calls
riscv_csrrw_do64(), via helper_csrw() passing NULL as *ret_value.

Signed-off-by: Nikita Shubin <n.shubin@yadro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20230808090914.17634-1-nikita.shubin@maquefel.me>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/csr.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static RISCVException riscv_csrrw_do64(CPURISCVState *env, int csrno,
                                        target_ulong write_mask)
 {
     RISCVException ret;
-    target_ulong old_value;
+    target_ulong old_value = 0;

     /* execute combined read/write operation if it exists */
     if (csr_ops[csrno].op) {
         return csr_ops[csrno].op(env, csrno, ret_value, new_value, write_mask);
     }

-    /* if no accessor exists then return failure */
-    if (!csr_ops[csrno].read) {
-        return RISCV_EXCP_ILLEGAL_INST;
-    }
-    /* read old value */
-    ret = csr_ops[csrno].read(env, csrno, &old_value);
-    if (ret != RISCV_EXCP_NONE) {
-        return ret;
+    /*
+     * ret_value == NULL means that rd=x0 and we're coming from helper_csrw()
+     * and we can't throw side effects caused by CSR reads.
+     */
+    if (ret_value) {
+        /* if no accessor exists then return failure */
+        if (!csr_ops[csrno].read) {
+            return RISCV_EXCP_ILLEGAL_INST;
+        }
+        /* read old value */
+        ret = csr_ops[csrno].read(env, csrno, &old_value);
+        if (ret != RISCV_EXCP_NONE) {
+            return ret;
+        }
     }

     /* write value if writable and write mask set, otherwise drop writes */
--
2.41.0
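The shape of the fix above is a read-modify-write helper that skips the read (and its side effects) when the caller does not want the old value back. A minimal sketch under stated assumptions — `csrrw_do()`, the side-effect counter, and the backing variable are illustrative, not QEMU code; note that in QEMU the rd=x0 callers pass an all-ones write mask, so skipping the read cannot clobber unwritten bits:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

static int csr_reads;          /* counts simulated CSR read side effects */
static uint64_t csr_backing;   /* the CSR's storage */

static uint64_t csr_read(void)
{
    csr_reads++;               /* the side effect rd=x0 must not trigger */
    return csr_backing;
}

/* ret_value == NULL models rd=x0 (the helper_csrw() path): skip the
 * read entirely, as the ISA requires. Callers on that path are assumed
 * to pass a full write mask, so old_value = 0 is never observable. */
static void csrrw_do(uint64_t *ret_value, uint64_t new_value,
                     uint64_t write_mask)
{
    uint64_t old_value = 0;

    if (ret_value) {
        old_value = csr_read();
        *ret_value = old_value;
    }
    csr_backing = (old_value & ~write_mask) | (new_value & write_mask);
}
```

A write with `ret_value == NULL` leaves the read counter untouched while still updating the register, which is exactly the distinction the patch introduces in `riscv_csrrw_do64()`.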