From: Alistair Francis <alistair.francis@wdc.com>

The following changes since commit c5ea91da443b458352c1b629b490ee6631775cb4:

  Merge tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu into staging (2023-09-08 10:06:25 -0400)

are available in the Git repository at:

  https://github.com/alistair23/qemu.git tags/pull-riscv-to-apply-20230911

for you to fetch changes up to e7a03409f29e2da59297d55afbaec98c96e43e3a:

  target/riscv: don't read CSR in riscv_csrrw_do64 (2023-09-11 11:45:55 +1000)

----------------------------------------------------------------
First RISC-V PR for 8.2

 * Remove 'host' CPU from TCG
 * riscv_htif Fixup printing on big endian hosts
 * Add zmmul isa string
 * Add smepmp isa string
 * Fix page_check_range use in fault-only-first
 * Use existing lookup tables for MixColumns
 * Add RISC-V vector cryptographic instruction set support
 * Implement WARL behaviour for mcountinhibit/mcounteren
 * Add Zihintntl extension ISA string to DTS
 * Fix zfa fleq.d and fltq.d
 * Fix upper/lower mtime write calculation
 * Make rtc variable names consistent
 * Use abi type for linux-user target_ucontext
 * Add RISC-V KVM AIA Support
 * Fix riscv,pmu DT node path in the virt machine
 * Update CSR bits name for svadu extension
 * Mark zicond non-experimental
 * Fix satp_mode_finalize() when satp_mode.supported = 0
 * Fix non-KVM --enable-debug build
 * Add new extensions to hwprobe
 * Use accelerated helper for AES64KS1I
 * Allocate itrigger timers only once
 * Respect mseccfg.RLB for pmpaddrX changes
 * Align the AIA model to v1.0 ratified spec
 * Don't read the CSR in riscv_csrrw_do64

----------------------------------------------------------------
Akihiko Odaki (1):
      target/riscv: Allocate itrigger timers only once

Ard Biesheuvel (2):
      target/riscv: Use existing lookup tables for MixColumns
      target/riscv: Use accelerated helper for AES64KS1I

Conor Dooley (1):
      hw/riscv: virt: Fix riscv,pmu DT node path

Daniel Henrique Barboza (6):
      target/riscv/cpu.c: do not run 'host' CPU with TCG
      target/riscv/cpu.c: add zmmul isa string
      target/riscv/cpu.c: add smepmp isa string
      target/riscv: fix satp_mode_finalize() when satp_mode.supported = 0
      hw/riscv/virt.c: fix non-KVM --enable-debug build
      hw/intc/riscv_aplic.c fix non-KVM --enable-debug build

Dickon Hood (2):
      target/riscv: Refactor translation of vector-widening instruction
      target/riscv: Add Zvbb ISA extension support

Jason Chien (3):
      target/riscv: Add Zihintntl extension ISA string to DTS
      hw/intc: Fix upper/lower mtime write calculation
      hw/intc: Make rtc variable names consistent

Kiran Ostrolenk (4):
      target/riscv: Refactor some of the generic vector functionality
      target/riscv: Refactor vector-vector translation macro
      target/riscv: Refactor some of the generic vector functionality
      target/riscv: Add Zvknh ISA extension support

LIU Zhiwei (3):
      target/riscv: Fix page_check_range use in fault-only-first
      target/riscv: Fix zfa fleq.d and fltq.d
      linux-user/riscv: Use abi type for target_ucontext

Lawrence Hunter (2):
      target/riscv: Add Zvbc ISA extension support
      target/riscv: Add Zvksh ISA extension support

Leon Schuermann (1):
      target/riscv/pmp.c: respect mseccfg.RLB for pmpaddrX changes

Max Chou (3):
      crypto: Create sm4_subword
      crypto: Add SM4 constant parameter CK
      target/riscv: Add Zvksed ISA extension support

Nazar Kazakov (4):
      target/riscv: Remove redundant "cpu_vl == 0" checks
      target/riscv: Move vector translation checks
      target/riscv: Add Zvkned ISA extension support
      target/riscv: Add Zvkg ISA extension support

Nikita Shubin (1):
      target/riscv: don't read CSR in riscv_csrrw_do64

Rob Bradford (1):
      target/riscv: Implement WARL behaviour for mcountinhibit/mcounteren

Robbin Ehn (1):
      linux-user/riscv: Add new extensions to hwprobe

Thomas Huth (2):
      hw/char/riscv_htif: Fix printing of console characters on big endian hosts
      hw/char/riscv_htif: Fix the console syscall on big endian hosts

Tommy Wu (1):
      target/riscv: Align the AIA model to v1.0 ratified spec

Vineet Gupta (1):
      riscv: zicond: make non-experimental

Weiwei Li (1):
      target/riscv: Update CSR bits name for svadu extension

Yong-Xuan Wang (5):
      target/riscv: support the AIA device emulation with KVM enabled
      target/riscv: check the in-kernel irqchip support
      target/riscv: Create an KVM AIA irqchip
      target/riscv: update APLIC and IMSIC to support KVM AIA
      target/riscv: select KVM AIA in riscv virt machine

 include/crypto/aes.h                      |   7 +
 include/crypto/sm4.h                      |   9 +
 target/riscv/cpu_bits.h                   |   8 +-
 target/riscv/cpu_cfg.h                    |   9 +
 target/riscv/debug.h                      |   3 +-
 target/riscv/helper.h                     |  98 +++
 target/riscv/kvm_riscv.h                  |   5 +
 target/riscv/vector_internals.h           | 228 +++++++
 target/riscv/insn32.decode                |  58 ++
 crypto/aes.c                              |   4 +-
 crypto/sm4.c                              |  10 +
 hw/char/riscv_htif.c                      |  12 +-
 hw/intc/riscv_aclint.c                    |  11 +-
 hw/intc/riscv_aplic.c                     |  52 +-
 hw/intc/riscv_imsic.c                     |  25 +-
 hw/riscv/virt.c                           | 374 ++++++------
 linux-user/riscv/signal.c                 |   4 +-
 linux-user/syscall.c                      |  14 +-
 target/arm/tcg/crypto_helper.c            |  10 +-
 target/riscv/cpu.c                        |  83 ++-
 target/riscv/cpu_helper.c                 |   6 +-
 target/riscv/crypto_helper.c              |  51 +-
 target/riscv/csr.c                        |  54 +-
 target/riscv/debug.c                      |  15 +-
 target/riscv/kvm.c                        | 201 ++++++-
 target/riscv/pmp.c                        |   4 +
 target/riscv/translate.c                  |   1 +
 target/riscv/vcrypto_helper.c             | 970 ++++++++++++++++++++++++++++++
 target/riscv/vector_helper.c              | 245 +-------
 target/riscv/vector_internals.c           |  81 +++
 target/riscv/insn_trans/trans_rvv.c.inc   | 171 +++---
 target/riscv/insn_trans/trans_rvvk.c.inc  | 606 +++++++++++++++++++
 target/riscv/insn_trans/trans_rvzfa.c.inc |   4 +-
 target/riscv/meson.build                  |   4 +-
 34 files changed, 2785 insertions(+), 652 deletions(-)
 create mode 100644 target/riscv/vector_internals.h
 create mode 100644 target/riscv/vcrypto_helper.c
 create mode 100644 target/riscv/vector_internals.c
 create mode 100644 target/riscv/insn_trans/trans_rvvk.c.inc
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

The 'host' CPU is available in a CONFIG_KVM build and it's currently
available for all accels, but is a KVM only CPU. This means that in a
RISC-V KVM capable host we can do things like this:

$ ./build/qemu-system-riscv64 -M virt,accel=tcg -cpu host --nographic
qemu-system-riscv64: H extension requires priv spec 1.12.0

This CPU does not have a priv spec because we don't filter its extensions
via priv spec. We shouldn't be reaching riscv_cpu_realize_tcg() at all
with the 'host' CPU.

We don't have a way to filter the 'host' CPU out of the available CPU
options (-cpu help) if the build includes both KVM and TCG. What we can
do is to error out during riscv_cpu_realize_tcg() if the user chooses
the 'host' CPU with accel=tcg:

$ ./build/qemu-system-riscv64 -M virt,accel=tcg -cpu host --nographic
qemu-system-riscv64: 'host' CPU is not compatible with TCG acceleration

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230721133411.474105-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_realize_tcg(DeviceState *dev, Error **errp)
     CPURISCVState *env = &cpu->env;
     Error *local_err = NULL;
 
+    if (object_dynamic_cast(OBJECT(dev), TYPE_RISCV_CPU_HOST)) {
+        error_setg(errp, "'host' CPU is not compatible with TCG acceleration");
+        return;
+    }
+
     riscv_cpu_validate_misa_mxl(cpu, &local_err);
     if (local_err != NULL) {
         error_propagate(errp, local_err);
--
2.41.0
From: Thomas Huth <thuth@redhat.com>

The character that should be printed is stored in the 64 bit "payload"
variable. The code currently tries to print it by taking the address
of the variable and passing this pointer to qemu_chr_fe_write(). However,
this only works on little endian hosts where the least significant bits
are stored on the lowest address. To do this in a portable way, we have
to store the value in an uint8_t variable instead.

Fixes: 5033606780 ("RISC-V HTIF Console")
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230721094720.902454-2-thuth@redhat.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/char/riscv_htif.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/char/riscv_htif.c b/hw/char/riscv_htif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/riscv_htif.c
+++ b/hw/char/riscv_htif.c
@@ -XXX,XX +XXX,XX @@ static void htif_handle_tohost_write(HTIFState *s, uint64_t val_written)
             s->tohost = 0; /* clear to indicate we read */
             return;
         } else if (cmd == HTIF_CONSOLE_CMD_PUTC) {
-            qemu_chr_fe_write(&s->chr, (uint8_t *)&payload, 1);
+            uint8_t ch = (uint8_t)payload;
+            qemu_chr_fe_write(&s->chr, &ch, 1);
             resp = 0x100 | (uint8_t)payload;
         } else {
             qemu_log("HTIF device %d: unknown command\n", device);
--
2.41.0
From: Thomas Huth <thuth@redhat.com>

Values that have been read via cpu_physical_memory_read() from the
guest's memory have to be swapped in case the host endianness differs
from the guest.

Fixes: a6e13e31d5 ("riscv_htif: Support console output via proxy syscall")
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-Id: <20230721094720.902454-3-thuth@redhat.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/char/riscv_htif.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/hw/char/riscv_htif.c b/hw/char/riscv_htif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/riscv_htif.c
+++ b/hw/char/riscv_htif.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/timer.h"
 #include "qemu/error-report.h"
 #include "exec/address-spaces.h"
+#include "exec/tswap.h"
 #include "sysemu/dma.h"
 
 #define RISCV_DEBUG_HTIF 0
@@ -XXX,XX +XXX,XX @@ static void htif_handle_tohost_write(HTIFState *s, uint64_t val_written)
         } else {
             uint64_t syscall[8];
             cpu_physical_memory_read(payload, syscall, sizeof(syscall));
-            if (syscall[0] == PK_SYS_WRITE &&
-                syscall[1] == HTIF_DEV_CONSOLE &&
-                syscall[3] == HTIF_CONSOLE_CMD_PUTC) {
+            if (tswap64(syscall[0]) == PK_SYS_WRITE &&
+                tswap64(syscall[1]) == HTIF_DEV_CONSOLE &&
+                tswap64(syscall[3]) == HTIF_CONSOLE_CMD_PUTC) {
                 uint8_t ch;
-                cpu_physical_memory_read(syscall[2], &ch, 1);
+                cpu_physical_memory_read(tswap64(syscall[2]), &ch, 1);
                 qemu_chr_fe_write(&s->chr, &ch, 1);
                 resp = 0x100 | (uint8_t)payload;
             } else {
--
2.41.0
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

zmmul was promoted from experimental to ratified in commit 6d00ffad4e95.
Add a riscv,isa string for it.

Fixes: 6d00ffad4e95 ("target/riscv: move zmmul out of the experimental properties")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230720132424.371132-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
     ISA_EXT_DATA_ENTRY(zicsr, PRIV_VERSION_1_10_0, ext_icsr),
     ISA_EXT_DATA_ENTRY(zifencei, PRIV_VERSION_1_10_0, ext_ifencei),
     ISA_EXT_DATA_ENTRY(zihintpause, PRIV_VERSION_1_10_0, ext_zihintpause),
+    ISA_EXT_DATA_ENTRY(zmmul, PRIV_VERSION_1_12_0, ext_zmmul),
     ISA_EXT_DATA_ENTRY(zawrs, PRIV_VERSION_1_12_0, ext_zawrs),
     ISA_EXT_DATA_ENTRY(zfa, PRIV_VERSION_1_12_0, ext_zfa),
     ISA_EXT_DATA_ENTRY(zfbfmin, PRIV_VERSION_1_12_0, ext_zfbfmin),
--
2.41.0
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

The cpu->cfg.epmp extension is still experimental, but it already has a
'smepmp' riscv,isa string. Add it.

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230720132424.371132-3-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
     ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
     ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
     ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
+    ISA_EXT_DATA_ENTRY(smepmp, PRIV_VERSION_1_12_0, epmp),
     ISA_EXT_DATA_ENTRY(smstateen, PRIV_VERSION_1_12_0, ext_smstateen),
     ISA_EXT_DATA_ENTRY(ssaia, PRIV_VERSION_1_12_0, ext_ssaia),
     ISA_EXT_DATA_ENTRY(sscofpmf, PRIV_VERSION_1_12_0, ext_sscofpmf),
--
2.41.0
From: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>

Commit bef6f008b98 ("accel/tcg: Return bool from page_check_range")
converted the integer return value of the API to a bool. However, it
wrongly converted its use in the riscv fault-only-first helper, where
"page_check_range <= 0" should have become "!page_check_range".

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20230729031618.821-1-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/vector_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -XXX,XX +XXX,XX @@ vext_ldff(void *vd, void *v0, target_ulong base,
                            cpu_mmu_index(env, false));
         if (host) {
 #ifdef CONFIG_USER_ONLY
-            if (page_check_range(addr, offset, PAGE_READ)) {
+            if (!page_check_range(addr, offset, PAGE_READ)) {
                 vl = i;
                 goto ProbeSuccess;
             }
--
2.41.0
From: Ard Biesheuvel <ardb@kernel.org>

The AES MixColumns and InvMixColumns operations are relatively
expensive 4x4 matrix multiplications in GF(2^8), which is why C
implementations usually rely on precomputed lookup tables rather than
performing the calculations on demand.

Given that we already carry those tables in QEMU, we can just grab the
right value in the implementation of the RISC-V AES32 instructions. Note
that the tables in question are permuted according to the respective
Sbox, so we can omit the Sbox lookup as well in this case.

Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Philippe Mathieu-Daudé <philmd@linaro.org>
Cc: Zewen Ye <lustrew@foxmail.com>
Cc: Weiwei Li <liweiwei@iscas.ac.cn>
Cc: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20230731084043.1791984-1-ardb@kernel.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 include/crypto/aes.h         |  7 +++++++
 crypto/aes.c                 |  4 ++--
 target/riscv/crypto_helper.c | 34 ++++------------------------------
 3 files changed, 13 insertions(+), 32 deletions(-)

diff --git a/include/crypto/aes.h b/include/crypto/aes.h
index XXXXXXX..XXXXXXX 100644
--- a/include/crypto/aes.h
+++ b/include/crypto/aes.h
@@ -XXX,XX +XXX,XX @@ void AES_decrypt(const unsigned char *in, unsigned char *out,
 extern const uint8_t AES_sbox[256];
 extern const uint8_t AES_isbox[256];
 
+/*
+AES_Te0[x] = S [x].[02, 01, 01, 03];
+AES_Td0[x] = Si[x].[0e, 09, 0d, 0b];
+*/
+
+extern const uint32_t AES_Te0[256], AES_Td0[256];
+
 #endif
diff --git a/crypto/aes.c b/crypto/aes.c
index XXXXXXX..XXXXXXX 100644
--- a/crypto/aes.c
+++ b/crypto/aes.c
@@ -XXX,XX +XXX,XX @@ AES_Td3[x] = Si[x].[09, 0d, 0b, 0e];
 AES_Td4[x] = Si[x].[01, 01, 01, 01];
 */
 
-static const uint32_t AES_Te0[256] = {
+const uint32_t AES_Te0[256] = {
     0xc66363a5U, 0xf87c7c84U, 0xee777799U, 0xf67b7b8dU,
     0xfff2f20dU, 0xd66b6bbdU, 0xde6f6fb1U, 0x91c5c554U,
     0x60303050U, 0x02010103U, 0xce6767a9U, 0x562b2b7dU,
@@ -XXX,XX +XXX,XX @@ static const uint32_t AES_Te4[256] = {
     0xb0b0b0b0U, 0x54545454U, 0xbbbbbbbbU, 0x16161616U,
 };
 
-static const uint32_t AES_Td0[256] = {
+const uint32_t AES_Td0[256] = {
     0x51f4a750U, 0x7e416553U, 0x1a17a4c3U, 0x3a275e96U,
     0x3bab6bcbU, 0x1f9d45f1U, 0xacfa58abU, 0x4be30393U,
     0x2030fa55U, 0xad766df6U, 0x88cc7691U, 0xf5024c25U,
diff --git a/target/riscv/crypto_helper.c b/target/riscv/crypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/crypto_helper.c
+++ b/target/riscv/crypto_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "crypto/aes-round.h"
 #include "crypto/sm4.h"
 
-#define AES_XTIME(a) \
-    ((a << 1) ^ ((a & 0x80) ? 0x1b : 0))
-
-#define AES_GFMUL(a, b) (( \
-    (((b) & 0x1) ? (a) : 0) ^ \
-    (((b) & 0x2) ? AES_XTIME(a) : 0) ^ \
-    (((b) & 0x4) ? AES_XTIME(AES_XTIME(a)) : 0) ^ \
-    (((b) & 0x8) ? AES_XTIME(AES_XTIME(AES_XTIME(a))) : 0)) & 0xFF)
-
-static inline uint32_t aes_mixcolumn_byte(uint8_t x, bool fwd)
-{
-    uint32_t u;
-
-    if (fwd) {
-        u = (AES_GFMUL(x, 3) << 24) | (x << 16) | (x << 8) |
-            (AES_GFMUL(x, 2) << 0);
-    } else {
-        u = (AES_GFMUL(x, 0xb) << 24) | (AES_GFMUL(x, 0xd) << 16) |
-            (AES_GFMUL(x, 0x9) << 8) | (AES_GFMUL(x, 0xe) << 0);
-    }
-    return u;
-}
-
 #define sext32_xlen(x) (target_ulong)(int32_t)(x)
 
 static inline target_ulong aes32_operation(target_ulong shamt,
@@ -XXX,XX +XXX,XX @@ static inline target_ulong aes32_operation(target_ulong shamt,
                                            bool enc, bool mix)
 {
     uint8_t si = rs2 >> shamt;
-    uint8_t so;
     uint32_t mixed;
     target_ulong res;
 
     if (enc) {
-        so = AES_sbox[si];
         if (mix) {
110
if (mix) {
111
- mixed = aes_mixcolumn_byte(so, true);
112
+ mixed = be32_to_cpu(AES_Te0[si]);
113
} else {
114
- mixed = so;
115
+ mixed = AES_sbox[si];
116
}
117
} else {
118
- so = AES_isbox[si];
119
if (mix) {
120
- mixed = aes_mixcolumn_byte(so, false);
121
+ mixed = be32_to_cpu(AES_Td0[si]);
122
} else {
123
- mixed = so;
124
+ mixed = AES_isbox[si];
125
}
126
}
127
mixed = rol32(mixed, shamt);
165
--
128
--
166
2.34.1
129
2.41.0
167
130
168
131
diff view generated by jsdifflib
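[Editorial aside, not part of the patch: the table lookup and the removed helper agree byte for byte. A minimal standalone sketch, reimplementing the removed AES_XTIME/AES_GFMUL logic and checking it against the published first entry AES_Te0[0] = 0xc66363a5 with AES_sbox[0] = 0x63, as used by be32_to_cpu() on a little-endian host:]

```python
# Check that the removed AES_GFMUL/aes_mixcolumn_byte path and the
# AES_Te0 lookup (byte-swapped on a little-endian host) agree.
def xtime(a):
    # multiply by 2 in GF(2^8) with the AES reduction polynomial
    return ((a << 1) ^ (0x1b if a & 0x80 else 0)) & 0xff

def gfmul(a, b):
    # GF(2^8) multiply for b in 0..15, mirroring the AES_GFMUL macro
    r = 0
    for bit in range(4):
        if b & (1 << bit):
            r ^= a
        a = xtime(a)
    return r & 0xff

def mixcolumn_byte_fwd(x):
    # forward branch of the removed aes_mixcolumn_byte()
    return (gfmul(x, 3) << 24) | (x << 16) | (x << 8) | gfmul(x, 2)

def bswap32(v):
    # what be32_to_cpu() does on a little-endian host
    return int.from_bytes(v.to_bytes(4, "big"), "little")

# AES_sbox[0] is 0x63 and AES_Te0[0] is 0xc66363a5 (first entry above)
assert mixcolumn_byte_fwd(0x63) == bswap32(0xc66363a5) == 0xa56363c6
```

The lookup table therefore replaces four GF(2^8) multiplies per byte with a single load plus a byte swap.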
1
From: Anup Patel <anup.patel@wdc.com>
1
From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
2
2
3
The RISC-V AIA (Advanced Interrupt Architecture) defines a new
3
Take some functions/macros out of `vector_helper` and put them in a new
4
interrupt controller for wired interrupts called APLIC (Advanced
4
module called `vector_internals`. This ensures they can be used by both
5
Platform Level Interrupt Controller). The APLIC is capable of
5
vector and vector-crypto helpers (the latter implemented in subsequent
6
forwarding wired interrupts to RISC-V HARTs directly or as MSIs
6
commits).
7
(Message Signaled Interrupts).
8
7
9
This patch adds device emulation for RISC-V AIA APLIC.
8
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
10
9
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
11
Signed-off-by: Anup Patel <anup.patel@wdc.com>
10
Signed-off-by: Max Chou <max.chou@sifive.com>
12
Signed-off-by: Anup Patel <anup@brainfault.org>
11
Acked-by: Alistair Francis <alistair.francis@wdc.com>
13
Reviewed-by: Frank Chang <frank.chang@sifive.com>
12
Message-ID: <20230711165917.2629866-2-max.chou@sifive.com>
14
Message-id: 20220204174700.534953-19-anup@brainfault.org
15
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
13
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
16
---
14
---
17
include/hw/intc/riscv_aplic.h | 79 +++
15
target/riscv/vector_internals.h | 182 +++++++++++++++++++++++++++++
18
hw/intc/riscv_aplic.c | 978 ++++++++++++++++++++++++++++++++++
16
target/riscv/vector_helper.c | 201 +-------------------------------
19
hw/intc/Kconfig | 3 +
17
target/riscv/vector_internals.c | 81 +++++++++++++
20
hw/intc/meson.build | 1 +
18
target/riscv/meson.build | 1 +
21
4 files changed, 1061 insertions(+)
19
4 files changed, 265 insertions(+), 200 deletions(-)
22
create mode 100644 include/hw/intc/riscv_aplic.h
20
create mode 100644 target/riscv/vector_internals.h
23
create mode 100644 hw/intc/riscv_aplic.c
21
create mode 100644 target/riscv/vector_internals.c
24
22
25
diff --git a/include/hw/intc/riscv_aplic.h b/include/hw/intc/riscv_aplic.h
23
diff --git a/target/riscv/vector_internals.h b/target/riscv/vector_internals.h
26
new file mode 100644
24
new file mode 100644
27
index XXXXXXX..XXXXXXX
25
index XXXXXXX..XXXXXXX
28
--- /dev/null
26
--- /dev/null
29
+++ b/include/hw/intc/riscv_aplic.h
27
+++ b/target/riscv/vector_internals.h
30
@@ -XXX,XX +XXX,XX @@
28
@@ -XXX,XX +XXX,XX @@
31
+/*
29
+/*
32
+ * RISC-V APLIC (Advanced Platform Level Interrupt Controller) interface
30
+ * RISC-V Vector Extension Internals
33
+ *
31
+ *
34
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
32
+ * Copyright (c) 2020 T-Head Semiconductor Co., Ltd. All rights reserved.
35
+ *
33
+ *
36
+ * This program is free software; you can redistribute it and/or modify it
34
+ * This program is free software; you can redistribute it and/or modify it
37
+ * under the terms and conditions of the GNU General Public License,
35
+ * under the terms and conditions of the GNU General Public License,
38
+ * version 2 or later, as published by the Free Software Foundation.
36
+ * version 2 or later, as published by the Free Software Foundation.
39
+ *
37
+ *
...
...
44
+ *
42
+ *
45
+ * You should have received a copy of the GNU General Public License along with
43
+ * You should have received a copy of the GNU General Public License along with
46
+ * this program. If not, see <http://www.gnu.org/licenses/>.
44
+ * this program. If not, see <http://www.gnu.org/licenses/>.
47
+ */
45
+ */
48
+
46
+
49
+#ifndef HW_RISCV_APLIC_H
47
+#ifndef TARGET_RISCV_VECTOR_INTERNALS_H
50
+#define HW_RISCV_APLIC_H
48
+#define TARGET_RISCV_VECTOR_INTERNALS_H
51
+
49
+
52
+#include "hw/sysbus.h"
50
+#include "qemu/osdep.h"
53
+#include "qom/object.h"
51
+#include "qemu/bitops.h"
54
+
52
+#include "cpu.h"
55
+#define TYPE_RISCV_APLIC "riscv.aplic"
53
+#include "tcg/tcg-gvec-desc.h"
56
+
54
+#include "internals.h"
57
+typedef struct RISCVAPLICState RISCVAPLICState;
55
+
58
+DECLARE_INSTANCE_CHECKER(RISCVAPLICState, RISCV_APLIC, TYPE_RISCV_APLIC)
56
+static inline uint32_t vext_nf(uint32_t desc)
59
+
57
+{
60
+#define APLIC_MIN_SIZE 0x4000
58
+ return FIELD_EX32(simd_data(desc), VDATA, NF);
61
+#define APLIC_SIZE_ALIGN(__x) (((__x) + (APLIC_MIN_SIZE - 1)) & \
59
+}
62
+ ~(APLIC_MIN_SIZE - 1))
60
+
63
+#define APLIC_SIZE(__num_harts) (APLIC_MIN_SIZE + \
61
+/*
64
+ APLIC_SIZE_ALIGN(32 * (__num_harts)))
62
+ * Note that vector data is stored in host-endian 64-bit chunks,
65
+
63
+ * so addressing units smaller than that needs a host-endian fixup.
66
+struct RISCVAPLICState {
64
+ */
67
+ /*< private >*/
65
+#if HOST_BIG_ENDIAN
68
+ SysBusDevice parent_obj;
66
+#define H1(x) ((x) ^ 7)
69
+ qemu_irq *external_irqs;
67
+#define H1_2(x) ((x) ^ 6)
70
+
68
+#define H1_4(x) ((x) ^ 4)
71
+ /*< public >*/
69
+#define H2(x) ((x) ^ 3)
72
+ MemoryRegion mmio;
70
+#define H4(x) ((x) ^ 1)
73
+ uint32_t bitfield_words;
71
+#define H8(x) ((x))
74
+ uint32_t domaincfg;
72
+#else
75
+ uint32_t mmsicfgaddr;
73
+#define H1(x) (x)
76
+ uint32_t mmsicfgaddrH;
74
+#define H1_2(x) (x)
77
+ uint32_t smsicfgaddr;
75
+#define H1_4(x) (x)
78
+ uint32_t smsicfgaddrH;
76
+#define H2(x) (x)
79
+ uint32_t genmsi;
77
+#define H4(x) (x)
80
+ uint32_t *sourcecfg;
78
+#define H8(x) (x)
81
+ uint32_t *state;
82
+ uint32_t *target;
83
+ uint32_t *idelivery;
84
+ uint32_t *iforce;
85
+ uint32_t *ithreshold;
86
+
87
+ /* topology */
88
+#define QEMU_APLIC_MAX_CHILDREN 16
89
+ struct RISCVAPLICState *parent;
90
+ struct RISCVAPLICState *children[QEMU_APLIC_MAX_CHILDREN];
91
+ uint16_t num_children;
92
+
93
+ /* config */
94
+ uint32_t aperture_size;
95
+ uint32_t hartid_base;
96
+ uint32_t num_harts;
97
+ uint32_t iprio_mask;
98
+ uint32_t num_irqs;
99
+ bool msimode;
100
+ bool mmode;
101
+};
102
+
103
+void riscv_aplic_add_child(DeviceState *parent, DeviceState *child);
104
+
105
+DeviceState *riscv_aplic_create(hwaddr addr, hwaddr size,
106
+ uint32_t hartid_base, uint32_t num_harts, uint32_t num_sources,
107
+ uint32_t iprio_bits, bool msimode, bool mmode, DeviceState *parent);
108
+
109
+#endif
79
+#endif
110
diff --git a/hw/intc/riscv_aplic.c b/hw/intc/riscv_aplic.c
80
+
81
+/*
82
+ * Encode LMUL to lmul as following:
83
+ * LMUL vlmul lmul
84
+ * 1 000 0
85
+ * 2 001 1
86
+ * 4 010 2
87
+ * 8 011 3
88
+ * - 100 -
89
+ * 1/8 101 -3
90
+ * 1/4 110 -2
91
+ * 1/2 111 -1
92
+ */
93
+static inline int32_t vext_lmul(uint32_t desc)
94
+{
95
+ return sextract32(FIELD_EX32(simd_data(desc), VDATA, LMUL), 0, 3);
96
+}
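[Editorial aside, not part of the patch: the vlmul decode above is a plain 3-bit sign extension over the table just quoted. A sketch with sextract32() reimplemented for illustration:]

```python
def sextract32(value, start, length):
    # minimal reimplementation of QEMU's sextract32() for this illustration
    mask = (1 << length) - 1
    field = (value >> start) & mask
    sign = 1 << (length - 1)
    return (field ^ sign) - sign

# vlmul encodings from the table above:
#   000..011 -> LMUL 1, 2, 4, 8   (lmul = 0..3)
#   101..111 -> LMUL 1/8, 1/4, 1/2 (lmul = -3..-1)
assert [sextract32(v, 0, 3) for v in (0b000, 0b001, 0b010, 0b011)] == [0, 1, 2, 3]
assert [sextract32(v, 0, 3) for v in (0b101, 0b110, 0b111)] == [-3, -2, -1]
```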
97
+
98
+static inline uint32_t vext_vm(uint32_t desc)
99
+{
100
+ return FIELD_EX32(simd_data(desc), VDATA, VM);
101
+}
102
+
103
+static inline uint32_t vext_vma(uint32_t desc)
104
+{
105
+ return FIELD_EX32(simd_data(desc), VDATA, VMA);
106
+}
107
+
108
+static inline uint32_t vext_vta(uint32_t desc)
109
+{
110
+ return FIELD_EX32(simd_data(desc), VDATA, VTA);
111
+}
112
+
113
+static inline uint32_t vext_vta_all_1s(uint32_t desc)
114
+{
115
+ return FIELD_EX32(simd_data(desc), VDATA, VTA_ALL_1S);
116
+}
117
+
118
+/*
119
+ * Earlier designs (pre-0.9) had a varying number of bits
120
+ * per mask value (MLEN). In the 0.9 design, MLEN=1.
121
+ * (Section 4.5)
122
+ */
123
+static inline int vext_elem_mask(void *v0, int index)
124
+{
125
+ int idx = index / 64;
126
+ int pos = index % 64;
127
+ return (((uint64_t *)v0)[idx] >> pos) & 1;
128
+}
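[Editorial aside, not part of the patch: with MLEN=1 the mask register holds one bit per element, packed into host-endian 64-bit chunks, so the lookup above is just a word index plus a bit shift. A sketch over a Python list standing in for the v0 buffer:]

```python
def vext_elem_mask(v0_words, index):
    # v0_words: list of 64-bit integers (host-endian chunks of v0);
    # element `index` maps to bit (index % 64) of word (index // 64)
    return (v0_words[index // 64] >> (index % 64)) & 1

words = [0, 0]
words[1] |= 1 << 5           # set the mask bit for element 69 (64 + 5)
assert vext_elem_mask(words, 69) == 1
assert vext_elem_mask(words, 68) == 0
```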
129
+
130
+/*
131
+ * Get number of total elements, including prestart, body and tail elements.
132
+ * Note that when LMUL < 1, the tail includes the elements past VLMAX that
133
+ * are held in the same vector register.
134
+ */
135
+static inline uint32_t vext_get_total_elems(CPURISCVState *env, uint32_t desc,
136
+ uint32_t esz)
137
+{
138
+ uint32_t vlenb = simd_maxsz(desc);
139
+ uint32_t sew = 1 << FIELD_EX64(env->vtype, VTYPE, VSEW);
140
+ int8_t emul = ctzl(esz) - ctzl(sew) + vext_lmul(desc) < 0 ? 0 :
141
+ ctzl(esz) - ctzl(sew) + vext_lmul(desc);
142
+ return (vlenb << emul) / esz;
143
+}
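[Editorial aside, not part of the patch: the emul clamp above keeps the effective multiplier at or above 0, so with fractional LMUL the tail still spans the whole register holding the body elements. A sketch of the arithmetic, with sew and esz both in bytes as in the helper:]

```python
def total_elems(vlenb, sew, esz, lmul):
    # emul = ctz(esz) - ctz(sew) + lmul, clamped at 0 (all powers of two,
    # so ctz == bit_length - 1); total = (vlenb << emul) / esz
    emul = max(0, (esz.bit_length() - 1) - (sew.bit_length() - 1) + lmul)
    return (vlenb << emul) // esz

# vlenb = 16 bytes (VLEN = 128), SEW = 32 bits (4 bytes), 4-byte elements:
assert total_elems(16, 4, 4, 0) == 4    # LMUL = 1   -> 4 total elements
assert total_elems(16, 4, 4, 1) == 8    # LMUL = 2   -> 8 total elements
assert total_elems(16, 4, 4, -1) == 4   # LMUL = 1/2 -> tail fills the register
```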
144
+
145
+/* set agnostic elements to 1s */
146
+void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
147
+ uint32_t tot);
148
+
149
+/* expand macro args before macro */
150
+#define RVVCALL(macro, ...) macro(__VA_ARGS__)
151
+
152
+/* (TD, T1, T2, TX1, TX2) */
153
+#define OP_UUU_B uint8_t, uint8_t, uint8_t, uint8_t, uint8_t
154
+#define OP_UUU_H uint16_t, uint16_t, uint16_t, uint16_t, uint16_t
155
+#define OP_UUU_W uint32_t, uint32_t, uint32_t, uint32_t, uint32_t
156
+#define OP_UUU_D uint64_t, uint64_t, uint64_t, uint64_t, uint64_t
157
+
158
+/* operation of two vector elements */
159
+typedef void opivv2_fn(void *vd, void *vs1, void *vs2, int i);
160
+
161
+#define OPIVV2(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \
162
+static void do_##NAME(void *vd, void *vs1, void *vs2, int i) \
163
+{ \
164
+ TX1 s1 = *((T1 *)vs1 + HS1(i)); \
165
+ TX2 s2 = *((T2 *)vs2 + HS2(i)); \
166
+ *((TD *)vd + HD(i)) = OP(s2, s1); \
167
+}
168
+
169
+void do_vext_vv(void *vd, void *v0, void *vs1, void *vs2,
170
+ CPURISCVState *env, uint32_t desc,
171
+ opivv2_fn *fn, uint32_t esz);
172
+
173
+/* generate the helpers for OPIVV */
174
+#define GEN_VEXT_VV(NAME, ESZ) \
175
+void HELPER(NAME)(void *vd, void *v0, void *vs1, \
176
+ void *vs2, CPURISCVState *env, \
177
+ uint32_t desc) \
178
+{ \
179
+ do_vext_vv(vd, v0, vs1, vs2, env, desc, \
180
+ do_##NAME, ESZ); \
181
+}
182
+
183
+typedef void opivx2_fn(void *vd, target_long s1, void *vs2, int i);
184
+
185
+/*
186
+ * (T1)s1 gives the real operator type.
187
+ * (TX1)(T1)s1 expands the operator type of widen or narrow operations.
188
+ */
189
+#define OPIVX2(NAME, TD, T1, T2, TX1, TX2, HD, HS2, OP) \
190
+static void do_##NAME(void *vd, target_long s1, void *vs2, int i) \
191
+{ \
192
+ TX2 s2 = *((T2 *)vs2 + HS2(i)); \
193
+ *((TD *)vd + HD(i)) = OP(s2, (TX1)(T1)s1); \
194
+}
195
+
196
+void do_vext_vx(void *vd, void *v0, target_long s1, void *vs2,
197
+ CPURISCVState *env, uint32_t desc,
198
+ opivx2_fn fn, uint32_t esz);
199
+
200
+/* generate the helpers for OPIVX */
201
+#define GEN_VEXT_VX(NAME, ESZ) \
202
+void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \
203
+ void *vs2, CPURISCVState *env, \
204
+ uint32_t desc) \
205
+{ \
206
+ do_vext_vx(vd, v0, s1, vs2, env, desc, \
207
+ do_##NAME, ESZ); \
208
+}
209
+
210
+#endif /* TARGET_RISCV_VECTOR_INTERNALS_H */
211
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
212
index XXXXXXX..XXXXXXX 100644
213
--- a/target/riscv/vector_helper.c
214
+++ b/target/riscv/vector_helper.c
215
@@ -XXX,XX +XXX,XX @@
216
#include "fpu/softfloat.h"
217
#include "tcg/tcg-gvec-desc.h"
218
#include "internals.h"
219
+#include "vector_internals.h"
220
#include <math.h>
221
222
target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
223
@@ -XXX,XX +XXX,XX @@ target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
224
return vl;
225
}
226
227
-/*
228
- * Note that vector data is stored in host-endian 64-bit chunks,
229
- * so addressing units smaller than that needs a host-endian fixup.
230
- */
231
-#if HOST_BIG_ENDIAN
232
-#define H1(x) ((x) ^ 7)
233
-#define H1_2(x) ((x) ^ 6)
234
-#define H1_4(x) ((x) ^ 4)
235
-#define H2(x) ((x) ^ 3)
236
-#define H4(x) ((x) ^ 1)
237
-#define H8(x) ((x))
238
-#else
239
-#define H1(x) (x)
240
-#define H1_2(x) (x)
241
-#define H1_4(x) (x)
242
-#define H2(x) (x)
243
-#define H4(x) (x)
244
-#define H8(x) (x)
245
-#endif
246
-
247
-static inline uint32_t vext_nf(uint32_t desc)
248
-{
249
- return FIELD_EX32(simd_data(desc), VDATA, NF);
250
-}
251
-
252
-static inline uint32_t vext_vm(uint32_t desc)
253
-{
254
- return FIELD_EX32(simd_data(desc), VDATA, VM);
255
-}
256
-
257
-/*
258
- * Encode LMUL to lmul as following:
259
- * LMUL vlmul lmul
260
- * 1 000 0
261
- * 2 001 1
262
- * 4 010 2
263
- * 8 011 3
264
- * - 100 -
265
- * 1/8 101 -3
266
- * 1/4 110 -2
267
- * 1/2 111 -1
268
- */
269
-static inline int32_t vext_lmul(uint32_t desc)
270
-{
271
- return sextract32(FIELD_EX32(simd_data(desc), VDATA, LMUL), 0, 3);
272
-}
273
-
274
-static inline uint32_t vext_vta(uint32_t desc)
275
-{
276
- return FIELD_EX32(simd_data(desc), VDATA, VTA);
277
-}
278
-
279
-static inline uint32_t vext_vma(uint32_t desc)
280
-{
281
- return FIELD_EX32(simd_data(desc), VDATA, VMA);
282
-}
283
-
284
-static inline uint32_t vext_vta_all_1s(uint32_t desc)
285
-{
286
- return FIELD_EX32(simd_data(desc), VDATA, VTA_ALL_1S);
287
-}
288
-
289
/*
290
* Get the maximum number of elements can be operated.
291
*
292
@@ -XXX,XX +XXX,XX @@ static inline uint32_t vext_max_elems(uint32_t desc, uint32_t log2_esz)
293
return scale < 0 ? vlenb >> -scale : vlenb << scale;
294
}
295
296
-/*
297
- * Get number of total elements, including prestart, body and tail elements.
298
- * Note that when LMUL < 1, the tail includes the elements past VLMAX that
299
- * are held in the same vector register.
300
- */
301
-static inline uint32_t vext_get_total_elems(CPURISCVState *env, uint32_t desc,
302
- uint32_t esz)
303
-{
304
- uint32_t vlenb = simd_maxsz(desc);
305
- uint32_t sew = 1 << FIELD_EX64(env->vtype, VTYPE, VSEW);
306
- int8_t emul = ctzl(esz) - ctzl(sew) + vext_lmul(desc) < 0 ? 0 :
307
- ctzl(esz) - ctzl(sew) + vext_lmul(desc);
308
- return (vlenb << emul) / esz;
309
-}
310
-
311
static inline target_ulong adjust_addr(CPURISCVState *env, target_ulong addr)
312
{
313
return (addr & ~env->cur_pmmask) | env->cur_pmbase;
314
@@ -XXX,XX +XXX,XX @@ static void probe_pages(CPURISCVState *env, target_ulong addr,
315
}
316
}
317
318
-/* set agnostic elements to 1s */
319
-static void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
320
- uint32_t tot)
321
-{
322
- if (is_agnostic == 0) {
323
- /* policy undisturbed */
324
- return;
325
- }
326
- if (tot - cnt == 0) {
327
- return;
328
- }
329
- memset(base + cnt, -1, tot - cnt);
330
-}
331
-
332
static inline void vext_set_elem_mask(void *v0, int index,
333
uint8_t value)
334
{
335
@@ -XXX,XX +XXX,XX @@ static inline void vext_set_elem_mask(void *v0, int index,
336
((uint64_t *)v0)[idx] = deposit64(old, pos, 1, value);
337
}
338
339
-/*
340
- * Earlier designs (pre-0.9) had a varying number of bits
341
- * per mask value (MLEN). In the 0.9 design, MLEN=1.
342
- * (Section 4.5)
343
- */
344
-static inline int vext_elem_mask(void *v0, int index)
345
-{
346
- int idx = index / 64;
347
- int pos = index % 64;
348
- return (((uint64_t *)v0)[idx] >> pos) & 1;
349
-}
350
-
351
/* elements operations for load and store */
352
typedef void vext_ldst_elem_fn(CPURISCVState *env, abi_ptr addr,
353
uint32_t idx, void *vd, uintptr_t retaddr);
354
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b)
355
* Vector Integer Arithmetic Instructions
356
*/
357
358
-/* expand macro args before macro */
359
-#define RVVCALL(macro, ...) macro(__VA_ARGS__)
360
-
361
/* (TD, T1, T2, TX1, TX2) */
362
#define OP_SSS_B int8_t, int8_t, int8_t, int8_t, int8_t
363
#define OP_SSS_H int16_t, int16_t, int16_t, int16_t, int16_t
364
#define OP_SSS_W int32_t, int32_t, int32_t, int32_t, int32_t
365
#define OP_SSS_D int64_t, int64_t, int64_t, int64_t, int64_t
366
-#define OP_UUU_B uint8_t, uint8_t, uint8_t, uint8_t, uint8_t
367
-#define OP_UUU_H uint16_t, uint16_t, uint16_t, uint16_t, uint16_t
368
-#define OP_UUU_W uint32_t, uint32_t, uint32_t, uint32_t, uint32_t
369
-#define OP_UUU_D uint64_t, uint64_t, uint64_t, uint64_t, uint64_t
370
#define OP_SUS_B int8_t, uint8_t, int8_t, uint8_t, int8_t
371
#define OP_SUS_H int16_t, uint16_t, int16_t, uint16_t, int16_t
372
#define OP_SUS_W int32_t, uint32_t, int32_t, uint32_t, int32_t
373
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b)
374
#define NOP_UUU_H uint16_t, uint16_t, uint32_t, uint16_t, uint32_t
375
#define NOP_UUU_W uint32_t, uint32_t, uint64_t, uint32_t, uint64_t
376
377
-/* operation of two vector elements */
378
-typedef void opivv2_fn(void *vd, void *vs1, void *vs2, int i);
379
-
380
-#define OPIVV2(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \
381
-static void do_##NAME(void *vd, void *vs1, void *vs2, int i) \
382
-{ \
383
- TX1 s1 = *((T1 *)vs1 + HS1(i)); \
384
- TX2 s2 = *((T2 *)vs2 + HS2(i)); \
385
- *((TD *)vd + HD(i)) = OP(s2, s1); \
386
-}
387
#define DO_SUB(N, M) (N - M)
388
#define DO_RSUB(N, M) (M - N)
389
390
@@ -XXX,XX +XXX,XX @@ RVVCALL(OPIVV2, vsub_vv_h, OP_SSS_H, H2, H2, H2, DO_SUB)
391
RVVCALL(OPIVV2, vsub_vv_w, OP_SSS_W, H4, H4, H4, DO_SUB)
392
RVVCALL(OPIVV2, vsub_vv_d, OP_SSS_D, H8, H8, H8, DO_SUB)
393
394
-static void do_vext_vv(void *vd, void *v0, void *vs1, void *vs2,
395
- CPURISCVState *env, uint32_t desc,
396
- opivv2_fn *fn, uint32_t esz)
397
-{
398
- uint32_t vm = vext_vm(desc);
399
- uint32_t vl = env->vl;
400
- uint32_t total_elems = vext_get_total_elems(env, desc, esz);
401
- uint32_t vta = vext_vta(desc);
402
- uint32_t vma = vext_vma(desc);
403
- uint32_t i;
404
-
405
- for (i = env->vstart; i < vl; i++) {
406
- if (!vm && !vext_elem_mask(v0, i)) {
407
- /* set masked-off elements to 1s */
408
- vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
409
- continue;
410
- }
411
- fn(vd, vs1, vs2, i);
412
- }
413
- env->vstart = 0;
414
- /* set tail elements to 1s */
415
- vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
416
-}
417
-
418
-/* generate the helpers for OPIVV */
419
-#define GEN_VEXT_VV(NAME, ESZ) \
420
-void HELPER(NAME)(void *vd, void *v0, void *vs1, \
421
- void *vs2, CPURISCVState *env, \
422
- uint32_t desc) \
423
-{ \
424
- do_vext_vv(vd, v0, vs1, vs2, env, desc, \
425
- do_##NAME, ESZ); \
426
-}
427
-
428
GEN_VEXT_VV(vadd_vv_b, 1)
429
GEN_VEXT_VV(vadd_vv_h, 2)
430
GEN_VEXT_VV(vadd_vv_w, 4)
431
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_VV(vsub_vv_h, 2)
432
GEN_VEXT_VV(vsub_vv_w, 4)
433
GEN_VEXT_VV(vsub_vv_d, 8)
434
435
-typedef void opivx2_fn(void *vd, target_long s1, void *vs2, int i);
436
-
437
-/*
438
- * (T1)s1 gives the real operator type.
439
- * (TX1)(T1)s1 expands the operator type of widen or narrow operations.
440
- */
441
-#define OPIVX2(NAME, TD, T1, T2, TX1, TX2, HD, HS2, OP) \
442
-static void do_##NAME(void *vd, target_long s1, void *vs2, int i) \
443
-{ \
444
- TX2 s2 = *((T2 *)vs2 + HS2(i)); \
445
- *((TD *)vd + HD(i)) = OP(s2, (TX1)(T1)s1); \
446
-}
447
448
RVVCALL(OPIVX2, vadd_vx_b, OP_SSS_B, H1, H1, DO_ADD)
449
RVVCALL(OPIVX2, vadd_vx_h, OP_SSS_H, H2, H2, DO_ADD)
450
@@ -XXX,XX +XXX,XX @@ RVVCALL(OPIVX2, vrsub_vx_h, OP_SSS_H, H2, H2, DO_RSUB)
451
RVVCALL(OPIVX2, vrsub_vx_w, OP_SSS_W, H4, H4, DO_RSUB)
452
RVVCALL(OPIVX2, vrsub_vx_d, OP_SSS_D, H8, H8, DO_RSUB)
453
454
-static void do_vext_vx(void *vd, void *v0, target_long s1, void *vs2,
455
- CPURISCVState *env, uint32_t desc,
456
- opivx2_fn fn, uint32_t esz)
457
-{
458
- uint32_t vm = vext_vm(desc);
459
- uint32_t vl = env->vl;
460
- uint32_t total_elems = vext_get_total_elems(env, desc, esz);
461
- uint32_t vta = vext_vta(desc);
462
- uint32_t vma = vext_vma(desc);
463
- uint32_t i;
464
-
465
- for (i = env->vstart; i < vl; i++) {
466
- if (!vm && !vext_elem_mask(v0, i)) {
467
- /* set masked-off elements to 1s */
468
- vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
469
- continue;
470
- }
471
- fn(vd, s1, vs2, i);
472
- }
473
- env->vstart = 0;
474
- /* set tail elements to 1s */
475
- vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
476
-}
477
-
478
-/* generate the helpers for OPIVX */
479
-#define GEN_VEXT_VX(NAME, ESZ) \
480
-void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \
481
- void *vs2, CPURISCVState *env, \
482
- uint32_t desc) \
483
-{ \
484
- do_vext_vx(vd, v0, s1, vs2, env, desc, \
485
- do_##NAME, ESZ); \
486
-}
487
-
488
GEN_VEXT_VX(vadd_vx_b, 1)
489
GEN_VEXT_VX(vadd_vx_h, 2)
490
GEN_VEXT_VX(vadd_vx_w, 4)
491
diff --git a/target/riscv/vector_internals.c b/target/riscv/vector_internals.c
111
new file mode 100644
492
new file mode 100644
112
index XXXXXXX..XXXXXXX
493
index XXXXXXX..XXXXXXX
113
--- /dev/null
494
--- /dev/null
114
+++ b/hw/intc/riscv_aplic.c
495
+++ b/target/riscv/vector_internals.c
115
@@ -XXX,XX +XXX,XX @@
496
@@ -XXX,XX +XXX,XX @@
116
+/*
497
+/*
117
+ * RISC-V APLIC (Advanced Platform Level Interrupt Controller)
498
+ * RISC-V Vector Extension Internals
118
+ *
499
+ *
119
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
500
+ * Copyright (c) 2020 T-Head Semiconductor Co., Ltd. All rights reserved.
120
+ *
501
+ *
121
+ * This program is free software; you can redistribute it and/or modify it
502
+ * This program is free software; you can redistribute it and/or modify it
122
+ * under the terms and conditions of the GNU General Public License,
503
+ * under the terms and conditions of the GNU General Public License,
123
+ * version 2 or later, as published by the Free Software Foundation.
504
+ * version 2 or later, as published by the Free Software Foundation.
124
+ *
505
+ *
...
...
129
+ *
510
+ *
130
+ * You should have received a copy of the GNU General Public License along with
511
+ * You should have received a copy of the GNU General Public License along with
131
+ * this program. If not, see <http://www.gnu.org/licenses/>.
512
+ * this program. If not, see <http://www.gnu.org/licenses/>.
132
+ */
513
+ */
133
+
514
+
134
+#include "qemu/osdep.h"
515
+#include "vector_internals.h"
135
+#include "qapi/error.h"
516
+
136
+#include "qemu/log.h"
517
+/* set agnostic elements to 1s */
137
+#include "qemu/module.h"
518
+void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
138
+#include "qemu/error-report.h"
519
+ uint32_t tot)
139
+#include "qemu/bswap.h"
520
+{
140
+#include "exec/address-spaces.h"
521
+ if (is_agnostic == 0) {
141
+#include "hw/sysbus.h"
522
+ /* policy undisturbed */
142
+#include "hw/pci/msi.h"
523
+ return;
143
+#include "hw/boards.h"
524
+ }
144
+#include "hw/qdev-properties.h"
525
+ if (tot - cnt == 0) {
145
+#include "hw/intc/riscv_aplic.h"
526
+ return;
146
+#include "hw/irq.h"
527
+ }
147
+#include "target/riscv/cpu.h"
528
+ memset(base + cnt, -1, tot - cnt);
148
+#include "sysemu/sysemu.h"
529
+}
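[Editorial aside, not part of the patch: the agnostic policy above writes all-ones over the byte range [cnt, tot) and leaves it untouched when the policy is "undisturbed". A sketch over a bytearray standing in for a vector register group:]

```python
def set_elems_1s(buf, is_agnostic, cnt, tot):
    # buf: bytearray backing a vector register group
    if not is_agnostic or tot == cnt:
        return              # undisturbed policy: leave the bytes as they are
    buf[cnt:tot] = b"\xff" * (tot - cnt)

reg = bytearray(8)
set_elems_1s(reg, 1, 4, 8)  # mark the 4 tail bytes agnostic
assert reg == bytearray(b"\x00\x00\x00\x00\xff\xff\xff\xff")
```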
149
+#include "migration/vmstate.h"
530
+
150
+
531
+void do_vext_vv(void *vd, void *v0, void *vs1, void *vs2,
151
+#define APLIC_MAX_IDC (1UL << 14)
532
+ CPURISCVState *env, uint32_t desc,
152
+#define APLIC_MAX_SOURCE 1024
533
+ opivv2_fn *fn, uint32_t esz)
153
+#define APLIC_MIN_IPRIO_BITS 1
534
+{
154
+#define APLIC_MAX_IPRIO_BITS 8
535
+ uint32_t vm = vext_vm(desc);
155
+#define APLIC_MAX_CHILDREN 1024
536
+ uint32_t vl = env->vl;
156
+
537
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
157
+#define APLIC_DOMAINCFG 0x0000
538
+ uint32_t vta = vext_vta(desc);
158
+#define APLIC_DOMAINCFG_RDONLY 0x80000000
539
+ uint32_t vma = vext_vma(desc);
159
+#define APLIC_DOMAINCFG_IE (1 << 8)
540
+ uint32_t i;
160
+#define APLIC_DOMAINCFG_DM (1 << 2)
541
+
161
+#define APLIC_DOMAINCFG_BE (1 << 0)
542
+ for (i = env->vstart; i < vl; i++) {
162
+
543
+ if (!vm && !vext_elem_mask(v0, i)) {
163
+#define APLIC_SOURCECFG_BASE 0x0004
544
+ /* set masked-off elements to 1s */
164
+#define APLIC_SOURCECFG_D (1 << 10)
545
+ vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
165
+#define APLIC_SOURCECFG_CHILDIDX_MASK 0x000003ff
166
+#define APLIC_SOURCECFG_SM_MASK 0x00000007
167
+#define APLIC_SOURCECFG_SM_INACTIVE 0x0
168
+#define APLIC_SOURCECFG_SM_DETACH 0x1
169
+#define APLIC_SOURCECFG_SM_EDGE_RISE 0x4
170
+#define APLIC_SOURCECFG_SM_EDGE_FALL 0x5
171
+#define APLIC_SOURCECFG_SM_LEVEL_HIGH 0x6
172
+#define APLIC_SOURCECFG_SM_LEVEL_LOW 0x7
173
+
174
+#define APLIC_MMSICFGADDR 0x1bc0
175
+#define APLIC_MMSICFGADDRH 0x1bc4
176
+#define APLIC_SMSICFGADDR 0x1bc8
177
+#define APLIC_SMSICFGADDRH 0x1bcc
178
+
179
+#define APLIC_xMSICFGADDRH_L (1UL << 31)
180
+#define APLIC_xMSICFGADDRH_HHXS_MASK 0x1f
181
+#define APLIC_xMSICFGADDRH_HHXS_SHIFT 24
182
+#define APLIC_xMSICFGADDRH_LHXS_MASK 0x7
183
+#define APLIC_xMSICFGADDRH_LHXS_SHIFT 20
184
+#define APLIC_xMSICFGADDRH_HHXW_MASK 0x7
185
+#define APLIC_xMSICFGADDRH_HHXW_SHIFT 16
186
+#define APLIC_xMSICFGADDRH_LHXW_MASK 0xf
187
+#define APLIC_xMSICFGADDRH_LHXW_SHIFT 12
188
+#define APLIC_xMSICFGADDRH_BAPPN_MASK 0xfff
189
+
190
+#define APLIC_xMSICFGADDR_PPN_SHIFT 12
191
+
192
+#define APLIC_xMSICFGADDR_PPN_HART(__lhxs) \
193
+ ((1UL << (__lhxs)) - 1)
194
+
195
+#define APLIC_xMSICFGADDR_PPN_LHX_MASK(__lhxw) \
196
+ ((1UL << (__lhxw)) - 1)
197
+#define APLIC_xMSICFGADDR_PPN_LHX_SHIFT(__lhxs) \
198
+ ((__lhxs))
199
+#define APLIC_xMSICFGADDR_PPN_LHX(__lhxw, __lhxs) \
200
+ (APLIC_xMSICFGADDR_PPN_LHX_MASK(__lhxw) << \
201
+ APLIC_xMSICFGADDR_PPN_LHX_SHIFT(__lhxs))
202
+
203
+#define APLIC_xMSICFGADDR_PPN_HHX_MASK(__hhxw) \
204
+ ((1UL << (__hhxw)) - 1)
205
+#define APLIC_xMSICFGADDR_PPN_HHX_SHIFT(__hhxs) \
206
+ ((__hhxs) + APLIC_xMSICFGADDR_PPN_SHIFT)
207
+#define APLIC_xMSICFGADDR_PPN_HHX(__hhxw, __hhxs) \
208
+ (APLIC_xMSICFGADDR_PPN_HHX_MASK(__hhxw) << \
209
+ APLIC_xMSICFGADDR_PPN_HHX_SHIFT(__hhxs))
210
+
211
+#define APLIC_xMSICFGADDRH_VALID_MASK \
212
+ (APLIC_xMSICFGADDRH_L | \
213
+ (APLIC_xMSICFGADDRH_HHXS_MASK << APLIC_xMSICFGADDRH_HHXS_SHIFT) | \
214
+ (APLIC_xMSICFGADDRH_LHXS_MASK << APLIC_xMSICFGADDRH_LHXS_SHIFT) | \
215
+ (APLIC_xMSICFGADDRH_HHXW_MASK << APLIC_xMSICFGADDRH_HHXW_SHIFT) | \
216
+ (APLIC_xMSICFGADDRH_LHXW_MASK << APLIC_xMSICFGADDRH_LHXW_SHIFT) | \
217
+ APLIC_xMSICFGADDRH_BAPPN_MASK)
218
+
219
+#define APLIC_SETIP_BASE 0x1c00
220
+#define APLIC_SETIPNUM 0x1cdc
221
+
222
+#define APLIC_CLRIP_BASE 0x1d00
223
+#define APLIC_CLRIPNUM 0x1ddc
224
+
225
+#define APLIC_SETIE_BASE 0x1e00
226
+#define APLIC_SETIENUM 0x1edc
227
+
228
+#define APLIC_CLRIE_BASE 0x1f00
229
+#define APLIC_CLRIENUM 0x1fdc
230
+
231
+#define APLIC_SETIPNUM_LE 0x2000
232
+#define APLIC_SETIPNUM_BE 0x2004
233
+
234
+#define APLIC_ISTATE_PENDING (1U << 0)
235
+#define APLIC_ISTATE_ENABLED (1U << 1)
236
+#define APLIC_ISTATE_ENPEND (APLIC_ISTATE_ENABLED | \
237
+ APLIC_ISTATE_PENDING)
238
+#define APLIC_ISTATE_INPUT (1U << 8)
239
+
240
+#define APLIC_GENMSI 0x3000
241
+
242
+#define APLIC_TARGET_BASE 0x3004
243
+#define APLIC_TARGET_HART_IDX_SHIFT 18
244
+#define APLIC_TARGET_HART_IDX_MASK 0x3fff
245
+#define APLIC_TARGET_GUEST_IDX_SHIFT 12
246
+#define APLIC_TARGET_GUEST_IDX_MASK 0x3f
247
+#define APLIC_TARGET_IPRIO_MASK 0xff
248
+#define APLIC_TARGET_EIID_MASK 0x7ff
249
+
250
+#define APLIC_IDC_BASE 0x4000
251
+#define APLIC_IDC_SIZE 32
252
+
253
+#define APLIC_IDC_IDELIVERY 0x00
254
+
255
+#define APLIC_IDC_IFORCE 0x04
256
+
257
+#define APLIC_IDC_ITHRESHOLD 0x08
258
+
259
+#define APLIC_IDC_TOPI 0x18
260
+#define APLIC_IDC_TOPI_ID_SHIFT 16
261
+#define APLIC_IDC_TOPI_ID_MASK 0x3ff
262
+#define APLIC_IDC_TOPI_PRIO_MASK 0xff
263
+
264
+#define APLIC_IDC_CLAIMI 0x1c
265
+
266
+static uint32_t riscv_aplic_read_input_word(RISCVAPLICState *aplic,
267
+ uint32_t word)
268
+{
269
+ uint32_t i, irq, ret = 0;
270
+
271
+ for (i = 0; i < 32; i++) {
272
+ irq = word * 32 + i;
273
+ if (!irq || aplic->num_irqs <= irq) {
274
+ continue;
546
+ continue;
275
+ }
547
+ }
276
+
548
+ fn(vd, vs1, vs2, i);
277
+ ret |= ((aplic->state[irq] & APLIC_ISTATE_INPUT) ? 1 : 0) << i;
278
+ }
549
+ }
279
+
550
+ env->vstart = 0;
280
+ return ret;
551
+ /* set tail elements to 1s */
281
+}
552
+ vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
282
+
553
+}
283
+static uint32_t riscv_aplic_read_pending_word(RISCVAPLICState *aplic,
554
+
284
+ uint32_t word)
555
+void do_vext_vx(void *vd, void *v0, target_long s1, void *vs2,
285
+{
556
+ CPURISCVState *env, uint32_t desc,
286
+ uint32_t i, irq, ret = 0;
557
+ opivx2_fn fn, uint32_t esz)
287
+
558
+{
288
+ for (i = 0; i < 32; i++) {
559
+ uint32_t vm = vext_vm(desc);
289
+ irq = word * 32 + i;
560
+ uint32_t vl = env->vl;
290
+ if (!irq || aplic->num_irqs <= irq) {
561
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
562
+ uint32_t vta = vext_vta(desc);
563
+ uint32_t vma = vext_vma(desc);
564
+ uint32_t i;
565
+
566
+ for (i = env->vstart; i < vl; i++) {
567
+ if (!vm && !vext_elem_mask(v0, i)) {
568
+ /* set masked-off elements to 1s */
569
+ vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
291
+ continue;
570
+ continue;
292
+ }
571
+ }
293
+
572
+ fn(vd, s1, vs2, i);
294
+ ret |= ((aplic->state[irq] & APLIC_ISTATE_PENDING) ? 1 : 0) << i;
295
+ }
573
+ }
296
+
574
+ env->vstart = 0;
297
+ return ret;
575
+ /* set tail elements to 1s */
298
+}
576
+ vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
299
+
577
+}
300
+static void riscv_aplic_set_pending_raw(RISCVAPLICState *aplic,
578
diff --git a/target/riscv/meson.build b/target/riscv/meson.build
301
+ uint32_t irq, bool pending)
302
+{
303
+ if (pending) {
304
+ aplic->state[irq] |= APLIC_ISTATE_PENDING;
305
+ } else {
306
+ aplic->state[irq] &= ~APLIC_ISTATE_PENDING;
307
+ }
308
+}
309
+
310
+static void riscv_aplic_set_pending(RISCVAPLICState *aplic,
+                                    uint32_t irq, bool pending)
+{
+    uint32_t sourcecfg, sm;
+
+    if ((irq <= 0) || (aplic->num_irqs <= irq)) {
+        return;
+    }
+
+    sourcecfg = aplic->sourcecfg[irq];
+    if (sourcecfg & APLIC_SOURCECFG_D) {
+        return;
+    }
+
+    sm = sourcecfg & APLIC_SOURCECFG_SM_MASK;
+    if ((sm == APLIC_SOURCECFG_SM_INACTIVE) ||
+        ((!aplic->msimode || (aplic->msimode && !pending)) &&
+         ((sm == APLIC_SOURCECFG_SM_LEVEL_HIGH) ||
+          (sm == APLIC_SOURCECFG_SM_LEVEL_LOW)))) {
+        return;
+    }
+
+    riscv_aplic_set_pending_raw(aplic, irq, pending);
+}
+
+static void riscv_aplic_set_pending_word(RISCVAPLICState *aplic,
+                                         uint32_t word, uint32_t value,
+                                         bool pending)
+{
+    uint32_t i, irq;
+
+    for (i = 0; i < 32; i++) {
+        irq = word * 32 + i;
+        if (!irq || aplic->num_irqs <= irq) {
+            continue;
+        }
+
+        if (value & (1U << i)) {
+            riscv_aplic_set_pending(aplic, irq, pending);
+        }
+    }
+}
+
+static uint32_t riscv_aplic_read_enabled_word(RISCVAPLICState *aplic,
+                                              int word)
+{
+    uint32_t i, irq, ret = 0;
+
+    for (i = 0; i < 32; i++) {
+        irq = word * 32 + i;
+        if (!irq || aplic->num_irqs <= irq) {
+            continue;
+        }
+
+        ret |= ((aplic->state[irq] & APLIC_ISTATE_ENABLED) ? 1 : 0) << i;
+    }
+
+    return ret;
+}
+
+static void riscv_aplic_set_enabled_raw(RISCVAPLICState *aplic,
+                                        uint32_t irq, bool enabled)
+{
+    if (enabled) {
+        aplic->state[irq] |= APLIC_ISTATE_ENABLED;
+    } else {
+        aplic->state[irq] &= ~APLIC_ISTATE_ENABLED;
+    }
+}
+
+static void riscv_aplic_set_enabled(RISCVAPLICState *aplic,
+                                    uint32_t irq, bool enabled)
+{
+    uint32_t sourcecfg, sm;
+
+    if ((irq <= 0) || (aplic->num_irqs <= irq)) {
+        return;
+    }
+
+    sourcecfg = aplic->sourcecfg[irq];
+    if (sourcecfg & APLIC_SOURCECFG_D) {
+        return;
+    }
+
+    sm = sourcecfg & APLIC_SOURCECFG_SM_MASK;
+    if (sm == APLIC_SOURCECFG_SM_INACTIVE) {
+        return;
+    }
+
+    riscv_aplic_set_enabled_raw(aplic, irq, enabled);
+}
+
+static void riscv_aplic_set_enabled_word(RISCVAPLICState *aplic,
+                                         uint32_t word, uint32_t value,
+                                         bool enabled)
+{
+    uint32_t i, irq;
+
+    for (i = 0; i < 32; i++) {
+        irq = word * 32 + i;
+        if (!irq || aplic->num_irqs <= irq) {
+            continue;
+        }
+
+        if (value & (1U << i)) {
+            riscv_aplic_set_enabled(aplic, irq, enabled);
+        }
+    }
+}
+
+static void riscv_aplic_msi_send(RISCVAPLICState *aplic,
+                                 uint32_t hart_idx, uint32_t guest_idx,
+                                 uint32_t eiid)
+{
+    uint64_t addr;
+    MemTxResult result;
+    RISCVAPLICState *aplic_m;
+    uint32_t lhxs, lhxw, hhxs, hhxw, group_idx, msicfgaddr, msicfgaddrH;
+
+    aplic_m = aplic;
+    while (aplic_m && !aplic_m->mmode) {
+        aplic_m = aplic_m->parent;
+    }
+    if (!aplic_m) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: m-level APLIC not found\n",
+                      __func__);
+        return;
+    }
+
+    if (aplic->mmode) {
+        msicfgaddr = aplic_m->mmsicfgaddr;
+        msicfgaddrH = aplic_m->mmsicfgaddrH;
+    } else {
+        msicfgaddr = aplic_m->smsicfgaddr;
+        msicfgaddrH = aplic_m->smsicfgaddrH;
+    }
+
+    lhxs = (msicfgaddrH >> APLIC_xMSICFGADDRH_LHXS_SHIFT) &
+            APLIC_xMSICFGADDRH_LHXS_MASK;
+    lhxw = (msicfgaddrH >> APLIC_xMSICFGADDRH_LHXW_SHIFT) &
+            APLIC_xMSICFGADDRH_LHXW_MASK;
+    hhxs = (msicfgaddrH >> APLIC_xMSICFGADDRH_HHXS_SHIFT) &
+            APLIC_xMSICFGADDRH_HHXS_MASK;
+    hhxw = (msicfgaddrH >> APLIC_xMSICFGADDRH_HHXW_SHIFT) &
+            APLIC_xMSICFGADDRH_HHXW_MASK;
+
+    group_idx = hart_idx >> lhxw;
+    hart_idx &= APLIC_xMSICFGADDR_PPN_LHX_MASK(lhxw);
+
+    addr = msicfgaddr;
+    addr |= ((uint64_t)(msicfgaddrH & APLIC_xMSICFGADDRH_BAPPN_MASK)) << 32;
+    addr |= ((uint64_t)(group_idx & APLIC_xMSICFGADDR_PPN_HHX_MASK(hhxw))) <<
+             APLIC_xMSICFGADDR_PPN_HHX_SHIFT(hhxs);
+    addr |= ((uint64_t)(hart_idx & APLIC_xMSICFGADDR_PPN_LHX_MASK(lhxw))) <<
+             APLIC_xMSICFGADDR_PPN_LHX_SHIFT(lhxs);
+    addr |= (uint64_t)(guest_idx & APLIC_xMSICFGADDR_PPN_HART(lhxs));
+    addr <<= APLIC_xMSICFGADDR_PPN_SHIFT;
+
+    address_space_stl_le(&address_space_memory, addr,
+                         eiid, MEMTXATTRS_UNSPECIFIED, &result);
+    if (result != MEMTX_OK) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: MSI write failed for "
+                      "hart_index=%d guest_index=%d eiid=%d\n",
+                      __func__, hart_idx, guest_idx, eiid);
+    }
+}
+
+static void riscv_aplic_msi_irq_update(RISCVAPLICState *aplic, uint32_t irq)
+{
+    uint32_t hart_idx, guest_idx, eiid;
+
+    if (!aplic->msimode || (aplic->num_irqs <= irq) ||
+        !(aplic->domaincfg & APLIC_DOMAINCFG_IE)) {
+        return;
+    }
+
+    if ((aplic->state[irq] & APLIC_ISTATE_ENPEND) != APLIC_ISTATE_ENPEND) {
+        return;
+    }
+
+    riscv_aplic_set_pending_raw(aplic, irq, false);
+
+    hart_idx = aplic->target[irq] >> APLIC_TARGET_HART_IDX_SHIFT;
+    hart_idx &= APLIC_TARGET_HART_IDX_MASK;
+    if (aplic->mmode) {
+        /* M-level APLIC ignores guest_index */
+        guest_idx = 0;
+    } else {
+        guest_idx = aplic->target[irq] >> APLIC_TARGET_GUEST_IDX_SHIFT;
+        guest_idx &= APLIC_TARGET_GUEST_IDX_MASK;
+    }
+    eiid = aplic->target[irq] & APLIC_TARGET_EIID_MASK;
+    riscv_aplic_msi_send(aplic, hart_idx, guest_idx, eiid);
+}
+
+static uint32_t riscv_aplic_idc_topi(RISCVAPLICState *aplic, uint32_t idc)
+{
+    uint32_t best_irq, best_iprio;
+    uint32_t irq, iprio, ihartidx, ithres;
+
+    if (aplic->num_harts <= idc) {
+        return 0;
+    }
+
+    ithres = aplic->ithreshold[idc];
+    best_irq = best_iprio = UINT32_MAX;
+    for (irq = 1; irq < aplic->num_irqs; irq++) {
+        if ((aplic->state[irq] & APLIC_ISTATE_ENPEND) !=
+            APLIC_ISTATE_ENPEND) {
+            continue;
+        }
+
+        ihartidx = aplic->target[irq] >> APLIC_TARGET_HART_IDX_SHIFT;
+        ihartidx &= APLIC_TARGET_HART_IDX_MASK;
+        if (ihartidx != idc) {
+            continue;
+        }
+
+        iprio = aplic->target[irq] & aplic->iprio_mask;
+        if (ithres && iprio >= ithres) {
+            continue;
+        }
+
+        if (iprio < best_iprio) {
+            best_irq = irq;
+            best_iprio = iprio;
+        }
+    }
+
+    if (best_irq < aplic->num_irqs && best_iprio <= aplic->iprio_mask) {
+        return (best_irq << APLIC_IDC_TOPI_ID_SHIFT) | best_iprio;
+    }
+
+    return 0;
+}
+
+static void riscv_aplic_idc_update(RISCVAPLICState *aplic, uint32_t idc)
+{
+    uint32_t topi;
+
+    if (aplic->msimode || aplic->num_harts <= idc) {
+        return;
+    }
+
+    topi = riscv_aplic_idc_topi(aplic, idc);
+    if ((aplic->domaincfg & APLIC_DOMAINCFG_IE) &&
+        aplic->idelivery[idc] &&
+        (aplic->iforce[idc] || topi)) {
+        qemu_irq_raise(aplic->external_irqs[idc]);
+    } else {
+        qemu_irq_lower(aplic->external_irqs[idc]);
+    }
+}
+
+static uint32_t riscv_aplic_idc_claimi(RISCVAPLICState *aplic, uint32_t idc)
+{
+    uint32_t irq, state, sm, topi = riscv_aplic_idc_topi(aplic, idc);
+
+    if (!topi) {
+        aplic->iforce[idc] = 0;
+        return 0;
+    }
+
+    irq = (topi >> APLIC_IDC_TOPI_ID_SHIFT) & APLIC_IDC_TOPI_ID_MASK;
+    sm = aplic->sourcecfg[irq] & APLIC_SOURCECFG_SM_MASK;
+    state = aplic->state[irq];
+    riscv_aplic_set_pending_raw(aplic, irq, false);
+    if ((sm == APLIC_SOURCECFG_SM_LEVEL_HIGH) &&
+        (state & APLIC_ISTATE_INPUT)) {
+        riscv_aplic_set_pending_raw(aplic, irq, true);
+    } else if ((sm == APLIC_SOURCECFG_SM_LEVEL_LOW) &&
+               !(state & APLIC_ISTATE_INPUT)) {
+        riscv_aplic_set_pending_raw(aplic, irq, true);
+    }
+    riscv_aplic_idc_update(aplic, idc);
+
+    return topi;
+}
+
+static void riscv_aplic_request(void *opaque, int irq, int level)
+{
+    bool update = false;
+    RISCVAPLICState *aplic = opaque;
+    uint32_t sourcecfg, childidx, state, idc;
+
+    assert((0 < irq) && (irq < aplic->num_irqs));
+
+    sourcecfg = aplic->sourcecfg[irq];
+    if (sourcecfg & APLIC_SOURCECFG_D) {
+        childidx = sourcecfg & APLIC_SOURCECFG_CHILDIDX_MASK;
+        if (childidx < aplic->num_children) {
+            riscv_aplic_request(aplic->children[childidx], irq, level);
+        }
+        return;
+    }
+
+    state = aplic->state[irq];
+    switch (sourcecfg & APLIC_SOURCECFG_SM_MASK) {
+    case APLIC_SOURCECFG_SM_EDGE_RISE:
+        if ((level > 0) && !(state & APLIC_ISTATE_INPUT) &&
+            !(state & APLIC_ISTATE_PENDING)) {
+            riscv_aplic_set_pending_raw(aplic, irq, true);
+            update = true;
+        }
+        break;
+    case APLIC_SOURCECFG_SM_EDGE_FALL:
+        if ((level <= 0) && (state & APLIC_ISTATE_INPUT) &&
+            !(state & APLIC_ISTATE_PENDING)) {
+            riscv_aplic_set_pending_raw(aplic, irq, true);
+            update = true;
+        }
+        break;
+    case APLIC_SOURCECFG_SM_LEVEL_HIGH:
+        if ((level > 0) && !(state & APLIC_ISTATE_PENDING)) {
+            riscv_aplic_set_pending_raw(aplic, irq, true);
+            update = true;
+        }
+        break;
+    case APLIC_SOURCECFG_SM_LEVEL_LOW:
+        if ((level <= 0) && !(state & APLIC_ISTATE_PENDING)) {
+            riscv_aplic_set_pending_raw(aplic, irq, true);
+            update = true;
+        }
+        break;
+    default:
+        break;
+    }
+
+    if (level <= 0) {
+        aplic->state[irq] &= ~APLIC_ISTATE_INPUT;
+    } else {
+        aplic->state[irq] |= APLIC_ISTATE_INPUT;
+    }
+
+    if (update) {
+        if (aplic->msimode) {
+            riscv_aplic_msi_irq_update(aplic, irq);
+        } else {
+            idc = aplic->target[irq] >> APLIC_TARGET_HART_IDX_SHIFT;
+            idc &= APLIC_TARGET_HART_IDX_MASK;
+            riscv_aplic_idc_update(aplic, idc);
+        }
+    }
+}
+
+static uint64_t riscv_aplic_read(void *opaque, hwaddr addr, unsigned size)
+{
+    uint32_t irq, word, idc;
+    RISCVAPLICState *aplic = opaque;
+
+    /* Reads must be 4 byte words */
+    if ((addr & 0x3) != 0) {
+        goto err;
+    }
+
+    if (addr == APLIC_DOMAINCFG) {
+        return APLIC_DOMAINCFG_RDONLY | aplic->domaincfg |
+               (aplic->msimode ? APLIC_DOMAINCFG_DM : 0);
+    } else if ((APLIC_SOURCECFG_BASE <= addr) &&
+            (addr < (APLIC_SOURCECFG_BASE + (aplic->num_irqs - 1) * 4))) {
+        irq = ((addr - APLIC_SOURCECFG_BASE) >> 2) + 1;
+        return aplic->sourcecfg[irq];
+    } else if (aplic->mmode && aplic->msimode &&
+               (addr == APLIC_MMSICFGADDR)) {
+        return aplic->mmsicfgaddr;
+    } else if (aplic->mmode && aplic->msimode &&
+               (addr == APLIC_MMSICFGADDRH)) {
+        return aplic->mmsicfgaddrH;
+    } else if (aplic->mmode && aplic->msimode &&
+               (addr == APLIC_SMSICFGADDR)) {
+        /*
+         * Registers SMSICFGADDR and SMSICFGADDRH are implemented only if:
+         * (a) the interrupt domain is at machine level
+         * (b) the domain's harts implement supervisor mode
+         * (c) the domain has one or more child supervisor-level domains
+         *     that support MSI delivery mode (domaincfg.DM is not read-
+         *     only zero in at least one of the supervisor-level child
+         *     domains).
+         */
+        return (aplic->num_children) ? aplic->smsicfgaddr : 0;
+    } else if (aplic->mmode && aplic->msimode &&
+               (addr == APLIC_SMSICFGADDRH)) {
+        return (aplic->num_children) ? aplic->smsicfgaddrH : 0;
+    } else if ((APLIC_SETIP_BASE <= addr) &&
+            (addr < (APLIC_SETIP_BASE + aplic->bitfield_words * 4))) {
+        word = (addr - APLIC_SETIP_BASE) >> 2;
+        return riscv_aplic_read_pending_word(aplic, word);
+    } else if (addr == APLIC_SETIPNUM) {
+        return 0;
+    } else if ((APLIC_CLRIP_BASE <= addr) &&
+            (addr < (APLIC_CLRIP_BASE + aplic->bitfield_words * 4))) {
+        word = (addr - APLIC_CLRIP_BASE) >> 2;
+        return riscv_aplic_read_input_word(aplic, word);
+    } else if (addr == APLIC_CLRIPNUM) {
+        return 0;
+    } else if ((APLIC_SETIE_BASE <= addr) &&
+            (addr < (APLIC_SETIE_BASE + aplic->bitfield_words * 4))) {
+        word = (addr - APLIC_SETIE_BASE) >> 2;
+        return riscv_aplic_read_enabled_word(aplic, word);
+    } else if (addr == APLIC_SETIENUM) {
+        return 0;
+    } else if ((APLIC_CLRIE_BASE <= addr) &&
+            (addr < (APLIC_CLRIE_BASE + aplic->bitfield_words * 4))) {
+        return 0;
+    } else if (addr == APLIC_CLRIENUM) {
+        return 0;
+    } else if (addr == APLIC_SETIPNUM_LE) {
+        return 0;
+    } else if (addr == APLIC_SETIPNUM_BE) {
+        return 0;
+    } else if (addr == APLIC_GENMSI) {
+        return (aplic->msimode) ? aplic->genmsi : 0;
+    } else if ((APLIC_TARGET_BASE <= addr) &&
+            (addr < (APLIC_TARGET_BASE + (aplic->num_irqs - 1) * 4))) {
+        irq = ((addr - APLIC_TARGET_BASE) >> 2) + 1;
+        return aplic->target[irq];
+    } else if (!aplic->msimode && (APLIC_IDC_BASE <= addr) &&
+            (addr < (APLIC_IDC_BASE + aplic->num_harts * APLIC_IDC_SIZE))) {
+        idc = (addr - APLIC_IDC_BASE) / APLIC_IDC_SIZE;
+        switch (addr - (APLIC_IDC_BASE + idc * APLIC_IDC_SIZE)) {
+        case APLIC_IDC_IDELIVERY:
+            return aplic->idelivery[idc];
+        case APLIC_IDC_IFORCE:
+            return aplic->iforce[idc];
+        case APLIC_IDC_ITHRESHOLD:
+            return aplic->ithreshold[idc];
+        case APLIC_IDC_TOPI:
+            return riscv_aplic_idc_topi(aplic, idc);
+        case APLIC_IDC_CLAIMI:
+            return riscv_aplic_idc_claimi(aplic, idc);
+        default:
+            goto err;
+        };
+    }
+
+err:
+    qemu_log_mask(LOG_GUEST_ERROR,
+                  "%s: Invalid register read 0x%" HWADDR_PRIx "\n",
+                  __func__, addr);
+    return 0;
+}
+
+static void riscv_aplic_write(void *opaque, hwaddr addr, uint64_t value,
+                              unsigned size)
+{
+    RISCVAPLICState *aplic = opaque;
+    uint32_t irq, word, idc = UINT32_MAX;
+
+    /* Writes must be 4 byte words */
+    if ((addr & 0x3) != 0) {
+        goto err;
+    }
+
+    if (addr == APLIC_DOMAINCFG) {
+        /* Only IE bit writeable at the moment */
+        value &= APLIC_DOMAINCFG_IE;
+        aplic->domaincfg = value;
+    } else if ((APLIC_SOURCECFG_BASE <= addr) &&
+            (addr < (APLIC_SOURCECFG_BASE + (aplic->num_irqs - 1) * 4))) {
+        irq = ((addr - APLIC_SOURCECFG_BASE) >> 2) + 1;
+        if (!aplic->num_children && (value & APLIC_SOURCECFG_D)) {
+            value = 0;
+        }
+        if (value & APLIC_SOURCECFG_D) {
+            value &= (APLIC_SOURCECFG_D | APLIC_SOURCECFG_CHILDIDX_MASK);
+        } else {
+            value &= (APLIC_SOURCECFG_D | APLIC_SOURCECFG_SM_MASK);
+        }
+        aplic->sourcecfg[irq] = value;
+        if ((aplic->sourcecfg[irq] & APLIC_SOURCECFG_D) ||
+            (aplic->sourcecfg[irq] == 0)) {
+            riscv_aplic_set_pending_raw(aplic, irq, false);
+            riscv_aplic_set_enabled_raw(aplic, irq, false);
+        }
+    } else if (aplic->mmode && aplic->msimode &&
+               (addr == APLIC_MMSICFGADDR)) {
+        if (!(aplic->mmsicfgaddrH & APLIC_xMSICFGADDRH_L)) {
+            aplic->mmsicfgaddr = value;
+        }
+    } else if (aplic->mmode && aplic->msimode &&
+               (addr == APLIC_MMSICFGADDRH)) {
+        if (!(aplic->mmsicfgaddrH & APLIC_xMSICFGADDRH_L)) {
+            aplic->mmsicfgaddrH = value & APLIC_xMSICFGADDRH_VALID_MASK;
+        }
+    } else if (aplic->mmode && aplic->msimode &&
+               (addr == APLIC_SMSICFGADDR)) {
+        /*
+         * Registers SMSICFGADDR and SMSICFGADDRH are implemented only if:
+         * (a) the interrupt domain is at machine level
+         * (b) the domain's harts implement supervisor mode
+         * (c) the domain has one or more child supervisor-level domains
+         *     that support MSI delivery mode (domaincfg.DM is not read-
+         *     only zero in at least one of the supervisor-level child
+         *     domains).
+         */
+        if (aplic->num_children &&
+            !(aplic->smsicfgaddrH & APLIC_xMSICFGADDRH_L)) {
+            aplic->smsicfgaddr = value;
+        }
+    } else if (aplic->mmode && aplic->msimode &&
+               (addr == APLIC_SMSICFGADDRH)) {
+        if (aplic->num_children &&
+            !(aplic->smsicfgaddrH & APLIC_xMSICFGADDRH_L)) {
+            aplic->smsicfgaddrH = value & APLIC_xMSICFGADDRH_VALID_MASK;
+        }
+    } else if ((APLIC_SETIP_BASE <= addr) &&
+            (addr < (APLIC_SETIP_BASE + aplic->bitfield_words * 4))) {
+        word = (addr - APLIC_SETIP_BASE) >> 2;
+        riscv_aplic_set_pending_word(aplic, word, value, true);
+    } else if (addr == APLIC_SETIPNUM) {
+        riscv_aplic_set_pending(aplic, value, true);
+    } else if ((APLIC_CLRIP_BASE <= addr) &&
+            (addr < (APLIC_CLRIP_BASE + aplic->bitfield_words * 4))) {
+        word = (addr - APLIC_CLRIP_BASE) >> 2;
+        riscv_aplic_set_pending_word(aplic, word, value, false);
+    } else if (addr == APLIC_CLRIPNUM) {
+        riscv_aplic_set_pending(aplic, value, false);
+    } else if ((APLIC_SETIE_BASE <= addr) &&
+            (addr < (APLIC_SETIE_BASE + aplic->bitfield_words * 4))) {
+        word = (addr - APLIC_SETIE_BASE) >> 2;
+        riscv_aplic_set_enabled_word(aplic, word, value, true);
+    } else if (addr == APLIC_SETIENUM) {
+        riscv_aplic_set_enabled(aplic, value, true);
+    } else if ((APLIC_CLRIE_BASE <= addr) &&
+            (addr < (APLIC_CLRIE_BASE + aplic->bitfield_words * 4))) {
+        word = (addr - APLIC_CLRIE_BASE) >> 2;
+        riscv_aplic_set_enabled_word(aplic, word, value, false);
+    } else if (addr == APLIC_CLRIENUM) {
+        riscv_aplic_set_enabled(aplic, value, false);
+    } else if (addr == APLIC_SETIPNUM_LE) {
+        riscv_aplic_set_pending(aplic, value, true);
+    } else if (addr == APLIC_SETIPNUM_BE) {
+        riscv_aplic_set_pending(aplic, bswap32(value), true);
+    } else if (addr == APLIC_GENMSI) {
+        if (aplic->msimode) {
+            aplic->genmsi = value & ~(APLIC_TARGET_GUEST_IDX_MASK <<
+                                      APLIC_TARGET_GUEST_IDX_SHIFT);
+            riscv_aplic_msi_send(aplic,
+                                 value >> APLIC_TARGET_HART_IDX_SHIFT,
+                                 0,
+                                 value & APLIC_TARGET_EIID_MASK);
+        }
+    } else if ((APLIC_TARGET_BASE <= addr) &&
+            (addr < (APLIC_TARGET_BASE + (aplic->num_irqs - 1) * 4))) {
+        irq = ((addr - APLIC_TARGET_BASE) >> 2) + 1;
+        if (aplic->msimode) {
+            aplic->target[irq] = value;
+        } else {
+            aplic->target[irq] = (value & ~APLIC_TARGET_IPRIO_MASK) |
+                                 ((value & aplic->iprio_mask) ?
+                                  (value & aplic->iprio_mask) : 1);
+        }
+    } else if (!aplic->msimode && (APLIC_IDC_BASE <= addr) &&
+            (addr < (APLIC_IDC_BASE + aplic->num_harts * APLIC_IDC_SIZE))) {
+        idc = (addr - APLIC_IDC_BASE) / APLIC_IDC_SIZE;
+        switch (addr - (APLIC_IDC_BASE + idc * APLIC_IDC_SIZE)) {
+        case APLIC_IDC_IDELIVERY:
+            aplic->idelivery[idc] = value & 0x1;
+            break;
+        case APLIC_IDC_IFORCE:
+            aplic->iforce[idc] = value & 0x1;
+            break;
+        case APLIC_IDC_ITHRESHOLD:
+            aplic->ithreshold[idc] = value & aplic->iprio_mask;
+            break;
+        default:
+            goto err;
+        };
+    } else {
+        goto err;
+    }
+
+    if (aplic->msimode) {
+        for (irq = 1; irq < aplic->num_irqs; irq++) {
+            riscv_aplic_msi_irq_update(aplic, irq);
+        }
+    } else {
+        if (idc == UINT32_MAX) {
+            for (idc = 0; idc < aplic->num_harts; idc++) {
+                riscv_aplic_idc_update(aplic, idc);
+            }
+        } else {
+            riscv_aplic_idc_update(aplic, idc);
+        }
+    }
+
+    return;
+
+err:
+    qemu_log_mask(LOG_GUEST_ERROR,
+                  "%s: Invalid register write 0x%" HWADDR_PRIx "\n",
+                  __func__, addr);
+}
+
+static const MemoryRegionOps riscv_aplic_ops = {
+    .read = riscv_aplic_read,
+    .write = riscv_aplic_write,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 4,
+        .max_access_size = 4
+    }
+};
+
+static void riscv_aplic_realize(DeviceState *dev, Error **errp)
+{
+    uint32_t i;
+    RISCVAPLICState *aplic = RISCV_APLIC(dev);
+
+    aplic->bitfield_words = (aplic->num_irqs + 31) >> 5;
+    aplic->sourcecfg = g_new0(uint32_t, aplic->num_irqs);
+    aplic->state = g_new(uint32_t, aplic->num_irqs);
+    aplic->target = g_new0(uint32_t, aplic->num_irqs);
+    if (!aplic->msimode) {
+        for (i = 0; i < aplic->num_irqs; i++) {
+            aplic->target[i] = 1;
+        }
+    }
+    aplic->idelivery = g_new0(uint32_t, aplic->num_harts);
+    aplic->iforce = g_new0(uint32_t, aplic->num_harts);
+    aplic->ithreshold = g_new0(uint32_t, aplic->num_harts);
+
+    memory_region_init_io(&aplic->mmio, OBJECT(dev), &riscv_aplic_ops, aplic,
+                          TYPE_RISCV_APLIC, aplic->aperture_size);
+    sysbus_init_mmio(SYS_BUS_DEVICE(dev), &aplic->mmio);
+
+    /*
+     * Only root APLICs have hardware IRQ lines. All non-root APLICs
+     * have IRQ lines delegated by their parent APLIC.
+     */
+    if (!aplic->parent) {
+        qdev_init_gpio_in(dev, riscv_aplic_request, aplic->num_irqs);
+    }
+
+    /* Create output IRQ lines for non-MSI mode */
+    if (!aplic->msimode) {
+        aplic->external_irqs = g_malloc(sizeof(qemu_irq) * aplic->num_harts);
+        qdev_init_gpio_out(dev, aplic->external_irqs, aplic->num_harts);
+
+        /* Claim the CPU interrupt to be triggered by this APLIC */
+        for (i = 0; i < aplic->num_harts; i++) {
+            RISCVCPU *cpu = RISCV_CPU(qemu_get_cpu(aplic->hartid_base + i));
+            if (riscv_cpu_claim_interrupts(cpu,
+                (aplic->mmode) ? MIP_MEIP : MIP_SEIP) < 0) {
+                error_report("%s already claimed",
+                             (aplic->mmode) ? "MEIP" : "SEIP");
+                exit(1);
+            }
+        }
+    }
+
+    msi_nonbroken = true;
+}
+
+static Property riscv_aplic_properties[] = {
+    DEFINE_PROP_UINT32("aperture-size", RISCVAPLICState, aperture_size, 0),
+    DEFINE_PROP_UINT32("hartid-base", RISCVAPLICState, hartid_base, 0),
+    DEFINE_PROP_UINT32("num-harts", RISCVAPLICState, num_harts, 0),
+    DEFINE_PROP_UINT32("iprio-mask", RISCVAPLICState, iprio_mask, 0),
+    DEFINE_PROP_UINT32("num-irqs", RISCVAPLICState, num_irqs, 0),
+    DEFINE_PROP_BOOL("msimode", RISCVAPLICState, msimode, 0),
+    DEFINE_PROP_BOOL("mmode", RISCVAPLICState, mmode, 0),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static const VMStateDescription vmstate_riscv_aplic = {
+    .name = "riscv_aplic",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+            VMSTATE_UINT32(domaincfg, RISCVAPLICState),
+            VMSTATE_UINT32(mmsicfgaddr, RISCVAPLICState),
+            VMSTATE_UINT32(mmsicfgaddrH, RISCVAPLICState),
+            VMSTATE_UINT32(smsicfgaddr, RISCVAPLICState),
+            VMSTATE_UINT32(smsicfgaddrH, RISCVAPLICState),
+            VMSTATE_UINT32(genmsi, RISCVAPLICState),
+            VMSTATE_VARRAY_UINT32(sourcecfg, RISCVAPLICState,
+                                  num_irqs, 0,
+                                  vmstate_info_uint32, uint32_t),
+            VMSTATE_VARRAY_UINT32(state, RISCVAPLICState,
+                                  num_irqs, 0,
+                                  vmstate_info_uint32, uint32_t),
+            VMSTATE_VARRAY_UINT32(target, RISCVAPLICState,
+                                  num_irqs, 0,
+                                  vmstate_info_uint32, uint32_t),
+            VMSTATE_VARRAY_UINT32(idelivery, RISCVAPLICState,
+                                  num_harts, 0,
+                                  vmstate_info_uint32, uint32_t),
+            VMSTATE_VARRAY_UINT32(iforce, RISCVAPLICState,
+                                  num_harts, 0,
+                                  vmstate_info_uint32, uint32_t),
+            VMSTATE_VARRAY_UINT32(ithreshold, RISCVAPLICState,
+                                  num_harts, 0,
+                                  vmstate_info_uint32, uint32_t),
+            VMSTATE_END_OF_LIST()
+        }
+};
+
+static void riscv_aplic_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    device_class_set_props(dc, riscv_aplic_properties);
+    dc->realize = riscv_aplic_realize;
+    dc->vmsd = &vmstate_riscv_aplic;
+}
+
+static const TypeInfo riscv_aplic_info = {
+    .name          = TYPE_RISCV_APLIC,
+    .parent        = TYPE_SYS_BUS_DEVICE,
+    .instance_size = sizeof(RISCVAPLICState),
+    .class_init    = riscv_aplic_class_init,
+};
+
+static void riscv_aplic_register_types(void)
+{
+    type_register_static(&riscv_aplic_info);
+}
+
+type_init(riscv_aplic_register_types)
+
+/*
+ * Add a APLIC device to another APLIC device as child for
+ * interrupt delegation.
+ */
+void riscv_aplic_add_child(DeviceState *parent, DeviceState *child)
+{
+    RISCVAPLICState *caplic, *paplic;
+
+    assert(parent && child);
+    caplic = RISCV_APLIC(child);
+    paplic = RISCV_APLIC(parent);
+
+    assert(paplic->num_irqs == caplic->num_irqs);
+    assert(paplic->num_children <= QEMU_APLIC_MAX_CHILDREN);
+
+    caplic->parent = paplic;
+    paplic->children[paplic->num_children] = caplic;
+    paplic->num_children++;
+}
+
+/*
+ * Create APLIC device.
+ */
+DeviceState *riscv_aplic_create(hwaddr addr, hwaddr size,
+    uint32_t hartid_base, uint32_t num_harts, uint32_t num_sources,
+    uint32_t iprio_bits, bool msimode, bool mmode, DeviceState *parent)
+{
+    DeviceState *dev = qdev_new(TYPE_RISCV_APLIC);
+    uint32_t i;
+
+    assert(num_harts < APLIC_MAX_IDC);
+    assert((APLIC_IDC_BASE + (num_harts * APLIC_IDC_SIZE)) <= size);
+    assert(num_sources < APLIC_MAX_SOURCE);
+    assert(APLIC_MIN_IPRIO_BITS <= iprio_bits);
+    assert(iprio_bits <= APLIC_MAX_IPRIO_BITS);
+
+    qdev_prop_set_uint32(dev, "aperture-size", size);
+    qdev_prop_set_uint32(dev, "hartid-base", hartid_base);
+    qdev_prop_set_uint32(dev, "num-harts", num_harts);
+    qdev_prop_set_uint32(dev, "iprio-mask", ((1U << iprio_bits) - 1));
+    qdev_prop_set_uint32(dev, "num-irqs", num_sources + 1);
+    qdev_prop_set_bit(dev, "msimode", msimode);
+    qdev_prop_set_bit(dev, "mmode", mmode);
+
+    sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
+    sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, addr);
+
+    if (parent) {
+        riscv_aplic_add_child(parent, dev);
+    }
+
+    if (!msimode) {
+        for (i = 0; i < num_harts; i++) {
+            CPUState *cpu = qemu_get_cpu(hartid_base + i);
+
+            qdev_connect_gpio_out_named(dev, NULL, i,
+                                        qdev_get_gpio_in(DEVICE(cpu),
+                                            (mmode) ? IRQ_M_EXT : IRQ_S_EXT));
+        }
+    }
+
+    return dev;
+}
diff --git a/hw/intc/Kconfig b/hw/intc/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/Kconfig
+++ b/hw/intc/Kconfig
@@ -XXX,XX +XXX,XX @@ config LOONGSON_LIOINTC
 config RISCV_ACLINT
     bool
 
+config RISCV_APLIC
+    bool
+
 config SIFIVE_PLIC
     bool
diff --git a/hw/intc/meson.build b/hw/intc/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/meson.build
+++ b/hw/intc/meson.build
@@ -XXX,XX +XXX,XX @@ specific_ss.add(when: 'CONFIG_S390_FLIC', if_true: files('s390_flic.c'))
 specific_ss.add(when: 'CONFIG_S390_FLIC_KVM', if_true: files('s390_flic_kvm.c'))
 specific_ss.add(when: 'CONFIG_SH_INTC', if_true: files('sh_intc.c'))
 specific_ss.add(when: 'CONFIG_RISCV_ACLINT', if_true: files('riscv_aclint.c'))
+specific_ss.add(when: 'CONFIG_RISCV_APLIC', if_true: files('riscv_aplic.c'))
 specific_ss.add(when: 'CONFIG_SIFIVE_PLIC', if_true: files('sifive_plic.c'))
 specific_ss.add(when: 'CONFIG_XICS', if_true: files('xics.c'))
 specific_ss.add(when: ['CONFIG_KVM', 'CONFIG_XICS'],
--
2.34.1
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/meson.build
+++ b/target/riscv/meson.build
@@ -XXX,XX +XXX,XX @@ riscv_ss.add(files(
   'gdbstub.c',
   'op_helper.c',
   'vector_helper.c',
+  'vector_internals.c',
   'bitmanip_helper.c',
   'translate.c',
   'm128_helper.c',
--
2.41.0
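The IDC "topi" scan in the riscv_aplic_idc_topi() hunk above (pick the pending-and-enabled source with the lowest, i.e. best, priority that is not masked by the per-IDC threshold) can be modelled outside QEMU. This is a minimal sketch under stated assumptions: the flat arrays, MODEL_* constants and the 16-bit ID shift are hypothetical stand-ins for RISCVAPLICState fields and APLIC_IDC_TOPI_ID_SHIFT; only the selection logic mirrors the patch.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the device state in the patch above. */
#define MODEL_NUM_IRQS   8
#define MODEL_IPRIO_MASK 0xff
#define MODEL_ID_SHIFT   16

/*
 * Return (best_irq << MODEL_ID_SHIFT) | best_iprio, or 0 if nothing
 * is deliverable. enpend[irq] models "enabled and pending"; iprio[irq]
 * models the per-source priority from the target register.
 */
static uint32_t model_topi(const int *enpend, const uint32_t *iprio,
                           uint32_t ithres)
{
    uint32_t best_irq = UINT32_MAX, best_iprio = UINT32_MAX;

    for (uint32_t irq = 1; irq < MODEL_NUM_IRQS; irq++) {
        if (!enpend[irq]) {
            continue; /* source is not both enabled and pending */
        }
        if (ithres && iprio[irq] >= ithres) {
            continue; /* masked by the per-IDC threshold (0 = no mask) */
        }
        if (iprio[irq] < best_iprio) {
            best_irq = irq;
            best_iprio = iprio[irq];
        }
    }
    if (best_irq < MODEL_NUM_IRQS && best_iprio <= MODEL_IPRIO_MASK) {
        return (best_irq << MODEL_ID_SHIFT) | best_iprio;
    }
    return 0;
}
```

Note that, as in the patch, a threshold of 0 disables masking and lower numeric priority wins ties by first hit in the scan.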
diff view generated by jsdifflib
From: Philipp Tomsich <philipp.tomsich@vrull.eu>

To split up the decoder into multiple functions (both to support
vendor-specific opcodes in separate files and to simplify maintenance
of orthogonal extensions), this changes decode_op to iterate over a
table of decoders predicated on guard functions.

This commit only adds the new structure and the table, allowing for
the easy addition of additional decoders in the future.

Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220202005249.3566542-6-philipp.tomsich@vrull.eu>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/translate.c | 32 +++++++++++++++++++++++++++-----
 1 file changed, 27 insertions(+), 5 deletions(-)

diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@ static inline bool has_ext(DisasContext *ctx, uint32_t ext)
     return ctx->misa_ext & ext;
 }
 
+static bool always_true_p(DisasContext *ctx __attribute__((__unused__)))
+{
+    return true;
+}
+
 #ifdef TARGET_RISCV32
 #define get_xl(ctx) MXL_RV32
 #elif defined(CONFIG_USER_ONLY)
@@ -XXX,XX +XXX,XX @@ static uint32_t opcode_at(DisasContextBase *dcbase, target_ulong pc)
 
 static void decode_opc(CPURISCVState *env, DisasContext *ctx, uint16_t opcode)
 {
-    /* check for compressed insn */
+    /*
+     * A table with predicate (i.e., guard) functions and decoder functions
+     * that are tested in-order until a decoder matches onto the opcode.
+     */
+    static const struct {
+        bool (*guard_func)(DisasContext *);
+        bool (*decode_func)(DisasContext *, uint32_t);
+    } decoders[] = {
+        { always_true_p,  decode_insn32 },
+    };
+
+    /* Check for compressed insn */
     if (extract16(opcode, 0, 2) != 3) {
         if (!has_ext(ctx, RVC)) {
             gen_exception_illegal(ctx);
         } else {
             ctx->opcode = opcode;
             ctx->pc_succ_insn = ctx->base.pc_next + 2;
-            if (!decode_insn16(ctx, opcode)) {
-                gen_exception_illegal(ctx);
+            if (decode_insn16(ctx, opcode)) {
+                return;
             }
         }
     } else {
@@ -XXX,XX +XXX,XX @@ static void decode_opc(CPURISCVState *env, DisasContext *ctx, uint16_t opcode)
                              ctx->base.pc_next + 2));
         ctx->opcode = opcode32;
         ctx->pc_succ_insn = ctx->base.pc_next + 4;
-        if (!decode_insn32(ctx, opcode32)) {
-            gen_exception_illegal(ctx);
+
+        for (size_t i = 0; i < ARRAY_SIZE(decoders); ++i) {
+            if (decoders[i].guard_func(ctx) &&
+                decoders[i].decode_func(ctx, opcode32)) {
+                return;
+            }
         }
     }
+
+    gen_exception_illegal(ctx);
 }
 
 static void riscv_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
--
2.34.1

From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>

Refactor the non SEW-specific stuff out of `GEN_OPIVV_TRANS` into
function `opivv_trans` (similar to `opivi_trans`). `opivv_trans` will be
used in subsequent vector-crypto commits.

Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
Message-ID: <20230711165917.2629866-3-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvv.c.inc | 62 +++++++++++++------------
 1 file changed, 32 insertions(+), 30 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -XXX,XX +XXX,XX @@ GEN_OPIWX_WIDEN_TRANS(vwadd_wx)
 GEN_OPIWX_WIDEN_TRANS(vwsubu_wx)
 GEN_OPIWX_WIDEN_TRANS(vwsub_wx)
 
+static bool opivv_trans(uint32_t vd, uint32_t vs1, uint32_t vs2, uint32_t vm,
+                        gen_helper_gvec_4_ptr *fn, DisasContext *s)
+{
+    uint32_t data = 0;
+    TCGLabel *over = gen_new_label();
+    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
+    tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
+
+    data = FIELD_DP32(data, VDATA, VM, vm);
+    data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
+    data = FIELD_DP32(data, VDATA, VTA, s->vta);
+    data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);
+    data = FIELD_DP32(data, VDATA, VMA, s->vma);
+    tcg_gen_gvec_4_ptr(vreg_ofs(s, vd), vreg_ofs(s, 0), vreg_ofs(s, vs1),
+                       vreg_ofs(s, vs2), cpu_env, s->cfg_ptr->vlen / 8,
+                       s->cfg_ptr->vlen / 8, data, fn);
+    mark_vs_dirty(s);
+    gen_set_label(over);
+    return true;
+}
+
 /* Vector Integer Add-with-Carry / Subtract-with-Borrow Instructions */
 /* OPIVV without GVEC IR */
-#define GEN_OPIVV_TRANS(NAME, CHECK) \
-static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
-{ \
-    if (CHECK(s, a)) { \
-        uint32_t data = 0; \
-        static gen_helper_gvec_4_ptr * const fns[4] = { \
-            gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
-            gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
-        }; \
-        TCGLabel *over = gen_new_label(); \
-        tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
-        tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
-        \
-        data = FIELD_DP32(data, VDATA, VM, a->vm); \
-        data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
-        data = FIELD_DP32(data, VDATA, VTA, s->vta); \
-        data = \
-            FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);\
-        data = FIELD_DP32(data, VDATA, VMA, s->vma); \
-        tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
-                           vreg_ofs(s, a->rs1), \
-                           vreg_ofs(s, a->rs2), cpu_env, \
-                           s->cfg_ptr->vlen / 8, \
-                           s->cfg_ptr->vlen / 8, data, \
-                           fns[s->sew]); \
-        mark_vs_dirty(s); \
-        gen_set_label(over); \
-        return true; \
-    } \
-    return false; \
+#define GEN_OPIVV_TRANS(NAME, CHECK) \
+static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+{ \
+    if (CHECK(s, a)) { \
+        static gen_helper_gvec_4_ptr * const fns[4] = { \
+            gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
+            gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
+        }; \
+        return opivv_trans(a->rd, a->rs1, a->rs2, a->vm, fns[s->sew], s);\
+    } \
+    return false; \
 }
 
 /*
--
2.41.0
diff view generated by jsdifflib
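The guard/decoder table added to decode_opc() above can be sketched outside of QEMU as a standalone program. Everything here (the `Ctx` struct, the opcode masks, and both decoder bodies) is a hypothetical stand-in, not QEMU's real types or decode logic; only the table-walk pattern itself mirrors the patch:

```c
/*
 * Standalone sketch of the guard-function/decoder-table pattern from the
 * decode_opc() hunk above. Ctx, the opcode spaces, and the decoder bodies
 * are invented stand-ins for illustration only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

typedef struct { bool has_custom_ext; } Ctx;

/* Guard (predicate) functions: decide whether a decoder is even tried. */
static bool always_true_p(Ctx *ctx) { (void)ctx; return true; }
static bool custom_ext_p(Ctx *ctx) { return ctx->has_custom_ext; }

/* Decoders: return true when they recognise the instruction. */
static bool decode_custom(Ctx *ctx, uint32_t insn) {
    (void)ctx;
    return (insn & 0x7f) == 0x0b;   /* hypothetical custom opcode space */
}
static bool decode_base(Ctx *ctx, uint32_t insn) {
    (void)ctx;
    return (insn & 0x7f) == 0x13;   /* hypothetical base opcode space */
}

/* Walk the table in order; 1 = some decoder matched, 0 = illegal insn. */
static int decode(Ctx *ctx, uint32_t insn)
{
    static const struct {
        bool (*guard_func)(Ctx *);
        bool (*decode_func)(Ctx *, uint32_t);
    } decoders[] = {
        { custom_ext_p,  decode_custom }, /* guarded extension, tried first */
        { always_true_p, decode_base },   /* base decoder, always tried */
    };

    for (size_t i = 0; i < ARRAY_SIZE(decoders); ++i) {
        if (decoders[i].guard_func(ctx) && decoders[i].decode_func(ctx, insn)) {
            return 1;
        }
    }
    return 0; /* no decoder matched: illegal instruction */
}
```

The design point the patch makes is that falling through the whole table (rather than testing one decoder's return value inline) leaves a single exit path that raises the illegal-instruction exception, and guarded custom decoders can be prepended without touching that path.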
From: Philipp Tomsich <philipp.tomsich@vrull.eu>

The implementation in trans_{rvi,rvv,rvzfh}.c.inc accesses the shallow
copies (in DisasContext) of some of the elements available in the
RISCVCPUConfig structure. This commit redirects accesses to use the
cfg_ptr copied into DisasContext and removes the shallow copies.

Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220202005249.3566542-4-philipp.tomsich@vrull.eu>
[ Changes by AF:
 - Fixup checkpatch failures
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/translate.c                  |  14 ---
 target/riscv/insn_trans/trans_rvi.c.inc   |   2 +-
 target/riscv/insn_trans/trans_rvv.c.inc   | 146 ++++++++++++++--------
 target/riscv/insn_trans/trans_rvzfh.c.inc |   4 +-
 4 files changed, 97 insertions(+), 69 deletions(-)

diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
     RISCVMXL ol;
     bool virt_enabled;
     const RISCVCPUConfig *cfg_ptr;
-    bool ext_ifencei;
-    bool ext_zfh;
-    bool ext_zfhmin;
-    bool ext_zve32f;
-    bool ext_zve64f;
     bool hlsx;
     /* vector extension */
     bool vill;
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
      */
     int8_t lmul;
     uint8_t sew;
-    uint16_t vlen;
-    uint16_t elen;
     target_ulong vstart;
     bool vl_eq_vlmax;
     uint8_t ntemp;
@@ -XXX,XX +XXX,XX @@ static void riscv_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
     ctx->misa_ext = env->misa_ext;
     ctx->frm = -1;  /* unknown rounding mode */
     ctx->cfg_ptr = &(cpu->cfg);
-    ctx->ext_ifencei = cpu->cfg.ext_ifencei;
-    ctx->ext_zfh = cpu->cfg.ext_zfh;
-    ctx->ext_zfhmin = cpu->cfg.ext_zfhmin;
-    ctx->ext_zve32f = cpu->cfg.ext_zve32f;
-    ctx->ext_zve64f = cpu->cfg.ext_zve64f;
     ctx->mstatus_hs_fs = FIELD_EX32(tb_flags, TB_FLAGS, MSTATUS_HS_FS);
     ctx->mstatus_hs_vs = FIELD_EX32(tb_flags, TB_FLAGS, MSTATUS_HS_VS);
     ctx->hlsx = FIELD_EX32(tb_flags, TB_FLAGS, HLSX);
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvi.c.inc
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_fence(DisasContext *ctx, arg_fence *a)

 static bool trans_fence_i(DisasContext *ctx, arg_fence_i *a)
 {
-    if (!ctx->ext_ifencei) {
+    if (!ctx->cfg_ptr->ext_ifencei) {
         return false;
     }

diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -XXX,XX +XXX,XX @@ static bool require_zve32f(DisasContext *s)
 }

     /* Zve32f doesn't support FP64. (Section 18.2) */
-    return s->ext_zve32f ? s->sew <= MO_32 : true;
+    return s->cfg_ptr->ext_zve32f ? s->sew <= MO_32 : true;
 }

 static bool require_scale_zve32f(DisasContext *s)
@@ -XXX,XX +XXX,XX @@ static bool require_scale_zve32f(DisasContext *s)
 }

     /* Zve32f doesn't support FP64. (Section 18.2) */
-    return s->ext_zve64f ? s->sew <= MO_16 : true;
+    return s->cfg_ptr->ext_zve64f ? s->sew <= MO_16 : true;
 }

 static bool require_zve64f(DisasContext *s)
@@ -XXX,XX +XXX,XX @@ static bool require_zve64f(DisasContext *s)
 }

     /* Zve64f doesn't support FP64. (Section 18.2) */
-    return s->ext_zve64f ? s->sew <= MO_32 : true;
+    return s->cfg_ptr->ext_zve64f ? s->sew <= MO_32 : true;
 }

 static bool require_scale_zve64f(DisasContext *s)
@@ -XXX,XX +XXX,XX @@ static bool require_scale_zve64f(DisasContext *s)
 }

     /* Zve64f doesn't support FP64. (Section 18.2) */
-    return s->ext_zve64f ? s->sew <= MO_16 : true;
+    return s->cfg_ptr->ext_zve64f ? s->sew <= MO_16 : true;
 }

 /* Destination vector register group cannot overlap source mask register. */
@@ -XXX,XX +XXX,XX @@ static bool do_vsetvl(DisasContext *s, int rd, int rs1, TCGv s2)
     TCGv s1, dst;

     if (!require_rvv(s) ||
-        !(has_ext(s, RVV) || s->ext_zve32f || s->ext_zve64f)) {
+        !(has_ext(s, RVV) || s->cfg_ptr->ext_zve32f ||
+          s->cfg_ptr->ext_zve64f)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool do_vsetivli(DisasContext *s, int rd, TCGv s1, TCGv s2)
     TCGv dst;

     if (!require_rvv(s) ||
-        !(has_ext(s, RVV) || s->ext_zve32f || s->ext_zve64f)) {
+        !(has_ext(s, RVV) || s->cfg_ptr->ext_zve32f ||
+          s->cfg_ptr->ext_zve64f)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_vsetivli(DisasContext *s, arg_vsetivli *a)
 /* vector register offset from env */
 static uint32_t vreg_ofs(DisasContext *s, int reg)
 {
-    return offsetof(CPURISCVState, vreg) + reg * s->vlen / 8;
+    return offsetof(CPURISCVState, vreg) + reg * s->cfg_ptr->vlen / 8;
 }

 /* check functions */
@@ -XXX,XX +XXX,XX @@ static bool vext_check_st_index(DisasContext *s, int vd, int vs2, int nf,
      * when XLEN=32. (Section 18.2)
      */
     if (get_xl(s) == MXL_RV32) {
-        ret &= (!has_ext(s, RVV) && s->ext_zve64f ? eew != MO_64 : true);
+        ret &= (!has_ext(s, RVV) &&
+                s->cfg_ptr->ext_zve64f ? eew != MO_64 : true);
     }

     return ret;
@@ -XXX,XX +XXX,XX @@ static bool vext_wide_check_common(DisasContext *s, int vd, int vm)
 {
     return (s->lmul <= 2) &&
            (s->sew < MO_64) &&
-           ((s->sew + 1) <= (s->elen >> 4)) &&
+           ((s->sew + 1) <= (s->cfg_ptr->elen >> 4)) &&
            require_align(vd, s->lmul + 1) &&
            require_vm(vm, vd);
 }
@@ -XXX,XX +XXX,XX @@ static bool vext_narrow_check_common(DisasContext *s, int vd, int vs2,
 {
     return (s->lmul <= 2) &&
            (s->sew < MO_64) &&
-           ((s->sew + 1) <= (s->elen >> 4)) &&
+           ((s->sew + 1) <= (s->cfg_ptr->elen >> 4)) &&
            require_align(vs2, s->lmul + 1) &&
            require_align(vd, s->lmul) &&
            require_vm(vm, vd);
@@ -XXX,XX +XXX,XX @@ static bool ldst_us_trans(uint32_t vd, uint32_t rs1, uint32_t data,
      * The first part is vlen in bytes, encoded in maxsz of simd_desc.
      * The second part is lmul, encoded in data of simd_desc.
      */
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));

     tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
     tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
@@ -XXX,XX +XXX,XX @@ static bool ldst_stride_trans(uint32_t vd, uint32_t rs1, uint32_t rs2,
     mask = tcg_temp_new_ptr();
     base = get_gpr(s, rs1, EXT_NONE);
     stride = get_gpr(s, rs2, EXT_NONE);
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));

     tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
     tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
@@ -XXX,XX +XXX,XX @@ static bool ldst_index_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
     mask = tcg_temp_new_ptr();
     index = tcg_temp_new_ptr();
     base = get_gpr(s, rs1, EXT_NONE);
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));

     tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
     tcg_gen_addi_ptr(index, cpu_env, vreg_ofs(s, vs2));
@@ -XXX,XX +XXX,XX @@ static bool ldff_trans(uint32_t vd, uint32_t rs1, uint32_t data,
     dest = tcg_temp_new_ptr();
     mask = tcg_temp_new_ptr();
     base = get_gpr(s, rs1, EXT_NONE);
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));

     tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
     tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
@@ -XXX,XX +XXX,XX @@ static bool ldst_whole_trans(uint32_t vd, uint32_t rs1, uint32_t nf,

     uint32_t data = FIELD_DP32(0, VDATA, NF, nf);
     dest = tcg_temp_new_ptr();
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));

     base = get_gpr(s, rs1, EXT_NONE);
     tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
@@ -XXX,XX +XXX,XX @@ GEN_LDST_WHOLE_TRANS(vs8r_v, 8, true)
 static inline uint32_t MAXSZ(DisasContext *s)
 {
     int scale = s->lmul - 3;
-    return scale < 0 ? s->vlen >> -scale : s->vlen << scale;
+    return scale < 0 ? s->cfg_ptr->vlen >> -scale : s->cfg_ptr->vlen << scale;
 }

 static bool opivv_check(DisasContext *s, arg_rmrr *a)
@@ -XXX,XX +XXX,XX @@ do_opivv_gvec(DisasContext *s, arg_rmrr *a, GVecGen3Fn *gvec_fn,
         data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
                            vreg_ofs(s, a->rs1), vreg_ofs(s, a->rs2),
-                           cpu_env, s->vlen / 8, s->vlen / 8, data, fn);
+                           cpu_env, s->cfg_ptr->vlen / 8,
+                           s->cfg_ptr->vlen / 8, data, fn);
     }
     mark_vs_dirty(s);
     gen_set_label(over);
@@ -XXX,XX +XXX,XX @@ static bool opivx_trans(uint32_t vd, uint32_t rs1, uint32_t vs2, uint32_t vm,

     data = FIELD_DP32(data, VDATA, VM, vm);
     data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));

     tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
     tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, vs2));
@@ -XXX,XX +XXX,XX @@ static bool opivi_trans(uint32_t vd, uint32_t imm, uint32_t vs2, uint32_t vm,

     data = FIELD_DP32(data, VDATA, VM, vm);
     data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));

     tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
     tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, vs2));
@@ -XXX,XX +XXX,XX @@ static bool do_opivv_widen(DisasContext *s, arg_rmrr *a,
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
                            vreg_ofs(s, a->rs1),
                            vreg_ofs(s, a->rs2),
-                           cpu_env, s->vlen / 8, s->vlen / 8,
+                           cpu_env, s->cfg_ptr->vlen / 8,
+                           s->cfg_ptr->vlen / 8,
                            data, fn);
         mark_vs_dirty(s);
         gen_set_label(over);
@@ -XXX,XX +XXX,XX @@ static bool do_opiwv_widen(DisasContext *s, arg_rmrr *a,
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
                            vreg_ofs(s, a->rs1),
                            vreg_ofs(s, a->rs2),
-                           cpu_env, s->vlen / 8, s->vlen / 8, data, fn);
+                           cpu_env, s->cfg_ptr->vlen / 8,
+                           s->cfg_ptr->vlen / 8, data, fn);
         mark_vs_dirty(s);
         gen_set_label(over);
         return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
                            vreg_ofs(s, a->rs1), \
                            vreg_ofs(s, a->rs2), cpu_env, \
-                           s->vlen / 8, s->vlen / 8, data, \
+                           s->cfg_ptr->vlen / 8, \
+                           s->cfg_ptr->vlen / 8, data, \
                            fns[s->sew]); \
         mark_vs_dirty(s); \
         gen_set_label(over); \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
                            vreg_ofs(s, a->rs1), \
                            vreg_ofs(s, a->rs2), cpu_env, \
-                           s->vlen / 8, s->vlen / 8, data, \
+                           s->cfg_ptr->vlen / 8, \
+                           s->cfg_ptr->vlen / 8, data, \
                            fns[s->sew]); \
         mark_vs_dirty(s); \
         gen_set_label(over); \
@@ -XXX,XX +XXX,XX @@ static bool vmulh_vv_check(DisasContext *s, arg_rmrr *a)
      * are not included for EEW=64 in Zve64*. (Section 18.2)
      */
     return opivv_check(s, a) &&
-           (!has_ext(s, RVV) && s->ext_zve64f ? s->sew != MO_64 : true);
+           (!has_ext(s, RVV) &&
+            s->cfg_ptr->ext_zve64f ? s->sew != MO_64 : true);
 }

 static bool vmulh_vx_check(DisasContext *s, arg_rmrr *a)
@@ -XXX,XX +XXX,XX @@ static bool vmulh_vx_check(DisasContext *s, arg_rmrr *a)
      * are not included for EEW=64 in Zve64*. (Section 18.2)
      */
     return opivx_check(s, a) &&
-           (!has_ext(s, RVV) && s->ext_zve64f ? s->sew != MO_64 : true);
+           (!has_ext(s, RVV) &&
+            s->cfg_ptr->ext_zve64f ? s->sew != MO_64 : true);
 }

 GEN_OPIVV_GVEC_TRANS(vmul_vv, mul)
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_v(DisasContext *s, arg_vmv_v_v *a)
     tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);

         tcg_gen_gvec_2_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),
-                           cpu_env, s->vlen / 8, s->vlen / 8, data,
+                           cpu_env, s->cfg_ptr->vlen / 8,
+                           s->cfg_ptr->vlen / 8, data,
                            fns[s->sew]);
         gen_set_label(over);
 }
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_x(DisasContext *s, arg_vmv_v_x *a)
             gen_helper_vmv_v_x_d,
         };

         tcg_gen_ext_tl_i64(s1_i64, s1);
-        desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+        desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                          s->cfg_ptr->vlen / 8, data));
         tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, a->rd));
         fns[s->sew](dest, s1_i64, cpu_env, desc);

@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_i(DisasContext *s, arg_vmv_v_i *a)

         s1 = tcg_constant_i64(simm);
         dest = tcg_temp_new_ptr();
-        desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+        desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                          s->cfg_ptr->vlen / 8, data));
         tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, a->rd));
         fns[s->sew](dest, s1, cpu_env, desc);

@@ -XXX,XX +XXX,XX @@ static bool vsmul_vv_check(DisasContext *s, arg_rmrr *a)
      * for EEW=64 in Zve64*. (Section 18.2)
      */
     return opivv_check(s, a) &&
-           (!has_ext(s, RVV) && s->ext_zve64f ? s->sew != MO_64 : true);
+           (!has_ext(s, RVV) &&
+            s->cfg_ptr->ext_zve64f ? s->sew != MO_64 : true);
 }

 static bool vsmul_vx_check(DisasContext *s, arg_rmrr *a)
@@ -XXX,XX +XXX,XX @@ static bool vsmul_vx_check(DisasContext *s, arg_rmrr *a)
      * for EEW=64 in Zve64*. (Section 18.2)
      */
     return opivx_check(s, a) &&
-           (!has_ext(s, RVV) && s->ext_zve64f ? s->sew != MO_64 : true);
+           (!has_ext(s, RVV) &&
+            s->cfg_ptr->ext_zve64f ? s->sew != MO_64 : true);
 }

 GEN_OPIVV_TRANS(vsmul_vv, vsmul_vv_check)
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
                            vreg_ofs(s, a->rs1), \
                            vreg_ofs(s, a->rs2), cpu_env, \
-                           s->vlen / 8, s->vlen / 8, data, \
+                           s->cfg_ptr->vlen / 8, \
+                           s->cfg_ptr->vlen / 8, data, \
                            fns[s->sew - 1]); \
         mark_vs_dirty(s); \
         gen_set_label(over); \
@@ -XXX,XX +XXX,XX @@ static bool opfvf_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
     dest = tcg_temp_new_ptr();
     mask = tcg_temp_new_ptr();
     src2 = tcg_temp_new_ptr();
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));

     tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
     tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, vs2));
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
                            vreg_ofs(s, a->rs1), \
                            vreg_ofs(s, a->rs2), cpu_env, \
-                           s->vlen / 8, s->vlen / 8, data, \
+                           s->cfg_ptr->vlen / 8, \
+                           s->cfg_ptr->vlen / 8, data, \
                            fns[s->sew - 1]); \
         mark_vs_dirty(s); \
         gen_set_label(over); \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
         tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
                            vreg_ofs(s, a->rs1), \
                            vreg_ofs(s, a->rs2), cpu_env, \
-                           s->vlen / 8, s->vlen / 8, data, \
+                           s->cfg_ptr->vlen / 8, \
+                           s->cfg_ptr->vlen / 8, data, \
                            fns[s->sew - 1]); \
         mark_vs_dirty(s); \
         gen_set_label(over); \
@@ -XXX,XX +XXX,XX @@ static bool do_opfv(DisasContext *s, arg_rmr *a,
     data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
     tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
                        vreg_ofs(s, a->rs2), cpu_env,
-                       s->vlen / 8, s->vlen / 8, data, fn);
+                       s->cfg_ptr->vlen / 8,
+                       s->cfg_ptr->vlen / 8, data, fn);
     mark_vs_dirty(s);
     gen_set_label(over);
     return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_vfmv_v_f(DisasContext *s, arg_vfmv_v_f *a)
         do_nanbox(s, t1, cpu_fpr[a->rs1]);

         dest = tcg_temp_new_ptr();
-        desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+        desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                          s->cfg_ptr->vlen / 8, data));
         tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, a->rd));

         fns[s->sew - 1](dest, t1, cpu_env, desc);
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
     data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
     tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
                        vreg_ofs(s, a->rs2), cpu_env, \
-                       s->vlen / 8, s->vlen / 8, data, \
+                       s->cfg_ptr->vlen / 8, \
+                       s->cfg_ptr->vlen / 8, data, \
                        fns[s->sew - 1]); \
     mark_vs_dirty(s); \
     gen_set_label(over); \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
     data = FIELD_DP32(data, VDATA, VM, a->vm); \
     tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
                        vreg_ofs(s, a->rs2), cpu_env, \
-                       s->vlen / 8, s->vlen / 8, data, \
+                       s->cfg_ptr->vlen / 8, \
+                       s->cfg_ptr->vlen / 8, data, \
                        fns[s->sew]); \
     mark_vs_dirty(s); \
     gen_set_label(over); \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
     data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
     tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
                        vreg_ofs(s, a->rs2), cpu_env, \
-                       s->vlen / 8, s->vlen / 8, data, \
+                       s->cfg_ptr->vlen / 8, \
+                       s->cfg_ptr->vlen / 8, data, \
                        fns[s->sew - 1]); \
     mark_vs_dirty(s); \
     gen_set_label(over); \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
     data = FIELD_DP32(data, VDATA, VM, a->vm); \
     tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
                        vreg_ofs(s, a->rs2), cpu_env, \
-                       s->vlen / 8, s->vlen / 8, data, \
+                       s->cfg_ptr->vlen / 8, \
+                       s->cfg_ptr->vlen / 8, data, \
                        fns[s->sew]); \
     mark_vs_dirty(s); \
     gen_set_label(over); \
@@ -XXX,XX +XXX,XX @@ GEN_OPIVV_TRANS(vredxor_vs, reduction_check)
 static bool reduction_widen_check(DisasContext *s, arg_rmrr *a)
 {
     return reduction_check(s, a) && (s->sew < MO_64) &&
-           ((s->sew + 1) <= (s->elen >> 4));
+           ((s->sew + 1) <= (s->cfg_ptr->elen >> 4));
 }

 GEN_OPIVV_WIDEN_TRANS(vwredsum_vs, reduction_widen_check)
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_r *a) \
     tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
                        vreg_ofs(s, a->rs1), \
                        vreg_ofs(s, a->rs2), cpu_env, \
-                       s->vlen / 8, s->vlen / 8, data, fn); \
+                       s->cfg_ptr->vlen / 8, \
+                       s->cfg_ptr->vlen / 8, data, fn); \
     mark_vs_dirty(s); \
     gen_set_label(over); \
     return true; \
@@ -XXX,XX +XXX,XX @@ static bool trans_vcpop_m(DisasContext *s, arg_rmr *a)
     mask = tcg_temp_new_ptr();
     src2 = tcg_temp_new_ptr();
     dst = dest_gpr(s, a->rd);
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));

     tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, a->rs2));
     tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
@@ -XXX,XX +XXX,XX @@ static bool trans_vfirst_m(DisasContext *s, arg_rmr *a)
     mask = tcg_temp_new_ptr();
     src2 = tcg_temp_new_ptr();
     dst = dest_gpr(s, a->rd);
-    desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
+    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlen / 8,
+                                      s->cfg_ptr->vlen / 8, data));

     tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, a->rs2));
     tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
     data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
     tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), \
                        vreg_ofs(s, 0), vreg_ofs(s, a->rs2), \
-                       cpu_env, s->vlen / 8, s->vlen / 8, \
+                       cpu_env, s->cfg_ptr->vlen / 8, \
+                       s->cfg_ptr->vlen / 8, \
                        data, fn); \
     mark_vs_dirty(s); \
     gen_set_label(over); \
@@ -XXX,XX +XXX,XX @@ static bool trans_viota_m(DisasContext *s, arg_viota_m *a)
         };
         tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
                            vreg_ofs(s, a->rs2), cpu_env,
-                           s->vlen / 8, s->vlen / 8, data, fns[s->sew]);
+                           s->cfg_ptr->vlen / 8,
+                           s->cfg_ptr->vlen / 8, data, fns[s->sew]);
         mark_vs_dirty(s);
         gen_set_label(over);
         return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_vid_v(DisasContext *s, arg_vid_v *a)
             gen_helper_vid_v_w, gen_helper_vid_v_d,
         };
         tcg_gen_gvec_2_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
-                           cpu_env, s->vlen / 8, s->vlen / 8,
+                           cpu_env, s->cfg_ptr->vlen / 8,
+                           s->cfg_ptr->vlen / 8,
                            data, fns[s->sew]);
         mark_vs_dirty(s);
         gen_set_label(over);
@@ -XXX,XX +XXX,XX @@ static bool trans_vrgather_vx(DisasContext *s, arg_rmrr *a)

     if (a->vm && s->vl_eq_vlmax) {
         int scale = s->lmul - (s->sew + 3);
-        int vlmax = scale < 0 ? s->vlen >> -scale : s->vlen << scale;
+        int vlmax = scale < 0 ?
+                    s->cfg_ptr->vlen >> -scale : s->cfg_ptr->vlen << scale;
         TCGv_i64 dest = tcg_temp_new_i64();

         if (a->rs1 == 0) {
@@ -XXX,XX +XXX,XX @@ static bool trans_vrgather_vi(DisasContext *s, arg_rmrr *a)

     if (a->vm && s->vl_eq_vlmax) {

From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>

Remove the redundant "vl == 0" check which is already included within the vstart >= vl check, when vl == 0.

Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20230711165917.2629866-4-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvv.c.inc | 31 +------------------------
 1 file changed, 1 insertion(+), 30 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -XXX,XX +XXX,XX @@ static bool ldst_us_trans(uint32_t vd, uint32_t rs1, uint32_t data,
     TCGv_i32 desc;

     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

     dest = tcg_temp_new_ptr();
@@ -XXX,XX +XXX,XX @@ static bool ldst_stride_trans(uint32_t vd, uint32_t rs1, uint32_t rs2,
     TCGv_i32 desc;

     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

     dest = tcg_temp_new_ptr();
@@ -XXX,XX +XXX,XX @@ static bool ldst_index_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
     TCGv_i32 desc;

     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

     dest = tcg_temp_new_ptr();
@@ -XXX,XX +XXX,XX @@ static bool ldff_trans(uint32_t vd, uint32_t rs1, uint32_t data,
     TCGv_i32 desc;

     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

     dest = tcg_temp_new_ptr();
@@ -XXX,XX +XXX,XX @@ do_opivv_gvec(DisasContext *s, arg_rmrr *a, GVecGen3Fn *gvec_fn,
         return false;
     }

-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

     if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
@@ -XXX,XX +XXX,XX @@ static bool opivx_trans(uint32_t vd, uint32_t rs1, uint32_t vs2, uint32_t vm,
     uint32_t data = 0;

     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

     dest = tcg_temp_new_ptr();
@@ -XXX,XX +XXX,XX @@ static bool opivi_trans(uint32_t vd, uint32_t imm, uint32_t vs2, uint32_t vm,
     uint32_t data = 0;

     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

     dest = tcg_temp_new_ptr();
@@ -XXX,XX +XXX,XX @@ static bool do_opivv_widen(DisasContext *s, arg_rmrr *a,
     if (checkfn(s, a)) {
         uint32_t data = 0;
         TCGLabel *over = gen_new_label();
-        tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
         tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

         data = FIELD_DP32(data, VDATA, VM, a->vm);
@@ -XXX,XX +XXX,XX @@ static bool do_opiwv_widen(DisasContext *s, arg_rmrr *a,
     if (opiwv_widen_check(s, a)) {
         uint32_t data = 0;
         TCGLabel *over = gen_new_label();
-        tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
         tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

         data = FIELD_DP32(data, VDATA, VM, a->vm);
@@ -XXX,XX +XXX,XX @@ static bool opivv_trans(uint32_t vd, uint32_t vs1, uint32_t vs2, uint32_t vm,
 {
     uint32_t data = 0;
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

     data = FIELD_DP32(data, VDATA, VM, vm);
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
         gen_helper_##NAME##_w, \
     }; \
     TCGLabel *over = gen_new_label(); \
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
     \
     data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_v(DisasContext *s, arg_vmv_v_v *a)
         gen_helper_vmv_v_v_w, gen_helper_vmv_v_v_d,
     };
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

     tcg_gen_gvec_2_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_x(DisasContext *s, arg_vmv_v_x *a)
     vext_check_ss(s, a->rd, 0, 1)) {
         TCGv s1;
         TCGLabel *over = gen_new_label();
-        tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
         tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

         s1 = get_gpr(s, a->rs1, EXT_SIGN);
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_i(DisasContext *s, arg_vmv_v_i *a)
         gen_helper_vmv_v_x_w, gen_helper_vmv_v_x_d,
     };
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

     s1 = tcg_constant_i64(simm);
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
     }; \
     TCGLabel *over = gen_new_label(); \
     gen_set_rm(s, RISCV_FRM_DYN); \
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
     \
     data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -XXX,XX +XXX,XX @@ static bool opfvf_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
     TCGv_i64 t1;

     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

     dest = tcg_temp_new_ptr();
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
     }; \
     TCGLabel *over = gen_new_label(); \
     gen_set_rm(s, RISCV_FRM_DYN); \
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
     \
     data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -XXX,XX +XXX,XX @@ static bool do_opfv(DisasContext *s, arg_rmr *a,
     uint32_t data = 0;
     TCGLabel *over = gen_new_label();
     gen_set_rm_chkfrm(s, rm);
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

     data = FIELD_DP32(data, VDATA, VM, a->vm);
@@ -XXX,XX +XXX,XX @@ static bool trans_vfmv_v_f(DisasContext *s, arg_vfmv_v_f *a)
     };
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

     t1 = tcg_temp_new_i64();
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
     }; \
     TCGLabel *over = gen_new_label(); \
     gen_set_rm_chkfrm(s, FRM); \
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
     \
     data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
     }; \
     TCGLabel *over = gen_new_label(); \
     gen_set_rm(s, RISCV_FRM_DYN); \
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
     \
     data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
     }; \
     TCGLabel *over = gen_new_label(); \
     gen_set_rm_chkfrm(s, FRM); \
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
     \
     data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_r *a) \
     uint32_t data = 0; \
     gen_helper_gvec_4_ptr *fn = gen_helper_##NAME; \
     TCGLabel *over = gen_new_label(); \
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
     \
     data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
@@ -XXX,XX +XXX,XX @@ static bool trans_vid_v(DisasContext *s, arg_vid_v *a)
         require_vm(a->vm, a->rd)) {
         uint32_t data = 0;
         TCGLabel *over = gen_new_label();
-        tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
         tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

         data = FIELD_DP32(data, VDATA, VM, a->vm);
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_s_x(DisasContext *s, arg_vmv_s_x *a)
     TCGv s1;
     TCGLabel *over = gen_new_label();
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

     t1 = tcg_temp_new_i64();
@@ -XXX,XX +XXX,XX @@ static bool trans_vfmv_s_f(DisasContext *s, arg_vfmv_s_f *a)
     TCGv_i64 t1;
     TCGLabel *over = gen_new_label();

-    /* if vl == 0 or vstart >= vl, skip vector register write back */
-    tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
+    /* if vstart >= vl, skip vector register write back */
int scale = s->lmul - (s->sew + 3);
242
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
555
- int vlmax = scale < 0 ? s->vlen >> -scale : s->vlen << scale;
243
556
+ int vlmax = scale < 0 ?
244
/* NaN-box f[rs1] */
557
+ s->cfg_ptr->vlen >> -scale : s->cfg_ptr->vlen << scale;
558
if (a->rs1 >= vlmax) {
559
tcg_gen_gvec_dup_imm(MO_64, vreg_ofs(s, a->rd),
560
MAXSZ(s), MAXSZ(s), 0);
561
@@ -XXX,XX +XXX,XX @@ static bool trans_vcompress_vm(DisasContext *s, arg_r *a)
562
data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
563
tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
564
vreg_ofs(s, a->rs1), vreg_ofs(s, a->rs2),
565
- cpu_env, s->vlen / 8, s->vlen / 8, data,
566
+ cpu_env, s->cfg_ptr->vlen / 8,
567
+ s->cfg_ptr->vlen / 8, data,
568
fns[s->sew]);
569
mark_vs_dirty(s);
570
gen_set_label(over);
571
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_##NAME * a) \
572
if (require_rvv(s) && \
573
QEMU_IS_ALIGNED(a->rd, LEN) && \
574
QEMU_IS_ALIGNED(a->rs2, LEN)) { \
575
- uint32_t maxsz = (s->vlen >> 3) * LEN; \
576
+ uint32_t maxsz = (s->cfg_ptr->vlen >> 3) * LEN; \
577
if (s->vstart == 0) { \
578
/* EEW = 8 */ \
579
tcg_gen_gvec_mov(MO_8, vreg_ofs(s, a->rd), \
580
@@ -XXX,XX +XXX,XX @@ static bool int_ext_op(DisasContext *s, arg_rmr *a, uint8_t seq)
245
@@ -XXX,XX +XXX,XX @@ static bool int_ext_op(DisasContext *s, arg_rmr *a, uint8_t seq)
581
246
uint32_t data = 0;
582
tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
247
gen_helper_gvec_3_ptr *fn;
583
vreg_ofs(s, a->rs2), cpu_env,
248
TCGLabel *over = gen_new_label();
584
- s->vlen / 8, s->vlen / 8, data, fn);
249
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
585
+ s->cfg_ptr->vlen / 8,
250
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
586
+ s->cfg_ptr->vlen / 8, data, fn);
251
587
252
static gen_helper_gvec_3_ptr * const fns[6][4] = {
588
mark_vs_dirty(s);
589
gen_set_label(over);
590
diff --git a/target/riscv/insn_trans/trans_rvzfh.c.inc b/target/riscv/insn_trans/trans_rvzfh.c.inc
591
index XXXXXXX..XXXXXXX 100644
592
--- a/target/riscv/insn_trans/trans_rvzfh.c.inc
593
+++ b/target/riscv/insn_trans/trans_rvzfh.c.inc
594
@@ -XXX,XX +XXX,XX @@
595
*/
596
597
#define REQUIRE_ZFH(ctx) do { \
598
- if (!ctx->ext_zfh) { \
599
+ if (!ctx->cfg_ptr->ext_zfh) { \
600
return false; \
601
} \
602
} while (0)
603
604
#define REQUIRE_ZFH_OR_ZFHMIN(ctx) do { \
605
- if (!(ctx->ext_zfh || ctx->ext_zfhmin)) { \
606
+ if (!(ctx->cfg_ptr->ext_zfh || ctx->cfg_ptr->ext_zfhmin)) { \
607
return false; \
608
} \
609
} while (0)
610
--
253
--
611
2.34.1
254
2.41.0
612
613
diff view generated by jsdifflib
From: Lawrence Hunter <lawrence.hunter@codethink.co.uk>

This commit adds support for the Zvbc vector-crypto extension, which
consists of the following instructions:

* vclmulh.[vx,vv]
* vclmul.[vx,vv]

Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.

Co-authored-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Co-authored-by: Max Chou <max.chou@sifive.com>
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
Signed-off-by: Max Chou <max.chou@sifive.com>
[max.chou@sifive.com: Exposed x-zvbc property]
Message-ID: <20230711165917.2629866-5-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_cfg.h                   |  1 +
 target/riscv/helper.h                    |  6 +++
 target/riscv/insn32.decode               |  6 +++
 target/riscv/cpu.c                       |  9 ++++
 target/riscv/translate.c                 |  1 +
 target/riscv/vcrypto_helper.c            | 59 ++++++++++++++++++++++
 target/riscv/insn_trans/trans_rvvk.c.inc | 62 ++++++++++++++++++++++++
 target/riscv/meson.build                 |  3 +-
 8 files changed, 146 insertions(+), 1 deletion(-)
 create mode 100644 target/riscv/vcrypto_helper.c
 create mode 100644 target/riscv/insn_trans/trans_rvvk.c.inc

diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
     bool ext_zve32f;
     bool ext_zve64f;
     bool ext_zve64d;
+    bool ext_zvbc;
     bool ext_zmmul;
     bool ext_zvfbfmin;
     bool ext_zvfbfwma;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_5(vfwcvtbf16_f_f_v, void, ptr, ptr, ptr, env, i32)

 DEF_HELPER_6(vfwmaccbf16_vv, void, ptr, ptr, ptr, ptr, env, i32)
 DEF_HELPER_6(vfwmaccbf16_vf, void, ptr, ptr, i64, ptr, env, i32)
+
+/* Vector crypto functions */
+DEF_HELPER_6(vclmul_vv, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vclmul_vx, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vclmulh_vv, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vclmulh_vx, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -XXX,XX +XXX,XX @@ vfwcvtbf16_f_f_v  010010 . ..... 01101 001 ..... 1010111 @r2_vm
 # *** Zvfbfwma Standard Extension ***
 vfwmaccbf16_vv    111011 . ..... ..... 001 ..... 1010111 @r_vm
 vfwmaccbf16_vf    111011 . ..... ..... 101 ..... 1010111 @r_vm
+
+# *** Zvbc vector crypto extension ***
+vclmul_vv       001100 . ..... ..... 010 ..... 1010111 @r_vm
+vclmul_vx       001100 . ..... ..... 110 ..... 1010111 @r_vm
+vclmulh_vv      001101 . ..... ..... 010 ..... 1010111 @r_vm
+vclmulh_vx      001101 . ..... ..... 110 ..... 1010111 @r_vm
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
     ISA_EXT_DATA_ENTRY(zksed, PRIV_VERSION_1_12_0, ext_zksed),
     ISA_EXT_DATA_ENTRY(zksh, PRIV_VERSION_1_12_0, ext_zksh),
     ISA_EXT_DATA_ENTRY(zkt, PRIV_VERSION_1_12_0, ext_zkt),
+    ISA_EXT_DATA_ENTRY(zvbc, PRIV_VERSION_1_12_0, ext_zvbc),
     ISA_EXT_DATA_ENTRY(zve32f, PRIV_VERSION_1_10_0, ext_zve32f),
     ISA_EXT_DATA_ENTRY(zve64f, PRIV_VERSION_1_10_0, ext_zve64f),
     ISA_EXT_DATA_ENTRY(zve64d, PRIV_VERSION_1_10_0, ext_zve64d),
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
         return;
     }

+    if (cpu->cfg.ext_zvbc && !cpu->cfg.ext_zve64f) {
+        error_setg(errp, "Zvbc extension requires V or Zve64{f,d} extensions");
+        return;
+    }
+
     if (cpu->cfg.ext_zk) {
         cpu->cfg.ext_zkn = true;
         cpu->cfg.ext_zkr = true;
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
     DEFINE_PROP_BOOL("x-zvfbfmin", RISCVCPU, cfg.ext_zvfbfmin, false),
     DEFINE_PROP_BOOL("x-zvfbfwma", RISCVCPU, cfg.ext_zvfbfwma, false),

+    /* Vector cryptography extensions */
+    DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
+
     DEFINE_PROP_END_OF_LIST(),
 };

diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@ static uint32_t opcode_at(DisasContextBase *dcbase, target_ulong pc)
 #include "insn_trans/trans_rvzfa.c.inc"
 #include "insn_trans/trans_rvzfh.c.inc"
 #include "insn_trans/trans_rvk.c.inc"
+#include "insn_trans/trans_rvvk.c.inc"
 #include "insn_trans/trans_privileged.c.inc"
 #include "insn_trans/trans_svinval.c.inc"
 #include "insn_trans/trans_rvbf16.c.inc"
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/riscv/vcrypto_helper.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * RISC-V Vector Crypto Extension Helpers for QEMU.
+ *
+ * Copyright (C) 2023 SiFive, Inc.
+ * Written by Codethink Ltd and SiFive.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
...
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/host-utils.h"
+#include "qemu/bitops.h"
+#include "cpu.h"
+#include "exec/memop.h"
+#include "exec/exec-all.h"
+#include "exec/helper-proto.h"
+#include "internals.h"
+#include "vector_internals.h"
+
+static uint64_t clmul64(uint64_t y, uint64_t x)
+{
+    uint64_t result = 0;
+    for (int j = 63; j >= 0; j--) {
+        if ((y >> j) & 1) {
+            result ^= (x << j);
+        }
+    }
+    return result;
+}
+
+static uint64_t clmulh64(uint64_t y, uint64_t x)
+{
+    uint64_t result = 0;
+    for (int j = 63; j >= 1; j--) {
+        if ((y >> j) & 1) {
+            result ^= (x >> (64 - j));
+        }
+    }
+    return result;
+}
+
+RVVCALL(OPIVV2, vclmul_vv, OP_UUU_D, H8, H8, H8, clmul64)
+GEN_VEXT_VV(vclmul_vv, 8)
+RVVCALL(OPIVX2, vclmul_vx, OP_UUU_D, H8, H8, clmul64)
+GEN_VEXT_VX(vclmul_vx, 8)
+RVVCALL(OPIVV2, vclmulh_vv, OP_UUU_D, H8, H8, H8, clmulh64)
+GEN_VEXT_VV(vclmulh_vv, 8)
+RVVCALL(OPIVX2, vclmulh_vx, OP_UUU_D, H8, H8, clmulh64)
+GEN_VEXT_VX(vclmulh_vx, 8)
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -XXX,XX +XXX,XX @@
+/*
+ * RISC-V translation routines for the vector crypto extension.
+ *
+ * Copyright (C) 2023 SiFive, Inc.
+ * Written by Codethink Ltd and SiFive.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
...
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+/*
+ * Zvbc
+ */
+
+#define GEN_VV_MASKED_TRANS(NAME, CHECK)                     \
+    static bool trans_##NAME(DisasContext *s, arg_rmrr *a)   \
+    {                                                        \
+        if (CHECK(s, a)) {                                   \
+            return opivv_trans(a->rd, a->rs1, a->rs2, a->vm, \
+                               gen_helper_##NAME, s);        \
+        }                                                    \
+        return false;                                        \
+    }
+
+static bool vclmul_vv_check(DisasContext *s, arg_rmrr *a)
+{
+    return opivv_check(s, a) &&
+           s->cfg_ptr->ext_zvbc == true &&
+           s->sew == MO_64;
+}
+
+GEN_VV_MASKED_TRANS(vclmul_vv, vclmul_vv_check)
+GEN_VV_MASKED_TRANS(vclmulh_vv, vclmul_vv_check)
+
+#define GEN_VX_MASKED_TRANS(NAME, CHECK)                     \
+    static bool trans_##NAME(DisasContext *s, arg_rmrr *a)   \
+    {                                                        \
+        if (CHECK(s, a)) {                                   \
+            return opivx_trans(a->rd, a->rs1, a->rs2, a->vm, \
+                               gen_helper_##NAME, s);        \
+        }                                                    \
+        return false;                                        \
+    }
+
+static bool vclmul_vx_check(DisasContext *s, arg_rmrr *a)
+{
+    return opivx_check(s, a) &&
+           s->cfg_ptr->ext_zvbc == true &&
+           s->sew == MO_64;
+}
+
+GEN_VX_MASKED_TRANS(vclmul_vx, vclmul_vx_check)
+GEN_VX_MASKED_TRANS(vclmulh_vx, vclmul_vx_check)
diff --git a/target/riscv/meson.build b/target/riscv/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/meson.build
+++ b/target/riscv/meson.build
@@ -XXX,XX +XXX,XX @@ riscv_ss.add(files(
   'translate.c',
   'm128_helper.c',
   'crypto_helper.c',
-  'zce_helper.c'
+  'zce_helper.c',
+  'vcrypto_helper.c'
 ))
 riscv_ss.add(when: 'CONFIG_KVM', if_true: files('kvm.c'), if_false: files('kvm-stub.c'))
--
2.41.0

diff view generated by jsdifflib
From: Anup Patel <anup.patel@wdc.com>

The AIA specification adds new CSRs for RV32 so that a RISC-V hart can
support 64 local interrupts on both RV32 and RV64.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-11-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.h        |  14 +-
 target/riscv/cpu_helper.c |  10 +-
 target/riscv/csr.c        | 560 +++++++++++++++++++++++++++++++-------
 target/riscv/machine.c    |  10 +-
 4 files changed, 474 insertions(+), 120 deletions(-)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@ struct CPURISCVState {
      */
     uint64_t mstatus;

-    target_ulong mip;
+    uint64_t mip;

-    uint32_t miclaim;
+    uint64_t miclaim;

-    target_ulong mie;
-    target_ulong mideleg;
+    uint64_t mie;
+    uint64_t mideleg;

     target_ulong satp;   /* since: priv-1.10.0 */
     target_ulong stval;
@@ -XXX,XX +XXX,XX @@ struct CPURISCVState {
     /* Hypervisor CSRs */
     target_ulong hstatus;
     target_ulong hedeleg;
-    target_ulong hideleg;
+    uint64_t hideleg;
     target_ulong hcounteren;
     target_ulong htval;
     target_ulong htinst;
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_list(void);
 #ifndef CONFIG_USER_ONLY
 bool riscv_cpu_exec_interrupt(CPUState *cs, int interrupt_request);
 void riscv_cpu_swap_hypervisor_regs(CPURISCVState *env);
-int riscv_cpu_claim_interrupts(RISCVCPU *cpu, uint32_t interrupts);
-uint32_t riscv_cpu_update_mip(RISCVCPU *cpu, uint32_t mask, uint32_t value);
+int riscv_cpu_claim_interrupts(RISCVCPU *cpu, uint64_t interrupts);
+uint64_t riscv_cpu_update_mip(RISCVCPU *cpu, uint64_t mask, uint64_t value);
 #define BOOL_TO_MASK(x) (-!!(x)) /* helper for riscv_cpu_update_mip value */
 void riscv_cpu_set_rdtime_fn(CPURISCVState *env, uint64_t (*fn)(uint32_t),
                              uint32_t arg);
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -XXX,XX +XXX,XX @@ bool riscv_cpu_two_stage_lookup(int mmu_idx)
     return mmu_idx & TB_FLAGS_PRIV_HYP_ACCESS_MASK;
 }

-int riscv_cpu_claim_interrupts(RISCVCPU *cpu, uint32_t interrupts)
+int riscv_cpu_claim_interrupts(RISCVCPU *cpu, uint64_t interrupts)
 {
     CPURISCVState *env = &cpu->env;
     if (env->miclaim & interrupts) {
@@ -XXX,XX +XXX,XX @@ int riscv_cpu_claim_interrupts(RISCVCPU *cpu, uint32_t interrupts)
     }
 }

-uint32_t riscv_cpu_update_mip(RISCVCPU *cpu, uint32_t mask, uint32_t value)
+uint64_t riscv_cpu_update_mip(RISCVCPU *cpu, uint64_t mask, uint64_t value)
 {
     CPURISCVState *env = &cpu->env;
     CPUState *cs = CPU(cpu);
-    uint32_t gein, vsgein = 0, old = env->mip;
+    uint64_t gein, vsgein = 0, old = env->mip;
     bool locked = false;

     if (riscv_cpu_virt_enabled(env)) {
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_do_interrupt(CPUState *cs)
      */
     bool async = !!(cs->exception_index & RISCV_EXCP_INT_FLAG);
     target_ulong cause = cs->exception_index & RISCV_EXCP_INT_MASK;
-    target_ulong deleg = async ? env->mideleg : env->medeleg;
+    uint64_t deleg = async ? env->mideleg : env->medeleg;
     target_ulong tval = 0;
     target_ulong htval = 0;
     target_ulong mtval2 = 0;
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_do_interrupt(CPUState *cs)
         cause < TARGET_LONG_BITS && ((deleg >> cause) & 1)) {
         /* handle the trap in S-mode */
         if (riscv_has_ext(env, RVH)) {
-            target_ulong hdeleg = async ? env->hideleg : env->hedeleg;
+            uint64_t hdeleg = async ? env->hideleg : env->hedeleg;

             if (riscv_cpu_virt_enabled(env) && ((hdeleg >> cause) & 1)) {
                 /* Trap to VS mode */
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static RISCVException any32(CPURISCVState *env, int csrno)

 }

+static int aia_any32(CPURISCVState *env, int csrno)
+{
+    if (!riscv_feature(env, RISCV_FEATURE_AIA)) {
+        return RISCV_EXCP_ILLEGAL_INST;
+    }
+
+    return any32(env, csrno);
+}
+
 static RISCVException smode(CPURISCVState *env, int csrno)
 {
     if (riscv_has_ext(env, RVS)) {
@@ -XXX,XX +XXX,XX @@ static RISCVException smode(CPURISCVState *env, int csrno)
     return RISCV_EXCP_ILLEGAL_INST;
 }

+static int smode32(CPURISCVState *env, int csrno)
+{
+    if (riscv_cpu_mxl(env) != MXL_RV32) {
+        return RISCV_EXCP_ILLEGAL_INST;
+    }
+
+    return smode(env, csrno);
+}
+
+static int aia_smode32(CPURISCVState *env, int csrno)
+{
+    if (!riscv_feature(env, RISCV_FEATURE_AIA)) {
+        return RISCV_EXCP_ILLEGAL_INST;
+    }
+
+    return smode32(env, csrno);
+}
+
 static RISCVException hmode(CPURISCVState *env, int csrno)
 {
     if (riscv_has_ext(env, RVS) &&
@@ -XXX,XX +XXX,XX @@ static RISCVException pointer_masking(CPURISCVState *env, int csrno)
     return RISCV_EXCP_ILLEGAL_INST;
 }

+static int aia_hmode32(CPURISCVState *env, int csrno)
+{
+    if (!riscv_feature(env, RISCV_FEATURE_AIA)) {
+        return RISCV_EXCP_ILLEGAL_INST;
+    }
+
+    return hmode32(env, csrno);
+}
+
 static RISCVException pmp(CPURISCVState *env, int csrno)
 {
     if (riscv_feature(env, RISCV_FEATURE_PMP)) {
@@ -XXX,XX +XXX,XX @@ static RISCVException read_timeh(CPURISCVState *env, int csrno,

 /* Machine constants */

-#define M_MODE_INTERRUPTS (MIP_MSIP | MIP_MTIP | MIP_MEIP)
-#define S_MODE_INTERRUPTS (MIP_SSIP | MIP_STIP | MIP_SEIP)
-#define VS_MODE_INTERRUPTS (MIP_VSSIP | MIP_VSTIP | MIP_VSEIP)
-#define HS_MODE_INTERRUPTS (MIP_SGEIP | VS_MODE_INTERRUPTS)
+#define M_MODE_INTERRUPTS ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
+#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP))
+#define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
+#define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))

-static const target_ulong delegable_ints = S_MODE_INTERRUPTS |
+static const uint64_t delegable_ints = S_MODE_INTERRUPTS |
                                            VS_MODE_INTERRUPTS;
-static const target_ulong vs_delegable_ints = VS_MODE_INTERRUPTS;
-static const target_ulong all_ints = M_MODE_INTERRUPTS | S_MODE_INTERRUPTS |
+static const uint64_t vs_delegable_ints = VS_MODE_INTERRUPTS;
+static const uint64_t all_ints = M_MODE_INTERRUPTS | S_MODE_INTERRUPTS |
                                  HS_MODE_INTERRUPTS;
 #define DELEGABLE_EXCPS ((1ULL << (RISCV_EXCP_INST_ADDR_MIS)) | \
                          (1ULL << (RISCV_EXCP_INST_ACCESS_FAULT)) | \
@@ -XXX,XX +XXX,XX @@ static RISCVException write_medeleg(CPURISCVState *env, int csrno,
     return RISCV_EXCP_NONE;
 }

-static RISCVException read_mideleg(CPURISCVState *env, int csrno,
-                                   target_ulong *val)
+static RISCVException rmw_mideleg64(CPURISCVState *env, int csrno,
+                                    uint64_t *ret_val,
+                                    uint64_t new_val, uint64_t wr_mask)
 {
-    *val = env->mideleg;
-    return RISCV_EXCP_NONE;
-}
+    uint64_t mask = wr_mask & delegable_ints;
+
+    if (ret_val) {
+        *ret_val = env->mideleg;
+    }
+
+    env->mideleg = (env->mideleg & ~mask) | (new_val & mask);

-static RISCVException write_mideleg(CPURISCVState *env, int csrno,
-                                    target_ulong val)
-{
-    env->mideleg = (env->mideleg & ~delegable_ints) | (val & delegable_ints);
     if (riscv_has_ext(env, RVH)) {
         env->mideleg |= HS_MODE_INTERRUPTS;
     }
+
     return RISCV_EXCP_NONE;
 }

-static RISCVException read_mie(CPURISCVState *env, int csrno,
-                               target_ulong *val)
+static RISCVException rmw_mideleg(CPURISCVState *env, int csrno,
+                                  target_ulong *ret_val,
+                                  target_ulong new_val, target_ulong wr_mask)
 {
-    *val = env->mie;
-    return RISCV_EXCP_NONE;
+    uint64_t rval;
+    RISCVException ret;
+
+    ret = rmw_mideleg64(env, csrno, &rval, new_val, wr_mask);
+    if (ret_val) {
+        *ret_val = rval;
+    }
+
+    return ret;
 }

-static RISCVException write_mie(CPURISCVState *env, int csrno,
-                                target_ulong val)
+static RISCVException rmw_midelegh(CPURISCVState *env, int csrno,
+                                   target_ulong *ret_val,
+                                   target_ulong new_val,
+                                   target_ulong wr_mask)
 {
-    env->mie = (env->mie & ~all_ints) | (val & all_ints);
+    uint64_t rval;
+    RISCVException ret;
+
+    ret = rmw_mideleg64(env, csrno, &rval,
+                        ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
+    if (ret_val) {
+        *ret_val = rval >> 32;
+    }
+
+    return ret;
+}
+
+static RISCVException rmw_mie64(CPURISCVState *env, int csrno,
+                                uint64_t *ret_val,
+                                uint64_t new_val, uint64_t wr_mask)
+{
+    uint64_t mask = wr_mask & all_ints;
+
+    if (ret_val) {
+        *ret_val = env->mie;
+    }
+
+    env->mie = (env->mie & ~mask) | (new_val & mask);
+
     if (!riscv_has_ext(env, RVH)) {
-        env->mie &= ~MIP_SGEIP;
+        env->mie &= ~((uint64_t)MIP_SGEIP);
     }
+
     return RISCV_EXCP_NONE;
 }

+static RISCVException rmw_mie(CPURISCVState *env, int csrno,
+                              target_ulong *ret_val,
+                              target_ulong new_val, target_ulong wr_mask)
+{
+    uint64_t rval;
+    RISCVException ret;
+
+    ret = rmw_mie64(env, csrno, &rval, new_val, wr_mask);
+    if (ret_val) {
+        *ret_val = rval;
+    }
+
+    return ret;
+}
+
+static RISCVException rmw_mieh(CPURISCVState *env, int csrno,
+                               target_ulong *ret_val,
+                               target_ulong new_val, target_ulong wr_mask)
+{
+    uint64_t rval;
+    RISCVException ret;
+
+    ret = rmw_mie64(env, csrno, &rval,
+                    ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
+    if (ret_val) {
+        *ret_val = rval >> 32;
+    }
+
+    return ret;
+}
+
 static RISCVException read_mtvec(CPURISCVState *env, int csrno,
                                  target_ulong *val)
 {
@@ -XXX,XX +XXX,XX @@ static RISCVException write_mtval(CPURISCVState *env, int csrno,
     return RISCV_EXCP_NONE;
 }

-static RISCVException rmw_mip(CPURISCVState *env, int csrno,
-                              target_ulong *ret_value,
-                              target_ulong new_value, target_ulong write_mask)
+static RISCVException rmw_mip64(CPURISCVState *env, int csrno,
+                                uint64_t *ret_val,
+                                uint64_t new_val, uint64_t wr_mask)
 {
     RISCVCPU *cpu = env_archcpu(env);
     /* Allow software control of delegable interrupts not claimed by hardware */
-    target_ulong mask = write_mask & delegable_ints & ~env->miclaim;
-    uint32_t gin, old_mip;
+    uint64_t old_mip, mask = wr_mask & delegable_ints & ~env->miclaim;
+    uint32_t gin;

     if (mask) {
-        old_mip = riscv_cpu_update_mip(cpu, mask, (new_value & mask));
+        old_mip = riscv_cpu_update_mip(cpu, mask, (new_val & mask));
     } else {
         old_mip = env->mip;
     }
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_mip(CPURISCVState *env, int csrno,
         old_mip |= (env->hgeip & ((target_ulong)1 << gin)) ? MIP_VSEIP : 0;
     }

-    if (ret_value) {
-        *ret_value = old_mip;
+    if (ret_val) {
+        *ret_val = old_mip;
     }

     return RISCV_EXCP_NONE;
 }

+static RISCVException rmw_mip(CPURISCVState *env, int csrno,
+                              target_ulong *ret_val,
+                              target_ulong new_val, target_ulong wr_mask)
+{
+    uint64_t rval;
+    RISCVException ret;
+
+    ret = rmw_mip64(env, csrno, &rval, new_val, wr_mask);
+    if (ret_val) {
+        *ret_val = rval;
+    }
+
+    return ret;
+}
+
+static RISCVException rmw_miph(CPURISCVState *env, int csrno,
+                               target_ulong *ret_val,
+                               target_ulong new_val, target_ulong wr_mask)
+{
+    uint64_t rval;
+    RISCVException ret;
+
+    ret = rmw_mip64(env, csrno, &rval,
+                    ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
+    if (ret_val) {
+        *ret_val = rval >> 32;
+    }
+
+    return ret;
+}
+
 /* Supervisor Trap Setup */
 static RISCVException read_sstatus_i128(CPURISCVState *env, int csrno,
                                         Int128 *val)
@@ -XXX,XX +XXX,XX @@ static RISCVException write_sstatus(CPURISCVState *env, int csrno,
     return write_mstatus(env, CSR_MSTATUS, newval);
 }

-static RISCVException read_vsie(CPURISCVState *env, int csrno,
-                                target_ulong *val)
+static RISCVException rmw_vsie64(CPURISCVState *env, int csrno,
+                                 uint64_t *ret_val,
+                                 uint64_t new_val, uint64_t wr_mask)
 {
-    /* Shift the VS bits to their S bit location in vsie */
-    *val = (env->mie & env->hideleg & VS_MODE_INTERRUPTS) >> 1;
-    return RISCV_EXCP_NONE;
+    RISCVException ret;
+    uint64_t rval, vsbits, mask = env->hideleg & VS_MODE_INTERRUPTS;
+
+    /* Bring VS-level bits to correct position */
+    vsbits = new_val & (VS_MODE_INTERRUPTS >> 1);
+    new_val &= ~(VS_MODE_INTERRUPTS >> 1);
+    new_val |= vsbits << 1;
+    vsbits = wr_mask & (VS_MODE_INTERRUPTS >> 1);
+    wr_mask &= ~(VS_MODE_INTERRUPTS >> 1);
+    wr_mask |= vsbits << 1;
+
+    ret = rmw_mie64(env, csrno, &rval, new_val, wr_mask & mask);
+    if (ret_val) {
+        rval &= mask;
+        vsbits = rval & VS_MODE_INTERRUPTS;
+        rval &= ~VS_MODE_INTERRUPTS;
+        *ret_val = rval | (vsbits >> 1);
+    }
+
+    return ret;
 }

-static RISCVException read_sie(CPURISCVState *env, int csrno,
-                               target_ulong *val)
+static RISCVException rmw_vsie(CPURISCVState *env, int csrno,
+                               target_ulong *ret_val,
+                               target_ulong new_val, target_ulong wr_mask)
 {
-    if (riscv_cpu_virt_enabled(env)) {
-        read_vsie(env, CSR_VSIE, val);
-    } else {
-        *val = env->mie & env->mideleg;
+    uint64_t rval;
+    RISCVException ret;
+
+    ret = rmw_vsie64(env, csrno, &rval, new_val, wr_mask);
+    if (ret_val) {
+        *ret_val = rval;
     }
-    return RISCV_EXCP_NONE;
+
+    return ret;
 }

-static RISCVException write_vsie(CPURISCVState *env, int csrno,
-                                 target_ulong val)
+static RISCVException rmw_vsieh(CPURISCVState *env, int csrno,
+                                target_ulong *ret_val,
+                                target_ulong new_val, target_ulong wr_mask)
 {
-    /* Shift the S bits to their VS bit location in mie */
-    target_ulong newval = (env->mie & ~VS_MODE_INTERRUPTS) |
-                          ((val << 1) & env->hideleg & VS_MODE_INTERRUPTS);
-    return write_mie(env, CSR_MIE, newval);
+    uint64_t rval;
+    RISCVException ret;
+
+    ret = rmw_vsie64(env, csrno, &rval,
+                     ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
+    if (ret_val) {
+        *ret_val = rval >> 32;
+    }
+
+    return ret;
 }

-static int write_sie(CPURISCVState *env, int csrno, target_ulong val)
+static RISCVException rmw_sie64(CPURISCVState *env, int csrno,
+                                uint64_t *ret_val,
+                                uint64_t new_val, uint64_t wr_mask)
 {
+    RISCVException ret;
+    uint64_t mask = env->mideleg & S_MODE_INTERRUPTS;
+
     if (riscv_cpu_virt_enabled(env)) {
-        write_vsie(env, CSR_VSIE, val);
+        ret = rmw_vsie64(env, CSR_VSIE, ret_val, new_val, wr_mask);
     } else {
-        target_ulong newval = (env->mie & ~S_MODE_INTERRUPTS) |
-                              (val & S_MODE_INTERRUPTS);
-        write_mie(env, CSR_MIE, newval);
+        ret = rmw_mie64(env, csrno, ret_val, new_val, wr_mask & mask);
     }

-    return RISCV_EXCP_NONE;
+    if (ret_val) {
+        *ret_val &= mask;
+    }
+
+    return ret;
+}
+
+static RISCVException rmw_sie(CPURISCVState *env, int csrno,
+                              target_ulong *ret_val,
+                              target_ulong new_val, target_ulong wr_mask)
+{
+    uint64_t rval;
+    RISCVException ret;
+
+    ret = rmw_sie64(env, csrno, &rval, new_val, wr_mask);
+    if (ret_val) {
+        *ret_val = rval;
+    }
+
+    return ret;
+}
+
+static RISCVException rmw_sieh(CPURISCVState *env, int csrno,
+                               target_ulong *ret_val,
+                               target_ulong new_val, target_ulong wr_mask)
+{
+    uint64_t rval;
+    RISCVException ret;
+
+    ret = rmw_sie64(env, csrno, &rval,
+                    ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
+    if (ret_val) {
+        *ret_val = rval >> 32;
+    }
+
+    return ret;
 }

 static RISCVException read_stvec(CPURISCVState *env, int csrno,
@@ -XXX,XX +XXX,XX @@ static RISCVException write_stval(CPURISCVState *env, int csrno,
     return RISCV_EXCP_NONE;
 }

+static RISCVException rmw_vsip64(CPURISCVState *env, int csrno,
+                                 uint64_t *ret_val,
+                                 uint64_t new_val, uint64_t wr_mask)
+{
+    RISCVException ret;
+    uint64_t rval, vsbits, mask = env->hideleg & vsip_writable_mask;
+
+    /* Bring VS-level bits to correct position */
+    vsbits = new_val & (VS_MODE_INTERRUPTS >> 1);
+    new_val &= ~(VS_MODE_INTERRUPTS >> 1);
+    new_val |= vsbits << 1;
+    vsbits = wr_mask & (VS_MODE_INTERRUPTS >> 1);
+    wr_mask &= ~(VS_MODE_INTERRUPTS >> 1);
+    wr_mask |= vsbits << 1;
+
+    ret = rmw_mip64(env, csrno, &rval, new_val, wr_mask & mask);
+    if (ret_val) {
+        rval &= mask;
+        vsbits = rval & VS_MODE_INTERRUPTS;
+        rval &= ~VS_MODE_INTERRUPTS;
+        *ret_val = rval | (vsbits >> 1);
+    }
+
+    return ret;
+}
+
 static RISCVException rmw_vsip(CPURISCVState *env, int csrno,
-                               target_ulong *ret_value,
-                               target_ulong new_value, target_ulong write_mask)
+                               target_ulong *ret_val,
+                               target_ulong new_val, target_ulong wr_mask)
 {
-    /* Shift the S bits to their VS bit location in mip */
-    int ret = rmw_mip(env, csrno, ret_value, new_value << 1,
-                      (write_mask << 1) & vsip_writable_mask & env->hideleg);
+    uint64_t rval;
+    RISCVException ret;

-    if (ret_value) {
-        *ret_value &= VS_MODE_INTERRUPTS;
-        /* Shift the VS bits to their S bit location in vsip */
-        *ret_value >>= 1;
+    ret = rmw_vsip64(env, csrno, &rval, new_val, wr_mask);
+    if (ret_val) {
+        *ret_val = rval;
     }
+
     return ret;
 }

-static RISCVException rmw_sip(CPURISCVState *env, int csrno,
-                              target_ulong *ret_value,
-                              target_ulong new_value, target_ulong write_mask)
+static RISCVException rmw_vsiph(CPURISCVState *env, int csrno,
+                                target_ulong *ret_val,
+                                target_ulong new_val, target_ulong wr_mask)
 {
-    int ret;
+    uint64_t rval;
+    RISCVException ret;
+
+    ret = rmw_vsip64(env, csrno, &rval,
+                     ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
+    if (ret_val) {
+        *ret_val = rval >> 32;
+    }
+
+    return ret;
+}
+
+static RISCVException rmw_sip64(CPURISCVState *env, int csrno,
+                               uint64_t *ret_val,
+                               uint64_t new_val, uint64_t wr_mask)
+{
+    RISCVException ret;
+    uint64_t mask = env->mideleg & sip_writable_mask;

     if (riscv_cpu_virt_enabled(env)) {
-        ret = rmw_vsip(env, CSR_VSIP, ret_value, new_value, write_mask);
+        ret = rmw_vsip64(env, CSR_VSIP, ret_val, new_val, wr_mask);
     } else {
-        ret = rmw_mip(env, csrno, ret_value, new_value,
-                      write_mask & env->mideleg & sip_writable_mask);
+        ret = rmw_mip64(env, csrno, ret_val, new_val, wr_mask & mask);
     }

-    if (ret_value) {
-        *ret_value &= env->mideleg & S_MODE_INTERRUPTS;
+    if (ret_val) {
+        *ret_val &= env->mideleg & S_MODE_INTERRUPTS;
+    }
+
+    return ret;
+}
+
+static RISCVException rmw_sip(CPURISCVState *env, int csrno,
+                              target_ulong *ret_val,
+                              target_ulong new_val, target_ulong wr_mask)
+{
+    uint64_t rval;
+    RISCVException ret;
+
+    ret = rmw_sip64(env, csrno, &rval, new_val, wr_mask);
+    if (ret_val) {
+        *ret_val = rval;
     }
+
+    return ret;
+}
+
+static RISCVException rmw_siph(CPURISCVState *env, int csrno,
+                               target_ulong *ret_val,
+                               target_ulong new_val, target_ulong wr_mask)
+{
+    uint64_t rval;
+    RISCVException ret;
+
+    ret = rmw_sip64(env, csrno, &rval,
+                    ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
+    if (ret_val) {
+        *ret_val = rval >> 32;
+    }
+
     return ret;
 }

@@ -XXX,XX +XXX,XX @@ static RISCVException write_hedeleg(CPURISCVState *env, int csrno,
     return RISCV_EXCP_NONE;
 }

-static RISCVException read_hideleg(CPURISCVState *env, int csrno,
-                                   target_ulong *val)
+static RISCVException rmw_hideleg64(CPURISCVState *env, int csrno,
+                                    uint64_t *ret_val,
+                                    uint64_t new_val, uint64_t wr_mask)
 {
-    *val = env->hideleg;
+    uint64_t mask = wr_mask & vs_delegable_ints;
+
+    if (ret_val) {
+        *ret_val = env->hideleg & vs_delegable_ints;
+    }
+
+    env->hideleg = (env->hideleg & ~mask) | (new_val & mask);
     return RISCV_EXCP_NONE;
 }

-static RISCVException write_hideleg(CPURISCVState *env, int csrno,
-                                    target_ulong val)
+static RISCVException rmw_hideleg(CPURISCVState *env, int csrno,
+                                  target_ulong *ret_val,
+                                  target_ulong new_val, target_ulong wr_mask)
 {
-    env->hideleg = val & vs_delegable_ints;
-    return RISCV_EXCP_NONE;
+    uint64_t rval;
+    RISCVException ret;
+
+    ret = rmw_hideleg64(env, csrno, &rval, new_val, wr_mask);
+    if (ret_val) {
+        *ret_val = rval;
+    }
+
+    return ret;
+}
+
+static RISCVException rmw_hidelegh(CPURISCVState *env, int csrno,
+                                   target_ulong *ret_val,
+                                   target_ulong new_val, target_ulong wr_mask)
+{
+    uint64_t rval;
+    RISCVException ret;
+
+    ret = rmw_hideleg64(env, csrno, &rval,
+                        ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
+    if (ret_val) {
+        *ret_val = rval >> 32;
+    }
+
+    return ret;
+}
+
+static RISCVException rmw_hvip64(CPURISCVState *env, int csrno,
+                                 uint64_t *ret_val,
+                                 uint64_t new_val, uint64_t wr_mask)
+{
+    RISCVException ret;
+
+    ret = rmw_mip64(env, csrno, ret_val, new_val,
+                    wr_mask & hvip_writable_mask);
+    if (ret_val) {
+        *ret_val &= VS_MODE_INTERRUPTS;
+    }
+
+    return ret;
 }

 static RISCVException rmw_hvip(CPURISCVState *env, int csrno,
-                               target_ulong *ret_value,
-                               target_ulong new_value, target_ulong write_mask)
+                               target_ulong *ret_val,
+                               target_ulong new_val, target_ulong wr_mask)
 {
-    int ret = rmw_mip(env, csrno, ret_value, new_value,
-                      write_mask & hvip_writable_mask);
+    uint64_t rval;
+    RISCVException ret;

-    if (ret_value) {
-        *ret_value &= VS_MODE_INTERRUPTS;
+    ret = rmw_hvip64(env, csrno, &rval, new_val, wr_mask);
+    if (ret_val) {
+        *ret_val = rval;
+    }
+
+    return ret;
+}
+
+static RISCVException rmw_hviph(CPURISCVState *env, int csrno,
+                                target_ulong *ret_val,
+                                target_ulong new_val, target_ulong wr_mask)
+{
+    uint64_t rval;
+    RISCVException ret;
+
+    ret = rmw_hvip64(env, csrno, &rval,
+                     ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
+    if (ret_val) {
+        *ret_val = rval >> 32;
     }
+
     return ret;
 }

@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_hip(CPURISCVState *env, int csrno,
     return ret;
 }

-static RISCVException read_hie(CPURISCVState *env, int csrno,
-                               target_ulong *val)
+static RISCVException rmw_hie(CPURISCVState *env, int csrno,
+                              target_ulong *ret_val,
+                              target_ulong new_val, target_ulong wr_mask)
 {
-    *val = env->mie & HS_MODE_INTERRUPTS;
-    return RISCV_EXCP_NONE;
-}
+    uint64_t rval;
+    RISCVException ret;

-static RISCVException write_hie(CPURISCVState *env, int csrno,
-                                target_ulong val)
-{
-    target_ulong newval = (env->mie & ~HS_MODE_INTERRUPTS) | (val & HS_MODE_INTERRUPTS);
-    return write_mie(env, CSR_MIE, newval);
+    ret = rmw_mie64(env, csrno, &rval, new_val, wr_mask & HS_MODE_INTERRUPTS);
+    if (ret_val) {
+        *ret_val = rval & HS_MODE_INTERRUPTS;
+    }
+
+    return ret;
 }

 static RISCVException read_hcounteren(CPURISCVState *env, int csrno,
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
                          read_mstatus_i128 },
     [CSR_MISA] = { "misa", any, read_misa, write_misa, NULL,
                    read_misa_i128 },
-    [CSR_MIDELEG] = { "mideleg", any, read_mideleg, write_mideleg },
+    [CSR_MIDELEG] = { "mideleg", any, NULL, NULL, rmw_mideleg },
     [CSR_MEDELEG] = { "medeleg", any, read_medeleg, write_medeleg },
-    [CSR_MIE] = { "mie", any, read_mie, write_mie },
+    [CSR_MIE] = { "mie", any, NULL, NULL, rmw_mie },
     [CSR_MTVEC] = { "mtvec", any, read_mtvec, write_mtvec },
     [CSR_MCOUNTEREN] = { "mcounteren", any, read_mcounteren, write_mcounteren },

@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_MTVAL] = { "mtval", any, read_mtval, write_mtval },
     [CSR_MIP] = { "mip", any, NULL, NULL, rmw_mip },

+    /* Machine-Level High-Half CSRs (AIA) */
+    [CSR_MIDELEGH] = { "midelegh", aia_any32, NULL, NULL, rmw_midelegh },
+    [CSR_MIEH] = { "mieh", aia_any32, NULL, NULL, rmw_mieh },
+    [CSR_MIPH] = { "miph", aia_any32, NULL, NULL, rmw_miph },
+
     /* Supervisor Trap Setup */
     [CSR_SSTATUS] = { "sstatus", smode, read_sstatus, write_sstatus, NULL,
                       read_sstatus_i128 },
-    [CSR_SIE] = { "sie", smode, read_sie, write_sie },
+    [CSR_SIE] = { "sie", smode, NULL, NULL, rmw_sie },
     [CSR_STVEC] = { "stvec", smode, read_stvec, write_stvec },
     [CSR_SCOUNTEREN] = { "scounteren", smode, read_scounteren, write_scounteren },

@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     /* Supervisor Protection and Translation */
     [CSR_SATP] = { "satp", smode, read_satp, write_satp },

+    /* Supervisor-Level High-Half CSRs (AIA) */
+    [CSR_SIEH] = { "sieh", aia_smode32, NULL, NULL, rmw_sieh },
+    [CSR_SIPH] = { "siph", aia_smode32, NULL, NULL, rmw_siph },
+
     [CSR_HSTATUS] = { "hstatus", hmode, read_hstatus, write_hstatus },
     [CSR_HEDELEG] = { "hedeleg", hmode, read_hedeleg, write_hedeleg },
-    [CSR_HIDELEG] = { "hideleg", hmode, read_hideleg, write_hideleg },
+    [CSR_HIDELEG] = { "hideleg", hmode, NULL, NULL, rmw_hideleg },
     [CSR_HVIP] = { "hvip", hmode, NULL, NULL, rmw_hvip },
     [CSR_HIP] = { "hip", hmode, NULL, NULL, rmw_hip },
-    [CSR_HIE] = { "hie", hmode, read_hie, write_hie },
+    [CSR_HIE] = { "hie", hmode, NULL, NULL, rmw_hie },
     [CSR_HCOUNTEREN] = { "hcounteren", hmode, read_hcounteren, write_hcounteren },
     [CSR_HGEIE] = { "hgeie", hmode, read_hgeie, write_hgeie },
     [CSR_HTVAL] = { "htval", hmode, read_htval, write_htval },
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {

     [CSR_VSSTATUS] = { "vsstatus", hmode, read_vsstatus, write_vsstatus },
     [CSR_VSIP] = { "vsip", hmode, NULL, NULL, rmw_vsip },
-    [CSR_VSIE] = { "vsie", hmode, read_vsie, write_vsie },
+    [CSR_VSIE] = { "vsie", hmode, NULL, NULL, rmw_vsie },
     [CSR_VSTVEC] = { "vstvec", hmode, read_vstvec, write_vstvec },
     [CSR_VSSCRATCH] = { "vsscratch", hmode, read_vsscratch, write_vsscratch },
     [CSR_VSEPC] = { "vsepc", hmode, read_vsepc, write_vsepc },
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_MTVAL2] = { "mtval2", hmode, read_mtval2, write_mtval2 },
     [CSR_MTINST] = { "mtinst", hmode, read_mtinst, write_mtinst },

+    /* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */
+    [CSR_HIDELEGH] = { "hidelegh", aia_hmode32, NULL, NULL, rmw_hidelegh },
+    [CSR_HVIPH] = { "hviph", aia_hmode32, NULL, NULL, rmw_hviph },
+    [CSR_VSIEH] = { "vsieh", aia_hmode32, NULL, NULL, rmw_vsieh },
+    [CSR_VSIPH] = { "vsiph", aia_hmode32, NULL, NULL, rmw_vsiph },
+
     /* Physical Memory Protection */
     [CSR_MSECCFG] = { "mseccfg", epmp, read_mseccfg, write_mseccfg },
     [CSR_PMPCFG0] = { "pmpcfg0", pmp, read_pmpcfg, write_pmpcfg },
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/machine.c
+++ b/target/riscv/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_hyper = {
     .fields = (VMStateField[]) {
         VMSTATE_UINTTL(env.hstatus, RISCVCPU),
         VMSTATE_UINTTL(env.hedeleg, RISCVCPU),
-        VMSTATE_UINTTL(env.hideleg, RISCVCPU),
+        VMSTATE_UINT64(env.hideleg, RISCVCPU),
         VMSTATE_UINTTL(env.hcounteren, RISCVCPU),
         VMSTATE_UINTTL(env.htval, RISCVCPU),
         VMSTATE_UINTTL(env.htinst, RISCVCPU),
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_riscv_cpu = {
         VMSTATE_UINTTL(env.resetvec, RISCVCPU),
         VMSTATE_UINTTL(env.mhartid, RISCVCPU),
         VMSTATE_UINT64(env.mstatus, RISCVCPU),
-        VMSTATE_UINTTL(env.mip, RISCVCPU),
-        VMSTATE_UINT32(env.miclaim, RISCVCPU),
-        VMSTATE_UINTTL(env.mie, RISCVCPU),
-        VMSTATE_UINTTL(env.mideleg, RISCVCPU),
+        VMSTATE_UINT64(env.mip, RISCVCPU),
+        VMSTATE_UINT64(env.miclaim, RISCVCPU),
+        VMSTATE_UINT64(env.mie, RISCVCPU),
+        VMSTATE_UINT64(env.mideleg, RISCVCPU),
         VMSTATE_UINTTL(env.satp, RISCVCPU),
         VMSTATE_UINTTL(env.stval, RISCVCPU),
         VMSTATE_UINTTL(env.medeleg, RISCVCPU),
--
2.34.1

From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>

Move the checks out of `do_opiv{v,x,i}_gvec{,_shift}` functions
and into the corresponding macros. This enables the functions to be
reused in subsequent commits without check duplication.

Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
Message-ID: <20230711165917.2629866-6-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvv.c.inc | 28 +++++++++++--------------
 1 file changed, 12 insertions(+), 16 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -XXX,XX +XXX,XX @@ do_opivv_gvec(DisasContext *s, arg_rmrr *a, GVecGen3Fn *gvec_fn,
               gen_helper_gvec_4_ptr *fn)
 {
     TCGLabel *over = gen_new_label();
-    if (!opivv_check(s, a)) {
-        return false;
-    }

     tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);

@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
         gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
         gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
     }; \
+    if (!opivv_check(s, a)) { \
+        return false; \
+    } \
     return do_opivv_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
 }

@@ -XXX,XX +XXX,XX @@ static inline bool
 do_opivx_gvec(DisasContext *s, arg_rmrr *a, GVecGen2sFn *gvec_fn,
               gen_helper_opivx *fn)
 {
-    if (!opivx_check(s, a)) {
-        return false;
-    }
-
     if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
         TCGv_i64 src1 = tcg_temp_new_i64();

@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
         gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
         gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
     }; \
+    if (!opivx_check(s, a)) { \
+        return false; \
+    } \
     return do_opivx_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
 }

@@ -XXX,XX +XXX,XX @@ static inline bool
 do_opivi_gvec(DisasContext *s, arg_rmrr *a, GVecGen2iFn *gvec_fn,
               gen_helper_opivx *fn, imm_mode_t imm_mode)
 {
-    if (!opivx_check(s, a)) {
-        return false;
-    }
-
     if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
         gvec_fn(s->sew, vreg_ofs(s, a->rd), vreg_ofs(s, a->rs2),
                 extract_imm(s, a->rs1, imm_mode), MAXSZ(s), MAXSZ(s));
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
         gen_helper_##OPIVX##_b, gen_helper_##OPIVX##_h, \
         gen_helper_##OPIVX##_w, gen_helper_##OPIVX##_d, \
     }; \
+    if (!opivx_check(s, a)) { \
+        return false; \
+    } \
     return do_opivi_gvec(s, a, tcg_gen_gvec_##SUF, \
                          fns[s->sew], IMM_MODE); \
 }

@@ -XXX,XX +XXX,XX @@ static inline bool
 do_opivx_gvec_shift(DisasContext *s, arg_rmrr *a, GVecGen2sFn32 *gvec_fn,
                     gen_helper_opivx *fn)
 {
-    if (!opivx_check(s, a)) {
-        return false;
-    }
-
     if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
         TCGv_i32 src1 = tcg_temp_new_i32();

@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
         gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
         gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
     }; \
-    \
+    if (!opivx_check(s, a)) { \
+        return false; \
+    } \
     return do_opivx_gvec_shift(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
 }

--
2.41.0
New patch
From: Dickon Hood <dickon.hood@codethink.co.uk>

Zvbb (implemented in a later commit) has a widening instruction, which
requires an extra check on the enabled extensions. Refactor
GEN_OPIVX_WIDEN_TRANS() to take a check function to avoid reimplementing
it.

Signed-off-by: Dickon Hood <dickon.hood@codethink.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
Message-ID: <20230711165917.2629866-7-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvv.c.inc | 52 +++++++++++--------------
 1 file changed, 23 insertions(+), 29 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -XXX,XX +XXX,XX @@ static bool opivx_widen_check(DisasContext *s, arg_rmrr *a)
 vext_check_ds(s, a->rd, a->rs2, a->vm);
 }

-static bool do_opivx_widen(DisasContext *s, arg_rmrr *a,
- gen_helper_opivx *fn)
-{
- if (opivx_widen_check(s, a)) {
- return opivx_trans(a->rd, a->rs1, a->rs2, a->vm, fn, s);
- }
- return false;
-}
-
-#define GEN_OPIVX_WIDEN_TRANS(NAME) \
-static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
-{ \
- static gen_helper_opivx * const fns[3] = { \
- gen_helper_##NAME##_b, \
- gen_helper_##NAME##_h, \
- gen_helper_##NAME##_w \
- }; \
- return do_opivx_widen(s, a, fns[s->sew]); \
+#define GEN_OPIVX_WIDEN_TRANS(NAME, CHECK) \
+static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+{ \
+ if (CHECK(s, a)) { \
+ static gen_helper_opivx * const fns[3] = { \
+ gen_helper_##NAME##_b, \
+ gen_helper_##NAME##_h, \
+ gen_helper_##NAME##_w \
+ }; \
+ return opivx_trans(a->rd, a->rs1, a->rs2, a->vm, fns[s->sew], s); \
+ } \
+ return false; \
 }

-GEN_OPIVX_WIDEN_TRANS(vwaddu_vx)
-GEN_OPIVX_WIDEN_TRANS(vwadd_vx)
-GEN_OPIVX_WIDEN_TRANS(vwsubu_vx)
-GEN_OPIVX_WIDEN_TRANS(vwsub_vx)
+GEN_OPIVX_WIDEN_TRANS(vwaddu_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwadd_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwsubu_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwsub_vx, opivx_widen_check)

 /* WIDEN OPIVV with WIDEN */
 static bool opiwv_widen_check(DisasContext *s, arg_rmrr *a)
@@ -XXX,XX +XXX,XX @@ GEN_OPIVX_TRANS(vrem_vx, opivx_check)
 GEN_OPIVV_WIDEN_TRANS(vwmul_vv, opivv_widen_check)
 GEN_OPIVV_WIDEN_TRANS(vwmulu_vv, opivv_widen_check)
 GEN_OPIVV_WIDEN_TRANS(vwmulsu_vv, opivv_widen_check)
-GEN_OPIVX_WIDEN_TRANS(vwmul_vx)
-GEN_OPIVX_WIDEN_TRANS(vwmulu_vx)
-GEN_OPIVX_WIDEN_TRANS(vwmulsu_vx)
+GEN_OPIVX_WIDEN_TRANS(vwmul_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwmulu_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwmulsu_vx, opivx_widen_check)

 /* Vector Single-Width Integer Multiply-Add Instructions */
 GEN_OPIVV_TRANS(vmacc_vv, opivv_check)
@@ -XXX,XX +XXX,XX @@ GEN_OPIVX_TRANS(vnmsub_vx, opivx_check)
 GEN_OPIVV_WIDEN_TRANS(vwmaccu_vv, opivv_widen_check)
 GEN_OPIVV_WIDEN_TRANS(vwmacc_vv, opivv_widen_check)
 GEN_OPIVV_WIDEN_TRANS(vwmaccsu_vv, opivv_widen_check)
-GEN_OPIVX_WIDEN_TRANS(vwmaccu_vx)
-GEN_OPIVX_WIDEN_TRANS(vwmacc_vx)
-GEN_OPIVX_WIDEN_TRANS(vwmaccsu_vx)
-GEN_OPIVX_WIDEN_TRANS(vwmaccus_vx)
+GEN_OPIVX_WIDEN_TRANS(vwmaccu_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwmacc_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwmaccsu_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwmaccus_vx, opivx_widen_check)

 /* Vector Integer Merge and Move Instructions */
 static bool trans_vmv_v_v(DisasContext *s, arg_vmv_v_v *a)
--
2.41.0
New patch
From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>

Move some macros out of `vector_helper` and into `vector_internals`.
This ensures they can be used by both vector and vector-crypto helpers
(the latter implemented in subsequent commits).

Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
Message-ID: <20230711165917.2629866-8-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/vector_internals.h | 46 +++++++++++++++++++++++++++++++++
 target/riscv/vector_helper.c | 42 ------------------------------
 2 files changed, 46 insertions(+), 42 deletions(-)

diff --git a/target/riscv/vector_internals.h b/target/riscv/vector_internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/vector_internals.h
+++ b/target/riscv/vector_internals.h
@@ -XXX,XX +XXX,XX @@ void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
 /* expand macro args before macro */
 #define RVVCALL(macro, ...) macro(__VA_ARGS__)

+/* (TD, T2, TX2) */
+#define OP_UU_B uint8_t, uint8_t, uint8_t
+#define OP_UU_H uint16_t, uint16_t, uint16_t
+#define OP_UU_W uint32_t, uint32_t, uint32_t
+#define OP_UU_D uint64_t, uint64_t, uint64_t
+
 /* (TD, T1, T2, TX1, TX2) */
 #define OP_UUU_B uint8_t, uint8_t, uint8_t, uint8_t, uint8_t
 #define OP_UUU_H uint16_t, uint16_t, uint16_t, uint16_t, uint16_t
 #define OP_UUU_W uint32_t, uint32_t, uint32_t, uint32_t, uint32_t
 #define OP_UUU_D uint64_t, uint64_t, uint64_t, uint64_t, uint64_t

+#define OPIVV1(NAME, TD, T2, TX2, HD, HS2, OP) \
+static void do_##NAME(void *vd, void *vs2, int i) \
+{ \
+ TX2 s2 = *((T2 *)vs2 + HS2(i)); \
+ *((TD *)vd + HD(i)) = OP(s2); \
+}
+
+#define GEN_VEXT_V(NAME, ESZ) \
+void HELPER(NAME)(void *vd, void *v0, void *vs2, \
+ CPURISCVState *env, uint32_t desc) \
+{ \
+ uint32_t vm = vext_vm(desc); \
+ uint32_t vl = env->vl; \
+ uint32_t total_elems = \
+ vext_get_total_elems(env, desc, ESZ); \
+ uint32_t vta = vext_vta(desc); \
+ uint32_t vma = vext_vma(desc); \
+ uint32_t i; \
+ \
+ for (i = env->vstart; i < vl; i++) { \
+ if (!vm && !vext_elem_mask(v0, i)) { \
+ /* set masked-off elements to 1s */ \
+ vext_set_elems_1s(vd, vma, i * ESZ, \
+ (i + 1) * ESZ); \
+ continue; \
+ } \
+ do_##NAME(vd, vs2, i); \
+ } \
+ env->vstart = 0; \
+ /* set tail elements to 1s */ \
+ vext_set_elems_1s(vd, vta, vl * ESZ, \
+ total_elems * ESZ); \
+}
+
 /* operation of two vector elements */
 typedef void opivv2_fn(void *vd, void *vs1, void *vs2, int i);

@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \
 do_##NAME, ESZ); \
 }

+/* Three of the widening shortening macros: */
+/* (TD, T1, T2, TX1, TX2) */
+#define WOP_UUU_B uint16_t, uint8_t, uint8_t, uint16_t, uint16_t
+#define WOP_UUU_H uint32_t, uint16_t, uint16_t, uint32_t, uint32_t
+#define WOP_UUU_W uint64_t, uint32_t, uint32_t, uint64_t, uint64_t
+
 #endif /* TARGET_RISCV_VECTOR_INTERNALS_H */
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b)
 #define OP_SUS_H int16_t, uint16_t, int16_t, uint16_t, int16_t
 #define OP_SUS_W int32_t, uint32_t, int32_t, uint32_t, int32_t
 #define OP_SUS_D int64_t, uint64_t, int64_t, uint64_t, int64_t
-#define WOP_UUU_B uint16_t, uint8_t, uint8_t, uint16_t, uint16_t
-#define WOP_UUU_H uint32_t, uint16_t, uint16_t, uint32_t, uint32_t
-#define WOP_UUU_W uint64_t, uint32_t, uint32_t, uint64_t, uint64_t
 #define WOP_SSS_B int16_t, int8_t, int8_t, int16_t, int16_t
 #define WOP_SSS_H int32_t, int16_t, int16_t, int32_t, int32_t
 #define WOP_SSS_W int64_t, int32_t, int32_t, int64_t, int64_t
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_VF(vfwnmsac_vf_h, 4)
 GEN_VEXT_VF(vfwnmsac_vf_w, 8)

 /* Vector Floating-Point Square-Root Instruction */
-/* (TD, T2, TX2) */
-#define OP_UU_H uint16_t, uint16_t, uint16_t
-#define OP_UU_W uint32_t, uint32_t, uint32_t
-#define OP_UU_D uint64_t, uint64_t, uint64_t
-
 #define OPFVV1(NAME, TD, T2, TX2, HD, HS2, OP) \
 static void do_##NAME(void *vd, void *vs2, int i, \
 CPURISCVState *env) \
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_CMP_VF(vmfge_vf_w, uint32_t, H4, vmfge32)
 GEN_VEXT_CMP_VF(vmfge_vf_d, uint64_t, H8, vmfge64)

 /* Vector Floating-Point Classify Instruction */
-#define OPIVV1(NAME, TD, T2, TX2, HD, HS2, OP) \
-static void do_##NAME(void *vd, void *vs2, int i) \
-{ \
- TX2 s2 = *((T2 *)vs2 + HS2(i)); \
- *((TD *)vd + HD(i)) = OP(s2); \
-}
-
-#define GEN_VEXT_V(NAME, ESZ) \
-void HELPER(NAME)(void *vd, void *v0, void *vs2, \
- CPURISCVState *env, uint32_t desc) \
-{ \
- uint32_t vm = vext_vm(desc); \
- uint32_t vl = env->vl; \
- uint32_t total_elems = \
- vext_get_total_elems(env, desc, ESZ); \
- uint32_t vta = vext_vta(desc); \
- uint32_t vma = vext_vma(desc); \
- uint32_t i; \
- \
- for (i = env->vstart; i < vl; i++) { \
- if (!vm && !vext_elem_mask(v0, i)) { \
- /* set masked-off elements to 1s */ \
- vext_set_elems_1s(vd, vma, i * ESZ, \
- (i + 1) * ESZ); \
- continue; \
- } \
- do_##NAME(vd, vs2, i); \
- } \
- env->vstart = 0; \
- /* set tail elements to 1s */ \
- vext_set_elems_1s(vd, vta, vl * ESZ, \
- total_elems * ESZ); \
-}
-
 target_ulong fclass_h(uint64_t frs1)
 {
 float16 f = frs1;
--
2.41.0
New patch

From: Dickon Hood <dickon.hood@codethink.co.uk>

This commit adds support for the Zvbb vector-crypto extension, which
consists of the following instructions:

* vrol.[vv,vx]
* vror.[vv,vx,vi]
* vbrev8.v
* vrev8.v
* vandn.[vv,vx]
* vbrev.v
* vclz.v
* vctz.v
* vcpop.v
* vwsll.[vv,vx,vi]

Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.

Co-authored-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Co-authored-by: William Salmon <will.salmon@codethink.co.uk>
Co-authored-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
[max.chou@sifive.com: Fix imm mode of vror.vi]
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Signed-off-by: William Salmon <will.salmon@codethink.co.uk>
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
Signed-off-by: Dickon Hood <dickon.hood@codethink.co.uk>
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
[max.chou@sifive.com: Exposed x-zvbb property]
Message-ID: <20230711165917.2629866-9-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_cfg.h | 1 +
 target/riscv/helper.h | 62 +++++++++
 target/riscv/insn32.decode | 20 +++
 target/riscv/cpu.c | 12 ++
 target/riscv/vcrypto_helper.c | 138 +++++++++++++++++++
 target/riscv/insn_trans/trans_rvvk.c.inc | 164 +++++++++++++++++++++++
 6 files changed, 397 insertions(+)

diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
 bool ext_zve32f;
 bool ext_zve64f;
 bool ext_zve64d;
+ bool ext_zvbb;
 bool ext_zvbc;
 bool ext_zmmul;
 bool ext_zvfbfmin;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_6(vclmul_vv, void, ptr, ptr, ptr, ptr, env, i32)
 DEF_HELPER_6(vclmul_vx, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vclmulh_vv, void, ptr, ptr, ptr, ptr, env, i32)
 DEF_HELPER_6(vclmulh_vx, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vror_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vror_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vror_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vror_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+
+DEF_HELPER_6(vror_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vror_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vror_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vror_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vrol_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vrol_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vrol_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vrol_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+
+DEF_HELPER_6(vrol_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vrol_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vrol_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vrol_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_5(vrev8_v_b, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vrev8_v_h, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vrev8_v_w, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vrev8_v_d, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vbrev8_v_b, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vbrev8_v_h, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vbrev8_v_w, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vbrev8_v_d, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vbrev_v_b, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vbrev_v_h, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vbrev_v_w, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vbrev_v_d, void, ptr, ptr, ptr, env, i32)
+
+DEF_HELPER_5(vclz_v_b, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vclz_v_h, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vclz_v_w, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vclz_v_d, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vctz_v_b, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vctz_v_h, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vctz_v_w, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vctz_v_d, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vcpop_v_b, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vcpop_v_h, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vcpop_v_w, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vcpop_v_d, void, ptr, ptr, ptr, env, i32)
+
+DEF_HELPER_6(vwsll_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwsll_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwsll_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwsll_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwsll_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwsll_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vandn_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vandn_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vandn_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vandn_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vandn_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vandn_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vandn_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vandn_vx_d, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -XXX,XX +XXX,XX @@
 %imm_u 12:s20 !function=ex_shift_12
 %imm_bs 30:2 !function=ex_shift_3
 %imm_rnum 20:4
+%imm_z6 26:1 15:5

 # Argument sets:
 &empty
@@ -XXX,XX +XXX,XX @@
 @r_vm ...... vm:1 ..... ..... ... ..... ....... &rmrr %rs2 %rs1 %rd
 @r_vm_1 ...... . ..... ..... ... ..... ....... &rmrr vm=1 %rs2 %rs1 %rd
 @r_vm_0 ...... . ..... ..... ... ..... ....... &rmrr vm=0 %rs2 %rs1 %rd
+@r2_zimm6 ..... . vm:1 ..... ..... ... ..... ....... &rmrr %rs2 rs1=%imm_z6 %rd
 @r2_zimm11 . zimm:11 ..... ... ..... ....... %rs1 %rd
 @r2_zimm10 .. zimm:10 ..... ... ..... ....... %rs1 %rd
 @r2_s ....... ..... ..... ... ..... ....... %rs2 %rs1
@@ -XXX,XX +XXX,XX @@ vclmul_vv 001100 . ..... ..... 010 ..... 1010111 @r_vm
 vclmul_vx 001100 . ..... ..... 110 ..... 1010111 @r_vm
 vclmulh_vv 001101 . ..... ..... 010 ..... 1010111 @r_vm
 vclmulh_vx 001101 . ..... ..... 110 ..... 1010111 @r_vm
+
+# *** Zvbb vector crypto extension ***
+vrol_vv 010101 . ..... ..... 000 ..... 1010111 @r_vm
+vrol_vx 010101 . ..... ..... 100 ..... 1010111 @r_vm
+vror_vv 010100 . ..... ..... 000 ..... 1010111 @r_vm
+vror_vx 010100 . ..... ..... 100 ..... 1010111 @r_vm
+vror_vi 01010. . ..... ..... 011 ..... 1010111 @r2_zimm6
+vbrev8_v 010010 . ..... 01000 010 ..... 1010111 @r2_vm
+vrev8_v 010010 . ..... 01001 010 ..... 1010111 @r2_vm
+vandn_vv 000001 . ..... ..... 000 ..... 1010111 @r_vm
+vandn_vx 000001 . ..... ..... 100 ..... 1010111 @r_vm
+vbrev_v 010010 . ..... 01010 010 ..... 1010111 @r2_vm
+vclz_v 010010 . ..... 01100 010 ..... 1010111 @r2_vm
+vctz_v 010010 . ..... 01101 010 ..... 1010111 @r2_vm
+vcpop_v 010010 . ..... 01110 010 ..... 1010111 @r2_vm
+vwsll_vv 110101 . ..... ..... 000 ..... 1010111 @r_vm
+vwsll_vx 110101 . ..... ..... 100 ..... 1010111 @r_vm
+vwsll_vi 110101 . ..... ..... 011 ..... 1010111 @r_vm
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
 ISA_EXT_DATA_ENTRY(zksed, PRIV_VERSION_1_12_0, ext_zksed),
 ISA_EXT_DATA_ENTRY(zksh, PRIV_VERSION_1_12_0, ext_zksh),
 ISA_EXT_DATA_ENTRY(zkt, PRIV_VERSION_1_12_0, ext_zkt),
+ ISA_EXT_DATA_ENTRY(zvbb, PRIV_VERSION_1_12_0, ext_zvbb),
 ISA_EXT_DATA_ENTRY(zvbc, PRIV_VERSION_1_12_0, ext_zvbc),
 ISA_EXT_DATA_ENTRY(zve32f, PRIV_VERSION_1_10_0, ext_zve32f),
 ISA_EXT_DATA_ENTRY(zve64f, PRIV_VERSION_1_10_0, ext_zve64f),
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
 return;
 }

+ /*
+ * In principle Zve*x would also suffice here, were they supported
+ * in qemu
+ */
+ if (cpu->cfg.ext_zvbb && !cpu->cfg.ext_zve32f) {
+ error_setg(errp,
+ "Vector crypto extensions require V or Zve* extensions");
+ return;
+ }
+
 if (cpu->cfg.ext_zvbc && !cpu->cfg.ext_zve64f) {
 error_setg(errp, "Zvbc extension requires V or Zve64{f,d} extensions");
 return;
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
 DEFINE_PROP_BOOL("x-zvfbfwma", RISCVCPU, cfg.ext_zvfbfwma, false),

 /* Vector cryptography extensions */
+ DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
 DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),

 DEFINE_PROP_END_OF_LIST(),
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/vcrypto_helper.c
+++ b/target/riscv/vcrypto_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/osdep.h"
 #include "qemu/host-utils.h"
 #include "qemu/bitops.h"
+#include "qemu/bswap.h"
 #include "cpu.h"
 #include "exec/memop.h"
 #include "exec/exec-all.h"
@@ -XXX,XX +XXX,XX @@ RVVCALL(OPIVV2, vclmulh_vv, OP_UUU_D, H8, H8, H8, clmulh64)
 GEN_VEXT_VV(vclmulh_vv, 8)
 RVVCALL(OPIVX2, vclmulh_vx, OP_UUU_D, H8, H8, clmulh64)
 GEN_VEXT_VX(vclmulh_vx, 8)
+
+RVVCALL(OPIVV2, vror_vv_b, OP_UUU_B, H1, H1, H1, ror8)
+RVVCALL(OPIVV2, vror_vv_h, OP_UUU_H, H2, H2, H2, ror16)
+RVVCALL(OPIVV2, vror_vv_w, OP_UUU_W, H4, H4, H4, ror32)
+RVVCALL(OPIVV2, vror_vv_d, OP_UUU_D, H8, H8, H8, ror64)
+GEN_VEXT_VV(vror_vv_b, 1)
+GEN_VEXT_VV(vror_vv_h, 2)
+GEN_VEXT_VV(vror_vv_w, 4)
+GEN_VEXT_VV(vror_vv_d, 8)
+
+RVVCALL(OPIVX2, vror_vx_b, OP_UUU_B, H1, H1, ror8)
+RVVCALL(OPIVX2, vror_vx_h, OP_UUU_H, H2, H2, ror16)
+RVVCALL(OPIVX2, vror_vx_w, OP_UUU_W, H4, H4, ror32)
+RVVCALL(OPIVX2, vror_vx_d, OP_UUU_D, H8, H8, ror64)
+GEN_VEXT_VX(vror_vx_b, 1)
+GEN_VEXT_VX(vror_vx_h, 2)
+GEN_VEXT_VX(vror_vx_w, 4)
+GEN_VEXT_VX(vror_vx_d, 8)
+
+RVVCALL(OPIVV2, vrol_vv_b, OP_UUU_B, H1, H1, H1, rol8)
+RVVCALL(OPIVV2, vrol_vv_h, OP_UUU_H, H2, H2, H2, rol16)
+RVVCALL(OPIVV2, vrol_vv_w, OP_UUU_W, H4, H4, H4, rol32)
+RVVCALL(OPIVV2, vrol_vv_d, OP_UUU_D, H8, H8, H8, rol64)
+GEN_VEXT_VV(vrol_vv_b, 1)
+GEN_VEXT_VV(vrol_vv_h, 2)
+GEN_VEXT_VV(vrol_vv_w, 4)
+GEN_VEXT_VV(vrol_vv_d, 8)
+
+RVVCALL(OPIVX2, vrol_vx_b, OP_UUU_B, H1, H1, rol8)
+RVVCALL(OPIVX2, vrol_vx_h, OP_UUU_H, H2, H2, rol16)
+RVVCALL(OPIVX2, vrol_vx_w, OP_UUU_W, H4, H4, rol32)
+RVVCALL(OPIVX2, vrol_vx_d, OP_UUU_D, H8, H8, rol64)
+GEN_VEXT_VX(vrol_vx_b, 1)
+GEN_VEXT_VX(vrol_vx_h, 2)
+GEN_VEXT_VX(vrol_vx_w, 4)
+GEN_VEXT_VX(vrol_vx_d, 8)
+
+static uint64_t brev8(uint64_t val)
+{
+ val = ((val & 0x5555555555555555ull) << 1) |
+ ((val & 0xAAAAAAAAAAAAAAAAull) >> 1);
+ val = ((val & 0x3333333333333333ull) << 2) |
+ ((val & 0xCCCCCCCCCCCCCCCCull) >> 2);
+ val = ((val & 0x0F0F0F0F0F0F0F0Full) << 4) |
+ ((val & 0xF0F0F0F0F0F0F0F0ull) >> 4);
+
+ return val;
+}
+
+RVVCALL(OPIVV1, vbrev8_v_b, OP_UU_B, H1, H1, brev8)
+RVVCALL(OPIVV1, vbrev8_v_h, OP_UU_H, H2, H2, brev8)
+RVVCALL(OPIVV1, vbrev8_v_w, OP_UU_W, H4, H4, brev8)
+RVVCALL(OPIVV1, vbrev8_v_d, OP_UU_D, H8, H8, brev8)
+GEN_VEXT_V(vbrev8_v_b, 1)
+GEN_VEXT_V(vbrev8_v_h, 2)
+GEN_VEXT_V(vbrev8_v_w, 4)
+GEN_VEXT_V(vbrev8_v_d, 8)
+
+#define DO_IDENTITY(a) (a)
+RVVCALL(OPIVV1, vrev8_v_b, OP_UU_B, H1, H1, DO_IDENTITY)
+RVVCALL(OPIVV1, vrev8_v_h, OP_UU_H, H2, H2, bswap16)
+RVVCALL(OPIVV1, vrev8_v_w, OP_UU_W, H4, H4, bswap32)
+RVVCALL(OPIVV1, vrev8_v_d, OP_UU_D, H8, H8, bswap64)
+GEN_VEXT_V(vrev8_v_b, 1)
+GEN_VEXT_V(vrev8_v_h, 2)
+GEN_VEXT_V(vrev8_v_w, 4)
+GEN_VEXT_V(vrev8_v_d, 8)
+
+#define DO_ANDN(a, b) ((a) & ~(b))
+RVVCALL(OPIVV2, vandn_vv_b, OP_UUU_B, H1, H1, H1, DO_ANDN)
+RVVCALL(OPIVV2, vandn_vv_h, OP_UUU_H, H2, H2, H2, DO_ANDN)
+RVVCALL(OPIVV2, vandn_vv_w, OP_UUU_W, H4, H4, H4, DO_ANDN)
+RVVCALL(OPIVV2, vandn_vv_d, OP_UUU_D, H8, H8, H8, DO_ANDN)
+GEN_VEXT_VV(vandn_vv_b, 1)
+GEN_VEXT_VV(vandn_vv_h, 2)
+GEN_VEXT_VV(vandn_vv_w, 4)
+GEN_VEXT_VV(vandn_vv_d, 8)
+
+RVVCALL(OPIVX2, vandn_vx_b, OP_UUU_B, H1, H1, DO_ANDN)
+RVVCALL(OPIVX2, vandn_vx_h, OP_UUU_H, H2, H2, DO_ANDN)
+RVVCALL(OPIVX2, vandn_vx_w, OP_UUU_W, H4, H4, DO_ANDN)
+RVVCALL(OPIVX2, vandn_vx_d, OP_UUU_D, H8, H8, DO_ANDN)
+GEN_VEXT_VX(vandn_vx_b, 1)
+GEN_VEXT_VX(vandn_vx_h, 2)
+GEN_VEXT_VX(vandn_vx_w, 4)
+GEN_VEXT_VX(vandn_vx_d, 8)
+
+RVVCALL(OPIVV1, vbrev_v_b, OP_UU_B, H1, H1, revbit8)
+RVVCALL(OPIVV1, vbrev_v_h, OP_UU_H, H2, H2, revbit16)
+RVVCALL(OPIVV1, vbrev_v_w, OP_UU_W, H4, H4, revbit32)
+RVVCALL(OPIVV1, vbrev_v_d, OP_UU_D, H8, H8, revbit64)
+GEN_VEXT_V(vbrev_v_b, 1)
+GEN_VEXT_V(vbrev_v_h, 2)
+GEN_VEXT_V(vbrev_v_w, 4)
+GEN_VEXT_V(vbrev_v_d, 8)
+
+RVVCALL(OPIVV1, vclz_v_b, OP_UU_B, H1, H1, clz8)
+RVVCALL(OPIVV1, vclz_v_h, OP_UU_H, H2, H2, clz16)
+RVVCALL(OPIVV1, vclz_v_w, OP_UU_W, H4, H4, clz32)
+RVVCALL(OPIVV1, vclz_v_d, OP_UU_D, H8, H8, clz64)
+GEN_VEXT_V(vclz_v_b, 1)
+GEN_VEXT_V(vclz_v_h, 2)
+GEN_VEXT_V(vclz_v_w, 4)
+GEN_VEXT_V(vclz_v_d, 8)
+
+RVVCALL(OPIVV1, vctz_v_b, OP_UU_B, H1, H1, ctz8)
+RVVCALL(OPIVV1, vctz_v_h, OP_UU_H, H2, H2, ctz16)
+RVVCALL(OPIVV1, vctz_v_w, OP_UU_W, H4, H4, ctz32)
+RVVCALL(OPIVV1, vctz_v_d, OP_UU_D, H8, H8, ctz64)
+GEN_VEXT_V(vctz_v_b, 1)
+GEN_VEXT_V(vctz_v_h, 2)
+GEN_VEXT_V(vctz_v_w, 4)
+GEN_VEXT_V(vctz_v_d, 8)
+
+RVVCALL(OPIVV1, vcpop_v_b, OP_UU_B, H1, H1, ctpop8)
+RVVCALL(OPIVV1, vcpop_v_h, OP_UU_H, H2, H2, ctpop16)
+RVVCALL(OPIVV1, vcpop_v_w, OP_UU_W, H4, H4, ctpop32)
+RVVCALL(OPIVV1, vcpop_v_d, OP_UU_D, H8, H8, ctpop64)
+GEN_VEXT_V(vcpop_v_b, 1)
+GEN_VEXT_V(vcpop_v_h, 2)
+GEN_VEXT_V(vcpop_v_w, 4)
+GEN_VEXT_V(vcpop_v_d, 8)
+
+#define DO_SLL(N, M) (N << (M & (sizeof(N) * 8 - 1)))
+RVVCALL(OPIVV2, vwsll_vv_b, WOP_UUU_B, H2, H1, H1, DO_SLL)
+RVVCALL(OPIVV2, vwsll_vv_h, WOP_UUU_H, H4, H2, H2, DO_SLL)
+RVVCALL(OPIVV2, vwsll_vv_w, WOP_UUU_W, H8, H4, H4, DO_SLL)
+GEN_VEXT_VV(vwsll_vv_b, 2)
+GEN_VEXT_VV(vwsll_vv_h, 4)
+GEN_VEXT_VV(vwsll_vv_w, 8)
+
+RVVCALL(OPIVX2, vwsll_vx_b, WOP_UUU_B, H2, H1, DO_SLL)
+RVVCALL(OPIVX2, vwsll_vx_h, WOP_UUU_H, H4, H2, DO_SLL)
+RVVCALL(OPIVX2, vwsll_vx_w, WOP_UUU_W, H8, H4, DO_SLL)
+GEN_VEXT_VX(vwsll_vx_b, 2)
+GEN_VEXT_VX(vwsll_vx_h, 4)
+GEN_VEXT_VX(vwsll_vx_w, 8)
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -XXX,XX +XXX,XX @@ static bool vclmul_vx_check(DisasContext *s, arg_rmrr *a)

 GEN_VX_MASKED_TRANS(vclmul_vx, vclmul_vx_check)
 GEN_VX_MASKED_TRANS(vclmulh_vx, vclmul_vx_check)
+
+/*
+ * Zvbb
+ */
+
+#define GEN_OPIVI_GVEC_TRANS_CHECK(NAME, IMM_MODE, OPIVX, SUF, CHECK) \
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+ { \
+ if (CHECK(s, a)) { \
+ static gen_helper_opivx *const fns[4] = { \
+ gen_helper_##OPIVX##_b, \
+ gen_helper_##OPIVX##_h, \
+ gen_helper_##OPIVX##_w, \
+ gen_helper_##OPIVX##_d, \
+ }; \
+ return do_opivi_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew], \
+ IMM_MODE); \
+ } \
+ return false; \
+ }
+
+#define GEN_OPIVV_GVEC_TRANS_CHECK(NAME, SUF, CHECK) \
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+ { \
+ if (CHECK(s, a)) { \
+ static gen_helper_gvec_4_ptr *const fns[4] = { \
+ gen_helper_##NAME##_b, \
+ gen_helper_##NAME##_h, \
+ gen_helper_##NAME##_w, \
+ gen_helper_##NAME##_d, \
+ }; \
+ return do_opivv_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
+ } \
+ return false; \
+ }
+
+#define GEN_OPIVX_GVEC_SHIFT_TRANS_CHECK(NAME, SUF, CHECK) \
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+ { \
+ if (CHECK(s, a)) { \
+ static gen_helper_opivx *const fns[4] = { \
406
+ gen_helper_##NAME##_b, \
407
+ gen_helper_##NAME##_h, \
408
+ gen_helper_##NAME##_w, \
409
+ gen_helper_##NAME##_d, \
410
+ }; \
411
+ return do_opivx_gvec_shift(s, a, tcg_gen_gvec_##SUF, \
412
+ fns[s->sew]); \
413
+ } \
414
+ return false; \
415
+ }
416
+
417
+static bool zvbb_vv_check(DisasContext *s, arg_rmrr *a)
151
+{
418
+{
152
+ TCGv dest = dest_gpr(ctx, a->rd);
419
+ return opivv_check(s, a) && s->cfg_ptr->ext_zvbb == true;
153
+ TCGv src1 = get_gpr(ctx, a->rs1, EXT_NONE);
154
+ TCGv src2 = get_gpr(ctx, a->rs2, EXT_NONE);
155
+
156
+ tcg_gen_movcond_tl(cond, dest, src2, ctx->zero, src1, ctx->zero);
157
+
158
+ gen_set_gpr(ctx, a->rd, dest);
159
+ return true;
160
+}
420
+}
161
+
421
+
162
+static bool trans_vt_maskc(DisasContext *ctx, arg_r *a)
422
+static bool zvbb_vx_check(DisasContext *s, arg_rmrr *a)
163
+{
423
+{
164
+ return gen_vt_condmask(ctx, a, TCG_COND_NE);
424
+ return opivx_check(s, a) && s->cfg_ptr->ext_zvbb == true;
165
+}
425
+}
166
+
426
+
167
+static bool trans_vt_maskcn(DisasContext *ctx, arg_r *a)
427
+/* vrol.v[vx] */
428
+GEN_OPIVV_GVEC_TRANS_CHECK(vrol_vv, rotlv, zvbb_vv_check)
429
+GEN_OPIVX_GVEC_SHIFT_TRANS_CHECK(vrol_vx, rotls, zvbb_vx_check)
430
+
431
+/* vror.v[vxi] */
432
+GEN_OPIVV_GVEC_TRANS_CHECK(vror_vv, rotrv, zvbb_vv_check)
433
+GEN_OPIVX_GVEC_SHIFT_TRANS_CHECK(vror_vx, rotrs, zvbb_vx_check)
434
+GEN_OPIVI_GVEC_TRANS_CHECK(vror_vi, IMM_TRUNC_SEW, vror_vx, rotri, zvbb_vx_check)
435
+
436
+#define GEN_OPIVX_GVEC_TRANS_CHECK(NAME, SUF, CHECK) \
437
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
438
+ { \
439
+ if (CHECK(s, a)) { \
440
+ static gen_helper_opivx *const fns[4] = { \
441
+ gen_helper_##NAME##_b, \
442
+ gen_helper_##NAME##_h, \
443
+ gen_helper_##NAME##_w, \
444
+ gen_helper_##NAME##_d, \
445
+ }; \
446
+ return do_opivx_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
447
+ } \
448
+ return false; \
449
+ }
450
+
451
+/* vandn.v[vx] */
452
+GEN_OPIVV_GVEC_TRANS_CHECK(vandn_vv, andc, zvbb_vv_check)
453
+GEN_OPIVX_GVEC_TRANS_CHECK(vandn_vx, andcs, zvbb_vx_check)
454
+
455
+#define GEN_OPIV_TRANS(NAME, CHECK) \
456
+ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
457
+ { \
458
+ if (CHECK(s, a)) { \
459
+ uint32_t data = 0; \
460
+ static gen_helper_gvec_3_ptr *const fns[4] = { \
461
+ gen_helper_##NAME##_b, \
462
+ gen_helper_##NAME##_h, \
463
+ gen_helper_##NAME##_w, \
464
+ gen_helper_##NAME##_d, \
465
+ }; \
466
+ TCGLabel *over = gen_new_label(); \
467
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
468
+ \
469
+ data = FIELD_DP32(data, VDATA, VM, a->vm); \
470
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
471
+ data = FIELD_DP32(data, VDATA, VTA, s->vta); \
472
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s); \
473
+ data = FIELD_DP32(data, VDATA, VMA, s->vma); \
474
+ tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
475
+ vreg_ofs(s, a->rs2), cpu_env, \
476
+ s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8, \
477
+ data, fns[s->sew]); \
478
+ mark_vs_dirty(s); \
479
+ gen_set_label(over); \
480
+ return true; \
481
+ } \
482
+ return false; \
483
+ }
484
+
485
+static bool zvbb_opiv_check(DisasContext *s, arg_rmr *a)
168
+{
486
+{
169
+ return gen_vt_condmask(ctx, a, TCG_COND_EQ);
487
+ return s->cfg_ptr->ext_zvbb == true &&
488
+ require_rvv(s) &&
489
+ vext_check_isa_ill(s) &&
490
+ vext_check_ss(s, a->rd, a->rs2, a->vm);
170
+}
491
+}
171
diff --git a/target/riscv/meson.build b/target/riscv/meson.build
492
+
172
index XXXXXXX..XXXXXXX 100644
493
+GEN_OPIV_TRANS(vbrev8_v, zvbb_opiv_check)
173
--- a/target/riscv/meson.build
494
+GEN_OPIV_TRANS(vrev8_v, zvbb_opiv_check)
174
+++ b/target/riscv/meson.build
495
+GEN_OPIV_TRANS(vbrev_v, zvbb_opiv_check)
175
@@ -XXX,XX +XXX,XX @@ dir = meson.current_source_dir()
496
+GEN_OPIV_TRANS(vclz_v, zvbb_opiv_check)
176
gen = [
497
+GEN_OPIV_TRANS(vctz_v, zvbb_opiv_check)
177
decodetree.process('insn16.decode', extra_args: ['--static-decode=decode_insn16', '--insnwidth=16']),
498
+GEN_OPIV_TRANS(vcpop_v, zvbb_opiv_check)
178
decodetree.process('insn32.decode', extra_args: '--static-decode=decode_insn32'),
499
+
179
+ decodetree.process('XVentanaCondOps.decode', extra_args: '--static-decode=decode_XVentanaCodeOps'),
500
+static bool vwsll_vv_check(DisasContext *s, arg_rmrr *a)
180
]
501
+{
181
502
+ return s->cfg_ptr->ext_zvbb && opivv_widen_check(s, a);
182
riscv_ss = ss.source_set()
503
+}
504
+
505
+static bool vwsll_vx_check(DisasContext *s, arg_rmrr *a)
506
+{
507
+ return s->cfg_ptr->ext_zvbb && opivx_widen_check(s, a);
508
+}
509
+
510
+/* OPIVI without GVEC IR */
511
+#define GEN_OPIVI_WIDEN_TRANS(NAME, IMM_MODE, OPIVX, CHECK) \
512
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
513
+ { \
514
+ if (CHECK(s, a)) { \
515
+ static gen_helper_opivx *const fns[3] = { \
516
+ gen_helper_##OPIVX##_b, \
517
+ gen_helper_##OPIVX##_h, \
518
+ gen_helper_##OPIVX##_w, \
519
+ }; \
520
+ return opivi_trans(a->rd, a->rs1, a->rs2, a->vm, fns[s->sew], s, \
521
+ IMM_MODE); \
522
+ } \
523
+ return false; \
524
+ }
525
+
526
+GEN_OPIVV_WIDEN_TRANS(vwsll_vv, vwsll_vv_check)
527
+GEN_OPIVX_WIDEN_TRANS(vwsll_vx, vwsll_vx_check)
528
+GEN_OPIVI_WIDEN_TRANS(vwsll_vi, IMM_ZX, vwsll_vx, vwsll_vx_check)
183
--
529
--
184
2.34.1
530
2.41.0
185
186
diff view generated by jsdifflib
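As a sanity model of the `DO_SLL` widening-shift semantics used by the vwsll helpers in the patch above, here is a minimal Python sketch. The function name and the list-of-integers vector representation are illustrative only, not QEMU code; the key point mirrored from the macro is that the shift amount is masked against the width of the *widened* operand type.

```python
def vwsll(vs2, shamt, sew):
    """Model of a vwsll.vx operation: each SEW-bit element is zero-extended
    to 2*SEW bits, then shifted left; per DO_SLL, the shift amount is
    reduced modulo the widened width (M & (sizeof(N) * 8 - 1))."""
    wide_bits = 2 * sew
    out = []
    for x in vs2:
        x &= (1 << sew) - 1              # source element, zero-extended
        sh = shamt & (wide_bits - 1)     # mask to the widened type's width
        out.append((x << sh) & ((1 << wide_bits) - 1))
    return out
```

For SEW=8, `vwsll([0x80], 1, 8)` yields `[0x100]` (no truncation at the source width), and a shift of 16 is reduced modulo 16 rather than saturating.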
From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>

This commit adds support for the Zvkned vector-crypto extension, which
consists of the following instructions:

* vaesef.[vv,vs]
* vaesdf.[vv,vs]
* vaesdm.[vv,vs]
* vaesz.vs
* vaesem.[vv,vs]
* vaeskf1.vi
* vaeskf2.vi

Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.

Co-authored-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
Co-authored-by: William Salmon <will.salmon@codethink.co.uk>
[max.chou@sifive.com: Replaced vstart checking by TCG op]
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
Signed-off-by: William Salmon <will.salmon@codethink.co.uk>
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
[max.chou@sifive.com: Imported aes-round.h and exposed x-zvkned
property]
[max.chou@sifive.com: Fixed endian issues and replaced the vstart & vl
egs checking by helper function]
[max.chou@sifive.com: Replaced bswap32 calls in aes key expanding]
Message-ID: <20230711165917.2629866-10-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_cfg.h                   |   1 +
 target/riscv/helper.h                    |  14 ++
 target/riscv/insn32.decode               |  14 ++
 target/riscv/cpu.c                       |   4 +-
 target/riscv/vcrypto_helper.c            | 202 +++++++++++++++++++++++
 target/riscv/insn_trans/trans_rvvk.c.inc | 147 +++++++++++++++++
 6 files changed, 381 insertions(+), 1 deletion(-)

diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
     bool ext_zve64d;
     bool ext_zvbb;
     bool ext_zvbc;
+    bool ext_zvkned;
     bool ext_zmmul;
     bool ext_zvfbfmin;
     bool ext_zvfbfwma;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_6(vandn_vx_b, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vandn_vx_h, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vandn_vx_w, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vandn_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_2(egs_check, void, i32, env)
+
+DEF_HELPER_4(vaesef_vv, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesef_vs, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesdf_vv, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesdf_vs, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesem_vv, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesem_vs, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesdm_vv, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesdm_vs, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesz_vs, void, ptr, ptr, env, i32)
+DEF_HELPER_5(vaeskf1_vi, void, ptr, ptr, i32, env, i32)
+DEF_HELPER_5(vaeskf2_vi, void, ptr, ptr, i32, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -XXX,XX +XXX,XX @@
 @r_rm    ....... ..... ..... ... ..... ....... %rs2 %rs1 %rm %rd
 @r2_rm   ....... ..... ..... ... ..... ....... %rs1 %rm %rd
 @r2      ....... ..... ..... ... ..... ....... &r2 %rs1 %rd
+@r2_vm_1 ...... . ..... ..... ... ..... ....... &rmr vm=1 %rs2 %rd
 @r2_nfvm ... ... vm:1 ..... ..... ... ..... ....... &r2nfvm %nf %rs1 %rd
 @r2_vm   ...... vm:1 ..... ..... ... ..... ....... &rmr %rs2 %rd
 @r1_vm   ...... vm:1 ..... ..... ... ..... ....... %rd
@@ -XXX,XX +XXX,XX @@ vcpop_v 010010 . ..... 01110 010 ..... 1010111 @r2_vm
 vwsll_vv 110101 . ..... ..... 000 ..... 1010111 @r_vm
 vwsll_vx 110101 . ..... ..... 100 ..... 1010111 @r_vm
 vwsll_vi 110101 . ..... ..... 011 ..... 1010111 @r_vm
+
+# *** Zvkned vector crypto extension ***
+vaesef_vv  101000 1 ..... 00011 010 ..... 1110111 @r2_vm_1
+vaesef_vs  101001 1 ..... 00011 010 ..... 1110111 @r2_vm_1
+vaesdf_vv  101000 1 ..... 00001 010 ..... 1110111 @r2_vm_1
+vaesdf_vs  101001 1 ..... 00001 010 ..... 1110111 @r2_vm_1
+vaesem_vv  101000 1 ..... 00010 010 ..... 1110111 @r2_vm_1
+vaesem_vs  101001 1 ..... 00010 010 ..... 1110111 @r2_vm_1
+vaesdm_vv  101000 1 ..... 00000 010 ..... 1110111 @r2_vm_1
+vaesdm_vs  101001 1 ..... 00000 010 ..... 1110111 @r2_vm_1
+vaesz_vs   101001 1 ..... 00111 010 ..... 1110111 @r2_vm_1
+vaeskf1_vi 100010 1 ..... ..... 010 ..... 1110111 @r_vm_1
+vaeskf2_vi 101010 1 ..... ..... 010 ..... 1110111 @r_vm_1
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
     ISA_EXT_DATA_ENTRY(zvfbfwma, PRIV_VERSION_1_12_0, ext_zvfbfwma),
     ISA_EXT_DATA_ENTRY(zvfh, PRIV_VERSION_1_12_0, ext_zvfh),
     ISA_EXT_DATA_ENTRY(zvfhmin, PRIV_VERSION_1_12_0, ext_zvfhmin),
+    ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
     ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
     ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
     ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
      * In principle Zve*x would also suffice here, were they supported
      * in qemu
      */
-    if (cpu->cfg.ext_zvbb && !cpu->cfg.ext_zve32f) {
+    if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned) && !cpu->cfg.ext_zve32f) {
         error_setg(errp,
                    "Vector crypto extensions require V or Zve* extensions");
         return;
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
     /* Vector cryptography extensions */
     DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
     DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
+    DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
 
     DEFINE_PROP_END_OF_LIST(),
 };
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/vcrypto_helper.c
+++ b/target/riscv/vcrypto_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/bitops.h"
 #include "qemu/bswap.h"
 #include "cpu.h"
+#include "crypto/aes.h"
+#include "crypto/aes-round.h"
 #include "exec/memop.h"
 #include "exec/exec-all.h"
 #include "exec/helper-proto.h"
@@ -XXX,XX +XXX,XX @@ RVVCALL(OPIVX2, vwsll_vx_w, WOP_UUU_W, H8, H4, DO_SLL)
 GEN_VEXT_VX(vwsll_vx_b, 2)
 GEN_VEXT_VX(vwsll_vx_h, 4)
 GEN_VEXT_VX(vwsll_vx_w, 8)
+
+void HELPER(egs_check)(uint32_t egs, CPURISCVState *env)
+{
+    uint32_t vl = env->vl;
+    uint32_t vstart = env->vstart;
+
+    if (vl % egs != 0 || vstart % egs != 0) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+    }
+}
+
+static inline void xor_round_key(AESState *round_state, AESState *round_key)
+{
+    round_state->v = round_state->v ^ round_key->v;
+}
+
+#define GEN_ZVKNED_HELPER_VV(NAME, ...)                                   \
+    void HELPER(NAME)(void *vd, void *vs2, CPURISCVState *env,            \
+                      uint32_t desc)                                      \
+    {                                                                     \
+        uint32_t vl = env->vl;                                            \
+        uint32_t total_elems = vext_get_total_elems(env, desc, 4);        \
+        uint32_t vta = vext_vta(desc);                                    \
+                                                                          \
+        for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {        \
+            AESState round_key;                                           \
+            round_key.d[0] = *((uint64_t *)vs2 + H8(i * 2 + 0));          \
+            round_key.d[1] = *((uint64_t *)vs2 + H8(i * 2 + 1));          \
+            AESState round_state;                                         \
+            round_state.d[0] = *((uint64_t *)vd + H8(i * 2 + 0));         \
+            round_state.d[1] = *((uint64_t *)vd + H8(i * 2 + 1));         \
+            __VA_ARGS__;                                                  \
+            *((uint64_t *)vd + H8(i * 2 + 0)) = round_state.d[0];         \
+            *((uint64_t *)vd + H8(i * 2 + 1)) = round_state.d[1];         \
+        }                                                                 \
+        env->vstart = 0;                                                  \
+        /* set tail elements to 1s */                                     \
+        vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4);              \
+    }
+
+#define GEN_ZVKNED_HELPER_VS(NAME, ...)                                   \
+    void HELPER(NAME)(void *vd, void *vs2, CPURISCVState *env,            \
+                      uint32_t desc)                                      \
+    {                                                                     \
+        uint32_t vl = env->vl;                                            \
+        uint32_t total_elems = vext_get_total_elems(env, desc, 4);        \
+        uint32_t vta = vext_vta(desc);                                    \
+                                                                          \
+        for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {        \
+            AESState round_key;                                           \
+            round_key.d[0] = *((uint64_t *)vs2 + H8(0));                  \
+            round_key.d[1] = *((uint64_t *)vs2 + H8(1));                  \
+            AESState round_state;                                         \
+            round_state.d[0] = *((uint64_t *)vd + H8(i * 2 + 0));         \
+            round_state.d[1] = *((uint64_t *)vd + H8(i * 2 + 1));         \
+            __VA_ARGS__;                                                  \
+            *((uint64_t *)vd + H8(i * 2 + 0)) = round_state.d[0];         \
+            *((uint64_t *)vd + H8(i * 2 + 1)) = round_state.d[1];         \
+        }                                                                 \
+        env->vstart = 0;                                                  \
+        /* set tail elements to 1s */                                     \
+        vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4);              \
+    }
+
+GEN_ZVKNED_HELPER_VV(vaesef_vv, aesenc_SB_SR_AK(&round_state,
+                                                &round_state,
+                                                &round_key,
+                                                false);)
+GEN_ZVKNED_HELPER_VS(vaesef_vs, aesenc_SB_SR_AK(&round_state,
+                                                &round_state,
+                                                &round_key,
+                                                false);)
+GEN_ZVKNED_HELPER_VV(vaesdf_vv, aesdec_ISB_ISR_AK(&round_state,
+                                                  &round_state,
+                                                  &round_key,
+                                                  false);)
+GEN_ZVKNED_HELPER_VS(vaesdf_vs, aesdec_ISB_ISR_AK(&round_state,
+                                                  &round_state,
+                                                  &round_key,
+                                                  false);)
+GEN_ZVKNED_HELPER_VV(vaesem_vv, aesenc_SB_SR_MC_AK(&round_state,
+                                                   &round_state,
+                                                   &round_key,
+                                                   false);)
+GEN_ZVKNED_HELPER_VS(vaesem_vs, aesenc_SB_SR_MC_AK(&round_state,
+                                                   &round_state,
+                                                   &round_key,
+                                                   false);)
+GEN_ZVKNED_HELPER_VV(vaesdm_vv, aesdec_ISB_ISR_AK_IMC(&round_state,
+                                                      &round_state,
+                                                      &round_key,
+                                                      false);)
+GEN_ZVKNED_HELPER_VS(vaesdm_vs, aesdec_ISB_ISR_AK_IMC(&round_state,
+                                                      &round_state,
+                                                      &round_key,
+                                                      false);)
+GEN_ZVKNED_HELPER_VS(vaesz_vs, xor_round_key(&round_state, &round_key);)
+
+void HELPER(vaeskf1_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
+                        CPURISCVState *env, uint32_t desc)
+{
+    uint32_t *vd = vd_vptr;
+    uint32_t *vs2 = vs2_vptr;
+    uint32_t vl = env->vl;
+    uint32_t total_elems = vext_get_total_elems(env, desc, 4);
+    uint32_t vta = vext_vta(desc);
+
+    uimm &= 0b1111;
+    if (uimm > 10 || uimm == 0) {
+        uimm ^= 0b1000;
+    }
+
+    for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+        uint32_t rk[8], tmp;
+        static const uint32_t rcon[] = {
+            0x00000001, 0x00000002, 0x00000004, 0x00000008, 0x00000010,
+            0x00000020, 0x00000040, 0x00000080, 0x0000001B, 0x00000036,
+        };
+
+        rk[0] = vs2[i * 4 + H4(0)];
+        rk[1] = vs2[i * 4 + H4(1)];
+        rk[2] = vs2[i * 4 + H4(2)];
+        rk[3] = vs2[i * 4 + H4(3)];
+        tmp = ror32(rk[3], 8);
+
+        rk[4] = rk[0] ^ (((uint32_t)AES_sbox[(tmp >> 24) & 0xff] << 24) |
+                         ((uint32_t)AES_sbox[(tmp >> 16) & 0xff] << 16) |
+                         ((uint32_t)AES_sbox[(tmp >> 8) & 0xff] << 8) |
+                         ((uint32_t)AES_sbox[(tmp >> 0) & 0xff] << 0))
+                      ^ rcon[uimm - 1];
+        rk[5] = rk[1] ^ rk[4];
+        rk[6] = rk[2] ^ rk[5];
+        rk[7] = rk[3] ^ rk[6];
+
+        vd[i * 4 + H4(0)] = rk[4];
+        vd[i * 4 + H4(1)] = rk[5];
+        vd[i * 4 + H4(2)] = rk[6];
+        vd[i * 4 + H4(3)] = rk[7];
+    }
+    env->vstart = 0;
+    /* set tail elements to 1s */
+    vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4);
+}
+
+void HELPER(vaeskf2_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
+                        CPURISCVState *env, uint32_t desc)
+{
+    uint32_t *vd = vd_vptr;
+    uint32_t *vs2 = vs2_vptr;
+    uint32_t vl = env->vl;
+    uint32_t total_elems = vext_get_total_elems(env, desc, 4);
+    uint32_t vta = vext_vta(desc);
+
+    uimm &= 0b1111;
+    if (uimm > 14 || uimm < 2) {
+        uimm ^= 0b1000;
+    }
+
+    for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+        uint32_t rk[12], tmp;
+        static const uint32_t rcon[] = {
+            0x00000001, 0x00000002, 0x00000004, 0x00000008, 0x00000010,
+            0x00000020, 0x00000040, 0x00000080, 0x0000001B, 0x00000036,
+        };
+
+        rk[0] = vd[i * 4 + H4(0)];
+        rk[1] = vd[i * 4 + H4(1)];
+        rk[2] = vd[i * 4 + H4(2)];
+        rk[3] = vd[i * 4 + H4(3)];
+        rk[4] = vs2[i * 4 + H4(0)];
+        rk[5] = vs2[i * 4 + H4(1)];
+        rk[6] = vs2[i * 4 + H4(2)];
+        rk[7] = vs2[i * 4 + H4(3)];
+
+        if (uimm % 2 == 0) {
+            tmp = ror32(rk[7], 8);
+            rk[8] = rk[0] ^ (((uint32_t)AES_sbox[(tmp >> 24) & 0xff] << 24) |
+                             ((uint32_t)AES_sbox[(tmp >> 16) & 0xff] << 16) |
+                             ((uint32_t)AES_sbox[(tmp >> 8) & 0xff] << 8) |
+                             ((uint32_t)AES_sbox[(tmp >> 0) & 0xff] << 0))
+                          ^ rcon[(uimm - 1) / 2];
+        } else {
+            rk[8] = rk[0] ^ (((uint32_t)AES_sbox[(rk[7] >> 24) & 0xff] << 24) |
+                             ((uint32_t)AES_sbox[(rk[7] >> 16) & 0xff] << 16) |
+                             ((uint32_t)AES_sbox[(rk[7] >> 8) & 0xff] << 8) |
+                             ((uint32_t)AES_sbox[(rk[7] >> 0) & 0xff] << 0));
+        }
+        rk[9] = rk[1] ^ rk[8];
+        rk[10] = rk[2] ^ rk[9];
+        rk[11] = rk[3] ^ rk[10];
+
+        vd[i * 4 + H4(0)] = rk[8];
+        vd[i * 4 + H4(1)] = rk[9];
+        vd[i * 4 + H4(2)] = rk[10];
+        vd[i * 4 + H4(3)] = rk[11];
+    }
+    env->vstart = 0;
+    /* set tail elements to 1s */
+    vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4);
+}
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -XXX,XX +XXX,XX @@ static bool vwsll_vx_check(DisasContext *s, arg_rmrr *a)
 GEN_OPIVV_WIDEN_TRANS(vwsll_vv, vwsll_vv_check)
 GEN_OPIVX_WIDEN_TRANS(vwsll_vx, vwsll_vx_check)
 GEN_OPIVI_WIDEN_TRANS(vwsll_vi, IMM_ZX, vwsll_vx, vwsll_vx_check)
+
+/*
+ * Zvkned
+ */
+
+#define ZVKNED_EGS 4
+
+#define GEN_V_UNMASKED_TRANS(NAME, CHECK, EGS)                                \
+    static bool trans_##NAME(DisasContext *s, arg_##NAME *a)                  \
+    {                                                                         \
+        if (CHECK(s, a)) {                                                    \
+            TCGv_ptr rd_v, rs2_v;                                             \
+            TCGv_i32 desc, egs;                                               \
+            uint32_t data = 0;                                                \
+            TCGLabel *over = gen_new_label();                                 \
+                                                                              \
+            if (!s->vstart_eq_zero || !s->vl_eq_vlmax) {                      \
+                /* save opcode for unwinding in case we throw an exception */ \
+                decode_save_opc(s);                                           \
+                egs = tcg_constant_i32(EGS);                                  \
+                gen_helper_egs_check(egs, cpu_env);                           \
+                tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);    \
+            }                                                                 \
+                                                                              \
+            data = FIELD_DP32(data, VDATA, VM, a->vm);                        \
+            data = FIELD_DP32(data, VDATA, LMUL, s->lmul);                    \
+            data = FIELD_DP32(data, VDATA, VTA, s->vta);                      \
+            data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);    \
+            data = FIELD_DP32(data, VDATA, VMA, s->vma);                      \
+            rd_v = tcg_temp_new_ptr();                                        \
+            rs2_v = tcg_temp_new_ptr();                                       \
+            desc = tcg_constant_i32(                                          \
+                simd_desc(s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8, data)); \
+            tcg_gen_addi_ptr(rd_v, cpu_env, vreg_ofs(s, a->rd));              \
+            tcg_gen_addi_ptr(rs2_v, cpu_env, vreg_ofs(s, a->rs2));            \
+            gen_helper_##NAME(rd_v, rs2_v, cpu_env, desc);                    \
+            mark_vs_dirty(s);                                                 \
+            gen_set_label(over);                                              \
+            return true;                                                      \
+        }                                                                     \
+        return false;                                                         \
+    }
+
+static bool vaes_check_vv(DisasContext *s, arg_rmr *a)
+{
+    int egw_bytes = ZVKNED_EGS << s->sew;
+    return s->cfg_ptr->ext_zvkned == true &&
+           require_rvv(s) &&
+           vext_check_isa_ill(s) &&
+           MAXSZ(s) >= egw_bytes &&
+           require_align(a->rd, s->lmul) &&
+           require_align(a->rs2, s->lmul) &&
+           s->sew == MO_32;
+}
+
+static bool vaes_check_overlap(DisasContext *s, int vd, int vs2)
+{
+    int8_t op_size = s->lmul <= 0 ? 1 : 1 << s->lmul;
+    return !is_overlapped(vd, op_size, vs2, 1);
+}
+
+static bool vaes_check_vs(DisasContext *s, arg_rmr *a)
+{
+    int egw_bytes = ZVKNED_EGS << s->sew;
+    return vaes_check_overlap(s, a->rd, a->rs2) &&
+           MAXSZ(s) >= egw_bytes &&
+           s->cfg_ptr->ext_zvkned == true &&
+           require_rvv(s) &&
+           vext_check_isa_ill(s) &&
+           require_align(a->rd, s->lmul) &&
+           s->sew == MO_32;
+}
+
+GEN_V_UNMASKED_TRANS(vaesef_vv, vaes_check_vv, ZVKNED_EGS)
+GEN_V_UNMASKED_TRANS(vaesef_vs, vaes_check_vs, ZVKNED_EGS)
+GEN_V_UNMASKED_TRANS(vaesdf_vv, vaes_check_vv, ZVKNED_EGS)
+GEN_V_UNMASKED_TRANS(vaesdf_vs, vaes_check_vs, ZVKNED_EGS)
+GEN_V_UNMASKED_TRANS(vaesdm_vv, vaes_check_vv, ZVKNED_EGS)
+GEN_V_UNMASKED_TRANS(vaesdm_vs, vaes_check_vs, ZVKNED_EGS)
+GEN_V_UNMASKED_TRANS(vaesz_vs, vaes_check_vs, ZVKNED_EGS)
+GEN_V_UNMASKED_TRANS(vaesem_vv, vaes_check_vv, ZVKNED_EGS)
+GEN_V_UNMASKED_TRANS(vaesem_vs, vaes_check_vs, ZVKNED_EGS)
+
+#define GEN_VI_UNMASKED_TRANS(NAME, CHECK, EGS)                               \
+    static bool trans_##NAME(DisasContext *s, arg_##NAME *a)                  \
+    {                                                                         \
+        if (CHECK(s, a)) {                                                    \
+            TCGv_ptr rd_v, rs2_v;                                             \
+            TCGv_i32 uimm_v, desc, egs;                                       \
+            uint32_t data = 0;                                                \
+            TCGLabel *over = gen_new_label();                                 \
+                                                                              \
+            if (!s->vstart_eq_zero || !s->vl_eq_vlmax) {                      \
+                /* save opcode for unwinding in case we throw an exception */ \
+                decode_save_opc(s);                                           \
+                egs = tcg_constant_i32(EGS);                                  \
+                gen_helper_egs_check(egs, cpu_env);                           \
+                tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);    \
+            }                                                                 \
+                                                                              \
+            data = FIELD_DP32(data, VDATA, VM, a->vm);                        \
+            data = FIELD_DP32(data, VDATA, LMUL, s->lmul);                    \
+            data = FIELD_DP32(data, VDATA, VTA, s->vta);                      \
+            data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);    \
+            data = FIELD_DP32(data, VDATA, VMA, s->vma);                      \
+                                                                              \
+            rd_v = tcg_temp_new_ptr();                                        \
+            rs2_v = tcg_temp_new_ptr();                                       \
+            uimm_v = tcg_constant_i32(a->rs1);                                \
+            desc = tcg_constant_i32(                                          \
+                simd_desc(s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8, data)); \
+            tcg_gen_addi_ptr(rd_v, cpu_env, vreg_ofs(s, a->rd));              \
+            tcg_gen_addi_ptr(rs2_v, cpu_env, vreg_ofs(s, a->rs2));            \
+            gen_helper_##NAME(rd_v, rs2_v, uimm_v, cpu_env, desc);            \
+            mark_vs_dirty(s);                                                 \
+            gen_set_label(over);                                              \
+            return true;                                                      \
+        }                                                                     \
+        return false;                                                         \
+    }
+
+static bool vaeskf1_check(DisasContext *s, arg_vaeskf1_vi *a)
+{
+    int egw_bytes = ZVKNED_EGS << s->sew;
+    return s->cfg_ptr->ext_zvkned == true &&
+           require_rvv(s) &&
+           vext_check_isa_ill(s) &&
+           MAXSZ(s) >= egw_bytes &&
+           s->sew == MO_32 &&
+           require_align(a->rd, s->lmul) &&
+           require_align(a->rs2, s->lmul);
+}
+
+static bool vaeskf2_check(DisasContext *s, arg_vaeskf2_vi *a)
+{
+    int egw_bytes = ZVKNED_EGS << s->sew;
+    return s->cfg_ptr->ext_zvkned == true &&
+           require_rvv(s) &&
+           vext_check_isa_ill(s) &&
+           MAXSZ(s) >= egw_bytes &&
+           s->sew == MO_32 &&
+           require_align(a->rd, s->lmul) &&
+           require_align(a->rs2, s->lmul);
+}
+
+GEN_VI_UNMASKED_TRANS(vaeskf1_vi, vaeskf1_check, ZVKNED_EGS)
+GEN_VI_UNMASKED_TRANS(vaeskf2_vi, vaeskf2_check, ZVKNED_EGS)
-- 
2.41.0
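For reference, the per-group computation that the vaeskf1.vi helper above performs is standard FIPS-197 AES-128 key expansion. Below is a minimal Python sketch of one key-schedule round, assuming the usual big-endian word convention (the QEMU helper expresses the same arithmetic with `ror32` on its in-register word layout); the function names are illustrative only.

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the AES reduction polynomial 0x11b."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
    return r

def sbox(x):
    """AES S-box: GF(2^8) inverse (x^254, with 0 -> 0) then the affine map."""
    inv = 1
    for _ in range(254):
        inv = gf_mul(inv, x)
    res = 0x63
    for i in range(5):  # b ^ rotl(b,1) ^ rotl(b,2) ^ rotl(b,3) ^ rotl(b,4)
        res ^= ((inv << i) | (inv >> (8 - i))) & 0xff
    return res

RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36]

def aes128_key_round(rk, rnd):
    """One AES-128 key-schedule step on four 32-bit words: the per-group
    work of vaeskf1.vi for round number `rnd` (1..10)."""
    t = ((rk[3] << 8) | (rk[3] >> 24)) & 0xffffffff              # RotWord
    t = int.from_bytes(bytes(sbox(b) for b in t.to_bytes(4, "big")),
                       "big")                                    # SubWord
    w4 = rk[0] ^ t ^ (RCON[rnd - 1] << 24)
    w5 = rk[1] ^ w4
    w6 = rk[2] ^ w5
    w7 = rk[3] ^ w6
    return [w4, w5, w6, w7]
```

With the FIPS-197 Appendix A key `2b7e1516 28aed2a6 abf71588 09cf4f3c`, round 1 produces `a0fafe17 88542cb1 23a33939 2a6c7605`, matching the published expansion. The vector instruction simply applies this step to each 128-bit element group, which is why the helper loops in strides of four 32-bit elements.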
1
From: Weiwei Li <liweiwei@iscas.ac.cn>
1
From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
2
2
3
- sinval.vma, hinval.vvma and hinval.gvma do the same as sfence.vma, hfence.vvma and hfence.gvma except extension check
3
This commit adds support for the Zvknh vector-crypto extension, which
4
- do nothing other than extension check for sfence.w.inval and sfence.inval.ir
4
consists of the following instructions:
5
5
6
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
6
* vsha2ms.vv
7
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
7
* vsha2c[hl].vv
8
Reviewed-by: Anup Patel <anup@brainfault.org>
8
9
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
9
Translation functions are defined in
10
Message-Id: <20220204022658.18097-5-liweiwei@iscas.ac.cn>
10
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
11
`target/riscv/vcrypto_helper.c`.
12
13
Co-authored-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
14
Co-authored-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
15
[max.chou@sifive.com: Replaced vstart checking by TCG op]
16
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
17
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
18
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
19
Signed-off-by: Max Chou <max.chou@sifive.com>
20
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
21
[max.chou@sifive.com: Exposed x-zvknha & x-zvknhb properties]
22
[max.chou@sifive.com: Replaced SEW selection to happened during
23
translation]
24
Message-ID: <20230711165917.2629866-11-max.chou@sifive.com>
11
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
25
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
12
---
26
---
13
target/riscv/cpu.h | 1 +
27
target/riscv/cpu_cfg.h | 2 +
14
target/riscv/insn32.decode | 7 ++
28
target/riscv/helper.h | 6 +
15
target/riscv/cpu.c | 1 +
29
target/riscv/insn32.decode | 5 +
16
target/riscv/translate.c | 1 +
30
target/riscv/cpu.c | 13 +-
17
target/riscv/insn_trans/trans_svinval.c.inc | 75 +++++++++++++++++++++
31
target/riscv/vcrypto_helper.c | 238 +++++++++++++++++++++++
18
5 files changed, 85 insertions(+)
32
target/riscv/insn_trans/trans_rvvk.c.inc | 129 ++++++++++++
19
create mode 100644 target/riscv/insn_trans/trans_svinval.c.inc
33
6 files changed, 390 insertions(+), 3 deletions(-)
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
     bool ext_zvbb;
     bool ext_zvbc;
     bool ext_zvkned;
+    bool ext_zvknha;
+    bool ext_zvknhb;
     bool ext_zmmul;
     bool ext_zvfbfmin;
     bool ext_zvfbfwma;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_4(vaesdm_vs, void, ptr, ptr, env, i32)
 DEF_HELPER_4(vaesz_vs, void, ptr, ptr, env, i32)
 DEF_HELPER_5(vaeskf1_vi, void, ptr, ptr, i32, env, i32)
 DEF_HELPER_5(vaeskf2_vi, void, ptr, ptr, i32, env, i32)
+
+DEF_HELPER_5(vsha2ms_vv, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vsha2ch32_vv, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vsha2ch64_vv, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vsha2cl32_vv, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vsha2cl64_vv, void, ptr, ptr, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -XXX,XX +XXX,XX @@ vaesdm_vs  101001 1 ..... 00000 010 ..... 1110111 @r2_vm_1
 vaesz_vs   101001 1 ..... 00111 010 ..... 1110111 @r2_vm_1
 vaeskf1_vi 100010 1 ..... ..... 010 ..... 1110111 @r_vm_1
 vaeskf2_vi 101010 1 ..... ..... 010 ..... 1110111 @r_vm_1
+
+# *** Zvknh vector crypto extension ***
+vsha2ms_vv 101101 1 ..... ..... 010 ..... 1110111 @r_vm_1
+vsha2ch_vv 101110 1 ..... ..... 010 ..... 1110111 @r_vm_1
+vsha2cl_vv 101111 1 ..... ..... 010 ..... 1110111 @r_vm_1
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
     ISA_EXT_DATA_ENTRY(zvfh, PRIV_VERSION_1_12_0, ext_zvfh),
     ISA_EXT_DATA_ENTRY(zvfhmin, PRIV_VERSION_1_12_0, ext_zvfhmin),
     ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
+    ISA_EXT_DATA_ENTRY(zvknha, PRIV_VERSION_1_12_0, ext_zvknha),
+    ISA_EXT_DATA_ENTRY(zvknhb, PRIV_VERSION_1_12_0, ext_zvknhb),
     ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
     ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
     ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
      * In principle Zve*x would also suffice here, were they supported
      * in qemu
      */
-    if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned) && !cpu->cfg.ext_zve32f) {
+    if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha) &&
+        !cpu->cfg.ext_zve32f) {
         error_setg(errp,
                    "Vector crypto extensions require V or Zve* extensions");
         return;
     }

-    if (cpu->cfg.ext_zvbc && !cpu->cfg.ext_zve64f) {
-        error_setg(errp, "Zvbc extension requires V or Zve64{f,d} extensions");
+    if ((cpu->cfg.ext_zvbc || cpu->cfg.ext_zvknhb) && !cpu->cfg.ext_zve64f) {
+        error_setg(
+            errp,
+            "Zvbc and Zvknhb extensions require V or Zve64{f,d} extensions");
         return;
     }

@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
     DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
     DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
     DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
+    DEFINE_PROP_BOOL("x-zvknha", RISCVCPU, cfg.ext_zvknha, false),
+    DEFINE_PROP_BOOL("x-zvknhb", RISCVCPU, cfg.ext_zvknhb, false),

     DEFINE_PROP_END_OF_LIST(),
 };
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/vcrypto_helper.c
+++ b/target/riscv/vcrypto_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(vaeskf2_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
     /* set tail elements to 1s */
     vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4);
 }
+
+static inline uint32_t sig0_sha256(uint32_t x)
+{
+    return ror32(x, 7) ^ ror32(x, 18) ^ (x >> 3);
+}
+
+static inline uint32_t sig1_sha256(uint32_t x)
+{
+    return ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10);
+}
+
+static inline uint64_t sig0_sha512(uint64_t x)
+{
+    return ror64(x, 1) ^ ror64(x, 8) ^ (x >> 7);
+}
+
+static inline uint64_t sig1_sha512(uint64_t x)
+{
+    return ror64(x, 19) ^ ror64(x, 61) ^ (x >> 6);
+}
+
+static inline void vsha2ms_e32(uint32_t *vd, uint32_t *vs1, uint32_t *vs2)
+{
+    uint32_t res[4];
+    res[0] = sig1_sha256(vs1[H4(2)]) + vs2[H4(1)] + sig0_sha256(vd[H4(1)]) +
+             vd[H4(0)];
+    res[1] = sig1_sha256(vs1[H4(3)]) + vs2[H4(2)] + sig0_sha256(vd[H4(2)]) +
+             vd[H4(1)];
+    res[2] =
+        sig1_sha256(res[0]) + vs2[H4(3)] + sig0_sha256(vd[H4(3)]) + vd[H4(2)];
+    res[3] =
+        sig1_sha256(res[1]) + vs1[H4(0)] + sig0_sha256(vs2[H4(0)]) + vd[H4(3)];
+    vd[H4(3)] = res[3];
+    vd[H4(2)] = res[2];
+    vd[H4(1)] = res[1];
+    vd[H4(0)] = res[0];
+}
+
+static inline void vsha2ms_e64(uint64_t *vd, uint64_t *vs1, uint64_t *vs2)
+{
+    uint64_t res[4];
+    res[0] = sig1_sha512(vs1[2]) + vs2[1] + sig0_sha512(vd[1]) + vd[0];
+    res[1] = sig1_sha512(vs1[3]) + vs2[2] + sig0_sha512(vd[2]) + vd[1];
+    res[2] = sig1_sha512(res[0]) + vs2[3] + sig0_sha512(vd[3]) + vd[2];
+    res[3] = sig1_sha512(res[1]) + vs1[0] + sig0_sha512(vs2[0]) + vd[3];
+    vd[3] = res[3];
+    vd[2] = res[2];
+    vd[1] = res[1];
+    vd[0] = res[0];
+}
+
+void HELPER(vsha2ms_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
+                        uint32_t desc)
+{
+    uint32_t sew = FIELD_EX64(env->vtype, VTYPE, VSEW);
+    uint32_t esz = sew == MO_32 ? 4 : 8;
+    uint32_t total_elems;
+    uint32_t vta = vext_vta(desc);
+
+    for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+        if (sew == MO_32) {
+            vsha2ms_e32(((uint32_t *)vd) + i * 4, ((uint32_t *)vs1) + i * 4,
+                        ((uint32_t *)vs2) + i * 4);
+        } else {
+            /* If not 32 then SEW should be 64 */
+            vsha2ms_e64(((uint64_t *)vd) + i * 4, ((uint64_t *)vs1) + i * 4,
+                        ((uint64_t *)vs2) + i * 4);
+        }
+    }
+    /* set tail elements to 1s */
+    total_elems = vext_get_total_elems(env, desc, esz);
+    vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
+    env->vstart = 0;
+}
+
+static inline uint64_t sum0_64(uint64_t x)
+{
+    return ror64(x, 28) ^ ror64(x, 34) ^ ror64(x, 39);
+}
+
+static inline uint32_t sum0_32(uint32_t x)
+{
+    return ror32(x, 2) ^ ror32(x, 13) ^ ror32(x, 22);
+}
+
+static inline uint64_t sum1_64(uint64_t x)
+{
+    return ror64(x, 14) ^ ror64(x, 18) ^ ror64(x, 41);
+}
+
+static inline uint32_t sum1_32(uint32_t x)
+{
+    return ror32(x, 6) ^ ror32(x, 11) ^ ror32(x, 25);
+}
+
+#define ch(x, y, z) ((x & y) ^ ((~x) & z))
+
+#define maj(x, y, z) ((x & y) ^ (x & z) ^ (y & z))
+
+static void vsha2c_64(uint64_t *vs2, uint64_t *vd, uint64_t *vs1)
+{
+    uint64_t a = vs2[3], b = vs2[2], e = vs2[1], f = vs2[0];
+    uint64_t c = vd[3], d = vd[2], g = vd[1], h = vd[0];
+    uint64_t W0 = vs1[0], W1 = vs1[1];
+    uint64_t T1 = h + sum1_64(e) + ch(e, f, g) + W0;
+    uint64_t T2 = sum0_64(a) + maj(a, b, c);
+
+    h = g;
+    g = f;
+    f = e;
+    e = d + T1;
+    d = c;
+    c = b;
+    b = a;
+    a = T1 + T2;
+
+    T1 = h + sum1_64(e) + ch(e, f, g) + W1;
+    T2 = sum0_64(a) + maj(a, b, c);
+    h = g;
+    g = f;
+    f = e;
+    e = d + T1;
+    d = c;
+    c = b;
+    b = a;
+    a = T1 + T2;
+
+    vd[0] = f;
+    vd[1] = e;
+    vd[2] = b;
+    vd[3] = a;
+}
+
+static void vsha2c_32(uint32_t *vs2, uint32_t *vd, uint32_t *vs1)
+{
+    uint32_t a = vs2[H4(3)], b = vs2[H4(2)], e = vs2[H4(1)], f = vs2[H4(0)];
+    uint32_t c = vd[H4(3)], d = vd[H4(2)], g = vd[H4(1)], h = vd[H4(0)];
+    uint32_t W0 = vs1[H4(0)], W1 = vs1[H4(1)];
+    uint32_t T1 = h + sum1_32(e) + ch(e, f, g) + W0;
+    uint32_t T2 = sum0_32(a) + maj(a, b, c);
+
+    h = g;
+    g = f;
+    f = e;
+    e = d + T1;
+    d = c;
+    c = b;
+    b = a;
+    a = T1 + T2;
+
+    T1 = h + sum1_32(e) + ch(e, f, g) + W1;
+    T2 = sum0_32(a) + maj(a, b, c);
+    h = g;
+    g = f;
+    f = e;
+    e = d + T1;
+    d = c;
+    c = b;
+    b = a;
+    a = T1 + T2;
+
+    vd[H4(0)] = f;
+    vd[H4(1)] = e;
+    vd[H4(2)] = b;
+    vd[H4(3)] = a;
+}
+
+void HELPER(vsha2ch32_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
+                          uint32_t desc)
+{
+    const uint32_t esz = 4;
+    uint32_t total_elems;
+    uint32_t vta = vext_vta(desc);
+
+    for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+        vsha2c_32(((uint32_t *)vs2) + 4 * i, ((uint32_t *)vd) + 4 * i,
+                  ((uint32_t *)vs1) + 4 * i + 2);
+    }
+
+    /* set tail elements to 1s */
+    total_elems = vext_get_total_elems(env, desc, esz);
+    vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
+    env->vstart = 0;
+}
+
+void HELPER(vsha2ch64_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
+                          uint32_t desc)
+{
+    const uint32_t esz = 8;
+    uint32_t total_elems;
+    uint32_t vta = vext_vta(desc);
+
+    for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+        vsha2c_64(((uint64_t *)vs2) + 4 * i, ((uint64_t *)vd) + 4 * i,
+                  ((uint64_t *)vs1) + 4 * i + 2);
+    }
+
+    /* set tail elements to 1s */
+    total_elems = vext_get_total_elems(env, desc, esz);
+    vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
+    env->vstart = 0;
+}
+
+void HELPER(vsha2cl32_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
+                          uint32_t desc)
+{
+    const uint32_t esz = 4;
+    uint32_t total_elems;
+    uint32_t vta = vext_vta(desc);
+
+    for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+        vsha2c_32(((uint32_t *)vs2) + 4 * i, ((uint32_t *)vd) + 4 * i,
+                  (((uint32_t *)vs1) + 4 * i));
+    }
+
+    /* set tail elements to 1s */
+    total_elems = vext_get_total_elems(env, desc, esz);
+    vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
+    env->vstart = 0;
+}
+
+void HELPER(vsha2cl64_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
+                          uint32_t desc)
+{
+    uint32_t esz = 8;
+    uint32_t total_elems;
+    uint32_t vta = vext_vta(desc);
+
+    for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+        vsha2c_64(((uint64_t *)vs2) + 4 * i, ((uint64_t *)vd) + 4 * i,
+                  (((uint64_t *)vs1) + 4 * i));
+    }
+
+    /* set tail elements to 1s */
+    total_elems = vext_get_total_elems(env, desc, esz);
+    vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
+    env->vstart = 0;
+}
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -XXX,XX +XXX,XX @@ static bool vaeskf2_check(DisasContext *s, arg_vaeskf2_vi *a)

 GEN_VI_UNMASKED_TRANS(vaeskf1_vi, vaeskf1_check, ZVKNED_EGS)
 GEN_VI_UNMASKED_TRANS(vaeskf2_vi, vaeskf2_check, ZVKNED_EGS)
+
+/*
+ * Zvknh
+ */
+
+#define ZVKNH_EGS 4
+
+#define GEN_VV_UNMASKED_TRANS(NAME, CHECK, EGS)                               \
+    static bool trans_##NAME(DisasContext *s, arg_rmrr *a)                    \
+    {                                                                         \
+        if (CHECK(s, a)) {                                                    \
+            uint32_t data = 0;                                                \
+            TCGLabel *over = gen_new_label();                                 \
+            TCGv_i32 egs;                                                     \
+                                                                              \
+            if (!s->vstart_eq_zero || !s->vl_eq_vlmax) {                      \
+                /* save opcode for unwinding in case we throw an exception */ \
+                decode_save_opc(s);                                           \
+                egs = tcg_constant_i32(EGS);                                  \
+                gen_helper_egs_check(egs, cpu_env);                           \
+                tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);    \
+            }                                                                 \
+                                                                              \
+            data = FIELD_DP32(data, VDATA, VM, a->vm);                        \
+            data = FIELD_DP32(data, VDATA, LMUL, s->lmul);                    \
+            data = FIELD_DP32(data, VDATA, VTA, s->vta);                      \
+            data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);    \
+            data = FIELD_DP32(data, VDATA, VMA, s->vma);                      \
+                                                                              \
+            tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),       \
+                               vreg_ofs(s, a->rs2), cpu_env,                  \
+                               s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8,    \
+                               data, gen_helper_##NAME);                      \
+                                                                              \
+            mark_vs_dirty(s);                                                 \
+            gen_set_label(over);                                              \
+            return true;                                                      \
+        }                                                                     \
+        return false;                                                         \
+    }
+
+static bool vsha_check_sew(DisasContext *s)
+{
+    return (s->cfg_ptr->ext_zvknha == true && s->sew == MO_32) ||
+           (s->cfg_ptr->ext_zvknhb == true &&
+            (s->sew == MO_32 || s->sew == MO_64));
+}
+
+static bool vsha_check(DisasContext *s, arg_rmrr *a)
+{
+    int egw_bytes = ZVKNH_EGS << s->sew;
+    int mult = 1 << MAX(s->lmul, 0);
+    return opivv_check(s, a) &&
+           vsha_check_sew(s) &&
+           MAXSZ(s) >= egw_bytes &&
+           !is_overlapped(a->rd, mult, a->rs1, mult) &&
+           !is_overlapped(a->rd, mult, a->rs2, mult) &&
+           s->lmul >= 0;
+}
+
+GEN_VV_UNMASKED_TRANS(vsha2ms_vv, vsha_check, ZVKNH_EGS)
+
+static bool trans_vsha2cl_vv(DisasContext *s, arg_rmrr *a)
+{
+    if (vsha_check(s, a)) {
+        uint32_t data = 0;
+        TCGLabel *over = gen_new_label();
+        TCGv_i32 egs;
+
+        if (!s->vstart_eq_zero || !s->vl_eq_vlmax) {
+            /* save opcode for unwinding in case we throw an exception */
+            decode_save_opc(s);
+            egs = tcg_constant_i32(ZVKNH_EGS);
+            gen_helper_egs_check(egs, cpu_env);
+            tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
+        }
+
+        data = FIELD_DP32(data, VDATA, VM, a->vm);
+        data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
+        data = FIELD_DP32(data, VDATA, VTA, s->vta);
+        data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);
+        data = FIELD_DP32(data, VDATA, VMA, s->vma);
+
+        tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),
+                           vreg_ofs(s, a->rs2), cpu_env, s->cfg_ptr->vlen / 8,
+                           s->cfg_ptr->vlen / 8, data,
+                           s->sew == MO_32 ?
+                               gen_helper_vsha2cl32_vv : gen_helper_vsha2cl64_vv);
+
+        mark_vs_dirty(s);
+        gen_set_label(over);
+        return true;
+    }
+    return false;
+}
+
+static bool trans_vsha2ch_vv(DisasContext *s, arg_rmrr *a)
+{
+    if (vsha_check(s, a)) {
+        uint32_t data = 0;
+        TCGLabel *over = gen_new_label();
+        TCGv_i32 egs;
+
+        if (!s->vstart_eq_zero || !s->vl_eq_vlmax) {
+            /* save opcode for unwinding in case we throw an exception */
+            decode_save_opc(s);
+            egs = tcg_constant_i32(ZVKNH_EGS);
+            gen_helper_egs_check(egs, cpu_env);
+            tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
+        }
+
+        data = FIELD_DP32(data, VDATA, VM, a->vm);
+        data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
+        data = FIELD_DP32(data, VDATA, VTA, s->vta);
+        data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);
+        data = FIELD_DP32(data, VDATA, VMA, s->vma);
+
+        tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),
+                           vreg_ofs(s, a->rs2), cpu_env, s->cfg_ptr->vlen / 8,
+                           s->cfg_ptr->vlen / 8, data,
+                           s->sew == MO_32 ?
+                               gen_helper_vsha2ch32_vv : gen_helper_vsha2ch64_vv);
+
+        mark_vs_dirty(s);
+        gen_set_label(over);
+        return true;
+    }
+    return false;
+}
--
2.41.0
1
From: Weiwei Li <liweiwei@iscas.ac.cn>
1
From: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
2
2
3
- add PTE_N bit
3
This commit adds support for the Zvksh vector-crypto extension, which
4
- add PTE_N bit check for inner PTE
4
consists of the following instructions:
5
- update address translation to support 64KiB continuous region (napot_bits = 4)
5
6
6
* vsm3me.vv
7
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
7
* vsm3c.vi
8
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
8
9
Reviewed-by: Anup Patel <anup@brainfault.org>
9
Translation functions are defined in
10
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
10
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
11
Message-Id: <20220204022658.18097-4-liweiwei@iscas.ac.cn>
11
`target/riscv/vcrypto_helper.c`.
12
13
Co-authored-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
14
[max.chou@sifive.com: Replaced vstart checking by TCG op]
15
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
16
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
17
Signed-off-by: Max Chou <max.chou@sifive.com>
18
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
19
[max.chou@sifive.com: Exposed x-zvksh property]
20
Message-ID: <20230711165917.2629866-12-max.chou@sifive.com>
12
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
21
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
13
---
22
---
14
target/riscv/cpu_bits.h | 1 +
23
target/riscv/cpu_cfg.h | 1 +
15
target/riscv/cpu.c | 2 ++
24
target/riscv/helper.h | 3 +
16
target/riscv/cpu_helper.c | 18 +++++++++++++++---
25
target/riscv/insn32.decode | 4 +
17
3 files changed, 18 insertions(+), 3 deletions(-)
26
target/riscv/cpu.c | 6 +-
18
27
target/riscv/vcrypto_helper.c | 134 +++++++++++++++++++++++
19
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
28
target/riscv/insn_trans/trans_rvvk.c.inc | 31 ++++++
20
index XXXXXXX..XXXXXXX 100644
29
6 files changed, 177 insertions(+), 2 deletions(-)
21
--- a/target/riscv/cpu_bits.h
30
22
+++ b/target/riscv/cpu_bits.h
31
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
23
@@ -XXX,XX +XXX,XX @@ typedef enum {
32
index XXXXXXX..XXXXXXX 100644
24
#define PTE_A 0x040 /* Accessed */
33
--- a/target/riscv/cpu_cfg.h
25
#define PTE_D 0x080 /* Dirty */
34
+++ b/target/riscv/cpu_cfg.h
26
#define PTE_SOFT 0x300 /* Reserved for Software */
35
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
27
+#define PTE_N 0x8000000000000000ULL /* NAPOT translation */
36
bool ext_zvkned;
28
37
bool ext_zvknha;
29
/* Page table PPN shift amount */
38
bool ext_zvknhb;
30
#define PTE_PPN_SHIFT 10
39
+ bool ext_zvksh;
40
bool ext_zmmul;
41
bool ext_zvfbfmin;
42
bool ext_zvfbfwma;
43
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/riscv/helper.h
46
+++ b/target/riscv/helper.h
47
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_5(vsha2ch32_vv, void, ptr, ptr, ptr, env, i32)
48
DEF_HELPER_5(vsha2ch64_vv, void, ptr, ptr, ptr, env, i32)
49
DEF_HELPER_5(vsha2cl32_vv, void, ptr, ptr, ptr, env, i32)
50
DEF_HELPER_5(vsha2cl64_vv, void, ptr, ptr, ptr, env, i32)
51
+
52
+DEF_HELPER_5(vsm3me_vv, void, ptr, ptr, ptr, env, i32)
53
+DEF_HELPER_5(vsm3c_vi, void, ptr, ptr, i32, env, i32)
54
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
55
index XXXXXXX..XXXXXXX 100644
56
--- a/target/riscv/insn32.decode
57
+++ b/target/riscv/insn32.decode
58
@@ -XXX,XX +XXX,XX @@ vaeskf2_vi 101010 1 ..... ..... 010 ..... 1110111 @r_vm_1
59
vsha2ms_vv 101101 1 ..... ..... 010 ..... 1110111 @r_vm_1
60
vsha2ch_vv 101110 1 ..... ..... 010 ..... 1110111 @r_vm_1
61
vsha2cl_vv 101111 1 ..... ..... 010 ..... 1110111 @r_vm_1
62
+
63
+# *** Zvksh vector crypto extension ***
64
+vsm3me_vv 100000 1 ..... ..... 010 ..... 1110111 @r_vm_1
65
+vsm3c_vi 101011 1 ..... ..... 010 ..... 1110111 @r_vm_1
31
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
66
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
32
index XXXXXXX..XXXXXXX 100644
67
index XXXXXXX..XXXXXXX 100644
33
--- a/target/riscv/cpu.c
68
--- a/target/riscv/cpu.c
34
+++ b/target/riscv/cpu.c
69
+++ b/target/riscv/cpu.c
35
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_properties[] = {
70
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
36
DEFINE_PROP_UINT16("vlen", RISCVCPU, cfg.vlen, 128),
71
ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
37
DEFINE_PROP_UINT16("elen", RISCVCPU, cfg.elen, 64),
72
ISA_EXT_DATA_ENTRY(zvknha, PRIV_VERSION_1_12_0, ext_zvknha),
38
73
ISA_EXT_DATA_ENTRY(zvknhb, PRIV_VERSION_1_12_0, ext_zvknhb),
39
+ DEFINE_PROP_BOOL("svnapot", RISCVCPU, cfg.ext_svnapot, false),
74
+ ISA_EXT_DATA_ENTRY(zvksh, PRIV_VERSION_1_12_0, ext_zvksh),
40
+
75
ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
41
DEFINE_PROP_BOOL("zba", RISCVCPU, cfg.ext_zba, true),
76
ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
42
DEFINE_PROP_BOOL("zbb", RISCVCPU, cfg.ext_zbb, true),
77
ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
43
DEFINE_PROP_BOOL("zbc", RISCVCPU, cfg.ext_zbc, true),
78
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
44
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
79
* In principle Zve*x would also suffice here, were they supported
45
index XXXXXXX..XXXXXXX 100644
80
* in qemu
46
--- a/target/riscv/cpu_helper.c
81
*/
47
+++ b/target/riscv/cpu_helper.c
82
- if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha) &&
48
@@ -XXX,XX +XXX,XX @@ static int get_physical_address(CPURISCVState *env, hwaddr *physical,
83
- !cpu->cfg.ext_zve32f) {
49
bool use_background = false;
84
+ if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha ||
50
hwaddr ppn;
85
+ cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
51
RISCVCPU *cpu = env_archcpu(env);
86
error_setg(errp,
52
+ int napot_bits = 0;
87
"Vector crypto extensions require V or Zve* extensions");
53
+ target_ulong napot_mask;
88
return;
54
89
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
55
/*
90
DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
56
* Check if we should use the background registers for the two
91
DEFINE_PROP_BOOL("x-zvknha", RISCVCPU, cfg.ext_zvknha, false),
57
@@ -XXX,XX +XXX,XX @@ restart:
92
DEFINE_PROP_BOOL("x-zvknhb", RISCVCPU, cfg.ext_zvknhb, false),
58
return TRANSLATE_FAIL;
93
+ DEFINE_PROP_BOOL("x-zvksh", RISCVCPU, cfg.ext_zvksh, false),
59
} else if (!(pte & (PTE_R | PTE_W | PTE_X))) {
94
60
/* Inner PTE, continue walking */
95
DEFINE_PROP_END_OF_LIST(),
61
- if (pte & (PTE_D | PTE_A | PTE_U)) {
96
};
62
+ if (pte & (PTE_D | PTE_A | PTE_U | PTE_N)) {
97
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
63
return TRANSLATE_FAIL;
98
index XXXXXXX..XXXXXXX 100644
64
}
99
--- a/target/riscv/vcrypto_helper.c
65
base = ppn << PGSHIFT;
100
+++ b/target/riscv/vcrypto_helper.c
66
@@ -XXX,XX +XXX,XX @@ restart:
101
@@ -XXX,XX +XXX,XX @@ void HELPER(vsha2cl64_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
67
/* for superpage mappings, make a fake leaf PTE for the TLB's
102
vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
68
benefit. */
103
env->vstart = 0;
69
target_ulong vpn = addr >> PGSHIFT;
104
}
70
- *physical = ((ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT) |
105
+
71
- (addr & ~TARGET_PAGE_MASK);
106
+static inline uint32_t p1(uint32_t x)
72
+
107
+{
73
+ if (cpu->cfg.ext_svnapot && (pte & PTE_N)) {
108
+ return x ^ rol32(x, 15) ^ rol32(x, 23);
74
+ napot_bits = ctzl(ppn) + 1;
109
+}
75
+ if ((i != (levels - 1)) || (napot_bits != 4)) {
110
+
76
+ return TRANSLATE_FAIL;
111
+static inline uint32_t zvksh_w(uint32_t m16, uint32_t m9, uint32_t m3,
77
+ }
112
+ uint32_t m13, uint32_t m6)
78
+ }
113
+{
79
+
114
+ return p1(m16 ^ m9 ^ rol32(m3, 15)) ^ rol32(m13, 7) ^ m6;
80
+ napot_mask = (1 << napot_bits) - 1;
115
+}
81
+ *physical = (((ppn & ~napot_mask) | (vpn & napot_mask) |
116
+
82
+ (vpn & (((target_ulong)1 << ptshift) - 1))
117
+void HELPER(vsm3me_vv)(void *vd_vptr, void *vs1_vptr, void *vs2_vptr,
83
+ ) << PGSHIFT) | (addr & ~TARGET_PAGE_MASK);
118
+ CPURISCVState *env, uint32_t desc)
84
119
+{
85
/* set permissions on the TLB entry */
120
+ uint32_t esz = memop_size(FIELD_EX64(env->vtype, VTYPE, VSEW));
86
if ((pte & PTE_R) || ((pte & PTE_X) && mxr)) {
121
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
122
+ uint32_t vta = vext_vta(desc);
123
+ uint32_t *vd = vd_vptr;
124
+ uint32_t *vs1 = vs1_vptr;
125
+ uint32_t *vs2 = vs2_vptr;
126
+
127
+ for (int i = env->vstart / 8; i < env->vl / 8; i++) {
128
+ uint32_t w[24];
129
+ for (int j = 0; j < 8; j++) {
130
+ w[j] = bswap32(vs1[H4((i * 8) + j)]);
131
+ w[j + 8] = bswap32(vs2[H4((i * 8) + j)]);
132
+ }
133
+ for (int j = 0; j < 8; j++) {
134
+ w[j + 16] =
135
+ zvksh_w(w[j], w[j + 7], w[j + 13], w[j + 3], w[j + 10]);
136
+ }
137
+ for (int j = 0; j < 8; j++) {
138
+ vd[(i * 8) + j] = bswap32(w[H4(j + 16)]);
139
+ }
140
+ }
141
+ vext_set_elems_1s(vd_vptr, vta, env->vl * esz, total_elems * esz);
142
+ env->vstart = 0;
143
+}
144
+
145
+static inline uint32_t ff1(uint32_t x, uint32_t y, uint32_t z)
146
+{
147
+ return x ^ y ^ z;
148
+}
149
+
150
+static inline uint32_t ff2(uint32_t x, uint32_t y, uint32_t z)
151
+{
152
+ return (x & y) | (x & z) | (y & z);
153
+}
154
+
155
+static inline uint32_t ff_j(uint32_t x, uint32_t y, uint32_t z, uint32_t j)
156
+{
157
+ return (j <= 15) ? ff1(x, y, z) : ff2(x, y, z);
158
+}
159
+
160
+static inline uint32_t gg1(uint32_t x, uint32_t y, uint32_t z)
161
+{
162
+ return x ^ y ^ z;
163
+}
164
+
165
+static inline uint32_t gg2(uint32_t x, uint32_t y, uint32_t z)
166
+{
167
+ return (x & y) | (~x & z);
+}
+
+static inline uint32_t gg_j(uint32_t x, uint32_t y, uint32_t z, uint32_t j)
+{
+ return (j <= 15) ? gg1(x, y, z) : gg2(x, y, z);
+}
+
+static inline uint32_t t_j(uint32_t j)
+{
+ return (j <= 15) ? 0x79cc4519 : 0x7a879d8a;
+}
+
+static inline uint32_t p_0(uint32_t x)
+{
+ return x ^ rol32(x, 9) ^ rol32(x, 17);
+}
+
+static void sm3c(uint32_t *vd, uint32_t *vs1, uint32_t *vs2, uint32_t uimm)
+{
+ uint32_t x0, x1;
+ uint32_t j;
+ uint32_t ss1, ss2, tt1, tt2;
+ x0 = vs2[0] ^ vs2[4];
+ x1 = vs2[1] ^ vs2[5];
+ j = 2 * uimm;
+ ss1 = rol32(rol32(vs1[0], 12) + vs1[4] + rol32(t_j(j), j % 32), 7);
+ ss2 = ss1 ^ rol32(vs1[0], 12);
+ tt1 = ff_j(vs1[0], vs1[1], vs1[2], j) + vs1[3] + ss2 + x0;
+ tt2 = gg_j(vs1[4], vs1[5], vs1[6], j) + vs1[7] + ss1 + vs2[0];
+ vs1[3] = vs1[2];
+ vd[3] = rol32(vs1[1], 9);
+ vs1[1] = vs1[0];
+ vd[1] = tt1;
+ vs1[7] = vs1[6];
+ vd[7] = rol32(vs1[5], 19);
+ vs1[5] = vs1[4];
+ vd[5] = p_0(tt2);
+ j = 2 * uimm + 1;
+ ss1 = rol32(rol32(vd[1], 12) + vd[5] + rol32(t_j(j), j % 32), 7);
+ ss2 = ss1 ^ rol32(vd[1], 12);
+ tt1 = ff_j(vd[1], vs1[1], vd[3], j) + vs1[3] + ss2 + x1;
+ tt2 = gg_j(vd[5], vs1[5], vd[7], j) + vs1[7] + ss1 + vs2[1];
+ vd[2] = rol32(vs1[1], 9);
+ vd[0] = tt1;
+ vd[6] = rol32(vs1[5], 19);
+ vd[4] = p_0(tt2);
+}
+
+void HELPER(vsm3c_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
+ CPURISCVState *env, uint32_t desc)
+{
+ uint32_t esz = memop_size(FIELD_EX64(env->vtype, VTYPE, VSEW));
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
+ uint32_t vta = vext_vta(desc);
+ uint32_t *vd = vd_vptr;
+ uint32_t *vs2 = vs2_vptr;
+ uint32_t v1[8], v2[8], v3[8];
+
+ for (int i = env->vstart / 8; i < env->vl / 8; i++) {
+ for (int k = 0; k < 8; k++) {
+ v2[k] = bswap32(vd[H4(i * 8 + k)]);
+ v3[k] = bswap32(vs2[H4(i * 8 + k)]);
+ }
+ sm3c(v1, v2, v3, uimm);
+ for (int k = 0; k < 8; k++) {
+ vd[i * 8 + k] = bswap32(v1[H4(k)]);
+ }
+ }
+ vext_set_elems_1s(vd_vptr, vta, env->vl * esz, total_elems * esz);
+ env->vstart = 0;
+}
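For readers unfamiliar with SM3, the boolean functions and round constant selected by `ff_j`, `gg_j`, `t_j` and the `p_0` permutation above can be exercised stand-alone. This is an illustrative sketch only, not part of the patch: `rol32` is open-coded here, whereas the helper above uses QEMU's `rol32` from `qemu/bitops.h`.

```c
#include <assert.h>
#include <stdint.h>

/* Local rotate-left; the helper above uses rol32() from qemu/bitops.h. */
static inline uint32_t rol32(uint32_t x, unsigned n)
{
    return (x << (n & 31)) | (x >> ((32 - n) & 31));
}

/* FF_j (and GG_j) for rounds 0..15 is plain XOR of the three words. */
static inline uint32_t ff1(uint32_t x, uint32_t y, uint32_t z)
{
    return x ^ y ^ z;
}

/* FF_j for rounds 16..63 is the majority function. */
static inline uint32_t ff2(uint32_t x, uint32_t y, uint32_t z)
{
    return (x & y) | (x & z) | (y & z);
}

/* Round constant T_j switches after round 15, exactly as in t_j() above. */
static inline uint32_t t_j(uint32_t j)
{
    return (j <= 15) ? 0x79cc4519 : 0x7a879d8a;
}

/* Permutation P0 from the SM3 spec, identical to p_0() above. */
static inline uint32_t p_0(uint32_t x)
{
    return x ^ rol32(x, 9) ^ rol32(x, 17);
}
```

The switch at round 16 is why the helper passes `j = 2 * uimm` and `j = 2 * uimm + 1` through `ff_j`/`gg_j`/`t_j` rather than hard-coding one variant.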
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_vsha2ch_vv(DisasContext *s, arg_rmrr *a)
}
return false;
}
+
+/*
+ * Zvksh
+ */
+
+#define ZVKSH_EGS 8
+
+static inline bool vsm3_check(DisasContext *s, arg_rmrr *a)
+{
+ int egw_bytes = ZVKSH_EGS << s->sew;
+ int mult = 1 << MAX(s->lmul, 0);
+ return s->cfg_ptr->ext_zvksh == true &&
+ require_rvv(s) &&
+ vext_check_isa_ill(s) &&
+ !is_overlapped(a->rd, mult, a->rs2, mult) &&
+ MAXSZ(s) >= egw_bytes &&
+ s->sew == MO_32;
+}
+
+static inline bool vsm3me_check(DisasContext *s, arg_rmrr *a)
+{
+ return vsm3_check(s, a) && vext_check_sss(s, a->rd, a->rs1, a->rs2, a->vm);
+}
+
+static inline bool vsm3c_check(DisasContext *s, arg_rmrr *a)
+{
+ return vsm3_check(s, a) && vext_check_ss(s, a->rd, a->rs2, a->vm);
+}
+
+GEN_VV_UNMASKED_TRANS(vsm3me_vv, vsm3me_check, ZVKSH_EGS)
+GEN_VI_UNMASKED_TRANS(vsm3c_vi, vsm3c_check, ZVKSH_EGS)
--
2.41.0
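As a concrete check of the `egw_bytes` arithmetic in `vsm3_check` above: with `ZVKSH_EGS` = 8 and SEW = 32 bits (`MO_32`, i.e. a byte-size shift of 2), one element group spans 32 bytes. The sketch below mimics that check with a plain helper; `MAXSZ` itself is QEMU-internal, so the register-group size is passed in explicitly here.

```c
#include <assert.h>

/* One Zvksh element group is EGS elements of (1 << sew) bytes each;
 * vsm3_check() rejects configurations whose vector register group is
 * smaller than one element group. */
enum { ZVKSH_EGS = 8, MO_32 = 2 };

static int egw_bytes(int egs, int sew)
{
    return egs << sew;
}

/* Mirrors the MAXSZ(s) >= egw_bytes comparison, with the register-group
 * size in bytes supplied by the caller. */
static int group_fits(int vreg_group_bytes, int egs, int sew)
{
    return vreg_group_bytes >= egw_bytes(egs, sew);
}
```

So a VLEN=128 configuration (16 bytes per register) only passes the check once LMUL makes the register group at least 32 bytes.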
From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>

This commit adds support for the Zvkg vector-crypto extension, which
consists of the following instructions:

* vgmul.vv
* vghsh.vv

Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.

Co-authored-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
[max.chou@sifive.com: Replaced vstart checking by TCG op]
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
[max.chou@sifive.com: Exposed x-zvkg property]
[max.chou@sifive.com: Replaced uint by int for cross win32 build]
Message-ID: <20230711165917.2629866-13-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/cpu_cfg.h | 1 +
target/riscv/helper.h | 3 +
target/riscv/insn32.decode | 4 ++
target/riscv/cpu.c | 6 +-
target/riscv/vcrypto_helper.c | 72 ++++++++++++++++++++++++
target/riscv/insn_trans/trans_rvvk.c.inc | 30 ++++++++++
6 files changed, 114 insertions(+), 2 deletions(-)
22
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
32
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
23
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
24
--- a/target/riscv/cpu.h
34
--- a/target/riscv/cpu_cfg.h
25
+++ b/target/riscv/cpu.h
35
+++ b/target/riscv/cpu_cfg.h
26
@@ -XXX,XX +XXX,XX @@ struct CPURISCVState {
36
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
27
target_ulong priv;
37
bool ext_zve64d;
28
/* This contains QEMU specific information about the virt state. */
38
bool ext_zvbb;
29
target_ulong virt;
39
bool ext_zvbc;
30
+ target_ulong geilen;
40
+ bool ext_zvkg;
31
target_ulong resetvec;
41
bool ext_zvkned;
32
42
bool ext_zvknha;
33
target_ulong mhartid;
43
bool ext_zvknhb;
34
@@ -XXX,XX +XXX,XX @@ struct CPURISCVState {
44
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
35
target_ulong htval;
45
index XXXXXXX..XXXXXXX 100644
36
target_ulong htinst;
46
--- a/target/riscv/helper.h
37
target_ulong hgatp;
47
+++ b/target/riscv/helper.h
38
+ target_ulong hgeie;
48
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_5(vsha2cl64_vv, void, ptr, ptr, ptr, env, i32)
39
+ target_ulong hgeip;
49
40
uint64_t htimedelta;
50
DEF_HELPER_5(vsm3me_vv, void, ptr, ptr, ptr, env, i32)
41
51
DEF_HELPER_5(vsm3c_vi, void, ptr, ptr, i32, env, i32)
42
/* Upper 64-bits of 128-bit CSRs */
52
+
43
@@ -XXX,XX +XXX,XX @@ int riscv_cpu_write_elf32_note(WriteCoreDumpFunction f, CPUState *cs,
53
+DEF_HELPER_5(vghsh_vv, void, ptr, ptr, ptr, env, i32)
44
int riscv_cpu_gdb_read_register(CPUState *cpu, GByteArray *buf, int reg);
54
+DEF_HELPER_4(vgmul_vv, void, ptr, ptr, env, i32)
45
int riscv_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
55
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
46
bool riscv_cpu_fp_enabled(CPURISCVState *env);
56
index XXXXXXX..XXXXXXX 100644
47
+target_ulong riscv_cpu_get_geilen(CPURISCVState *env);
57
--- a/target/riscv/insn32.decode
48
+void riscv_cpu_set_geilen(CPURISCVState *env, target_ulong geilen);
58
+++ b/target/riscv/insn32.decode
49
bool riscv_cpu_vector_enabled(CPURISCVState *env);
59
@@ -XXX,XX +XXX,XX @@ vsha2cl_vv 101111 1 ..... ..... 010 ..... 1110111 @r_vm_1
50
bool riscv_cpu_virt_enabled(CPURISCVState *env);
60
# *** Zvksh vector crypto extension ***
51
void riscv_cpu_set_virt_enabled(CPURISCVState *env, bool enable);
61
vsm3me_vv 100000 1 ..... ..... 010 ..... 1110111 @r_vm_1
52
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
62
vsm3c_vi 101011 1 ..... ..... 010 ..... 1110111 @r_vm_1
53
index XXXXXXX..XXXXXXX 100644
63
+
54
--- a/target/riscv/cpu_bits.h
64
+# *** Zvkg vector crypto extension ***
55
+++ b/target/riscv/cpu_bits.h
65
+vghsh_vv 101100 1 ..... ..... 010 ..... 1110111 @r_vm_1
56
@@ -XXX,XX +XXX,XX @@ typedef enum RISCVException {
66
+vgmul_vv 101000 1 ..... 10001 010 ..... 1110111 @r2_vm_1
57
#define IRQ_M_EXT 11
58
#define IRQ_S_GEXT 12
59
#define IRQ_LOCAL_MAX 16
60
+#define IRQ_LOCAL_GUEST_MAX (TARGET_LONG_BITS - 1)
61
62
/* mip masks */
63
#define MIP_USIP (1 << IRQ_U_SOFT)
64
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
67
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
65
index XXXXXXX..XXXXXXX 100644
68
index XXXXXXX..XXXXXXX 100644
66
--- a/target/riscv/cpu.c
69
--- a/target/riscv/cpu.c
67
+++ b/target/riscv/cpu.c
70
+++ b/target/riscv/cpu.c
68
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
71
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
69
static void riscv_cpu_set_irq(void *opaque, int irq, int level)
72
ISA_EXT_DATA_ENTRY(zvfbfwma, PRIV_VERSION_1_12_0, ext_zvfbfwma),
70
{
73
ISA_EXT_DATA_ENTRY(zvfh, PRIV_VERSION_1_12_0, ext_zvfh),
71
RISCVCPU *cpu = RISCV_CPU(opaque);
74
ISA_EXT_DATA_ENTRY(zvfhmin, PRIV_VERSION_1_12_0, ext_zvfhmin),
72
+ CPURISCVState *env = &cpu->env;
75
+ ISA_EXT_DATA_ENTRY(zvkg, PRIV_VERSION_1_12_0, ext_zvkg),
73
76
ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
74
- switch (irq) {
77
ISA_EXT_DATA_ENTRY(zvknha, PRIV_VERSION_1_12_0, ext_zvknha),
75
- case IRQ_U_SOFT:
78
ISA_EXT_DATA_ENTRY(zvknhb, PRIV_VERSION_1_12_0, ext_zvknhb),
76
- case IRQ_S_SOFT:
79
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
77
- case IRQ_VS_SOFT:
80
* In principle Zve*x would also suffice here, were they supported
78
- case IRQ_M_SOFT:
81
* in qemu
79
- case IRQ_U_TIMER:
82
*/
80
- case IRQ_S_TIMER:
83
- if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha ||
81
- case IRQ_VS_TIMER:
84
- cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
82
- case IRQ_M_TIMER:
85
+ if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkg || cpu->cfg.ext_zvkned ||
83
- case IRQ_U_EXT:
86
+ cpu->cfg.ext_zvknha || cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
84
- case IRQ_S_EXT:
87
error_setg(errp,
85
- case IRQ_VS_EXT:
88
"Vector crypto extensions require V or Zve* extensions");
86
- case IRQ_M_EXT:
89
return;
87
- if (kvm_enabled()) {
90
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
88
- kvm_riscv_set_irq(cpu, irq, level);
91
/* Vector cryptography extensions */
89
- } else {
92
DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
90
- riscv_cpu_update_mip(cpu, 1 << irq, BOOL_TO_MASK(level));
93
DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
91
+ if (irq < IRQ_LOCAL_MAX) {
94
+ DEFINE_PROP_BOOL("x-zvkg", RISCVCPU, cfg.ext_zvkg, false),
92
+ switch (irq) {
95
DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
93
+ case IRQ_U_SOFT:
96
DEFINE_PROP_BOOL("x-zvknha", RISCVCPU, cfg.ext_zvknha, false),
94
+ case IRQ_S_SOFT:
97
DEFINE_PROP_BOOL("x-zvknhb", RISCVCPU, cfg.ext_zvknhb, false),
95
+ case IRQ_VS_SOFT:
98
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
96
+ case IRQ_M_SOFT:
99
index XXXXXXX..XXXXXXX 100644
97
+ case IRQ_U_TIMER:
100
--- a/target/riscv/vcrypto_helper.c
98
+ case IRQ_S_TIMER:
101
+++ b/target/riscv/vcrypto_helper.c
99
+ case IRQ_VS_TIMER:
102
@@ -XXX,XX +XXX,XX @@ void HELPER(vsm3c_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
100
+ case IRQ_M_TIMER:
103
vext_set_elems_1s(vd_vptr, vta, env->vl * esz, total_elems * esz);
101
+ case IRQ_U_EXT:
104
env->vstart = 0;
102
+ case IRQ_S_EXT:
105
}
103
+ case IRQ_VS_EXT:
106
+
104
+ case IRQ_M_EXT:
107
+void HELPER(vghsh_vv)(void *vd_vptr, void *vs1_vptr, void *vs2_vptr,
105
+ if (kvm_enabled()) {
108
+ CPURISCVState *env, uint32_t desc)
106
+ kvm_riscv_set_irq(cpu, irq, level);
109
+{
107
+ } else {
110
+ uint64_t *vd = vd_vptr;
108
+ riscv_cpu_update_mip(cpu, 1 << irq, BOOL_TO_MASK(level));
111
+ uint64_t *vs1 = vs1_vptr;
109
+ }
112
+ uint64_t *vs2 = vs2_vptr;
110
+ break;
113
+ uint32_t vta = vext_vta(desc);
111
+ default:
114
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4);
112
+ g_assert_not_reached();
115
+
113
}
116
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
114
- break;
117
+ uint64_t Y[2] = {vd[i * 2 + 0], vd[i * 2 + 1]};
115
- default:
118
+ uint64_t H[2] = {brev8(vs2[i * 2 + 0]), brev8(vs2[i * 2 + 1])};
116
+ } else if (irq < (IRQ_LOCAL_MAX + IRQ_LOCAL_GUEST_MAX)) {
119
+ uint64_t X[2] = {vs1[i * 2 + 0], vs1[i * 2 + 1]};
117
+ /* Require H-extension for handling guest local interrupts */
120
+ uint64_t Z[2] = {0, 0};
118
+ if (!riscv_has_ext(env, RVH)) {
121
+
119
+ g_assert_not_reached();
122
+ uint64_t S[2] = {brev8(Y[0] ^ X[0]), brev8(Y[1] ^ X[1])};
123
+
124
+ for (int j = 0; j < 128; j++) {
125
+ if ((S[j / 64] >> (j % 64)) & 1) {
126
+ Z[0] ^= H[0];
127
+ Z[1] ^= H[1];
128
+ }
129
+ bool reduce = ((H[1] >> 63) & 1);
130
+ H[1] = H[1] << 1 | H[0] >> 63;
131
+ H[0] = H[0] << 1;
132
+ if (reduce) {
133
+ H[0] ^= 0x87;
134
+ }
120
+ }
135
+ }
121
+
136
+
122
+ /* Compute bit position in HGEIP CSR */
137
+ vd[i * 2 + 0] = brev8(Z[0]);
123
+ irq = irq - IRQ_LOCAL_MAX + 1;
138
+ vd[i * 2 + 1] = brev8(Z[1]);
124
+ if (env->geilen < irq) {
139
+ }
125
+ g_assert_not_reached();
140
+ /* set tail elements to 1s */
141
+ vext_set_elems_1s(vd, vta, env->vl * 4, total_elems * 4);
142
+ env->vstart = 0;
143
+}
144
+
145
+void HELPER(vgmul_vv)(void *vd_vptr, void *vs2_vptr, CPURISCVState *env,
146
+ uint32_t desc)
147
+{
148
+ uint64_t *vd = vd_vptr;
149
+ uint64_t *vs2 = vs2_vptr;
150
+ uint32_t vta = vext_vta(desc);
151
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4);
152
+
153
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
154
+ uint64_t Y[2] = {brev8(vd[i * 2 + 0]), brev8(vd[i * 2 + 1])};
155
+ uint64_t H[2] = {brev8(vs2[i * 2 + 0]), brev8(vs2[i * 2 + 1])};
156
+ uint64_t Z[2] = {0, 0};
157
+
158
+ for (int j = 0; j < 128; j++) {
159
+ if ((Y[j / 64] >> (j % 64)) & 1) {
160
+ Z[0] ^= H[0];
161
+ Z[1] ^= H[1];
162
+ }
163
+ bool reduce = ((H[1] >> 63) & 1);
164
+ H[1] = H[1] << 1 | H[0] >> 63;
165
+ H[0] = H[0] << 1;
166
+ if (reduce) {
167
+ H[0] ^= 0x87;
168
+ }
126
+ }
169
+ }
127
+
170
+
128
+ /* Update HGEIP CSR */
171
+ vd[i * 2 + 0] = brev8(Z[0]);
129
+ env->hgeip &= ~((target_ulong)1 << irq);
172
+ vd[i * 2 + 1] = brev8(Z[1]);
130
+ if (level) {
131
+ env->hgeip |= (target_ulong)1 << irq;
132
+ }
133
+
134
+ /* Update mip.SGEIP bit */
135
+ riscv_cpu_update_mip(cpu, MIP_SGEIP,
136
+ BOOL_TO_MASK(!!(env->hgeie & env->hgeip)));
137
+ } else {
138
g_assert_not_reached();
139
}
140
}
141
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_init(Object *obj)
142
cpu_set_cpustate_pointers(cpu);
143
144
#ifndef CONFIG_USER_ONLY
145
- qdev_init_gpio_in(DEVICE(cpu), riscv_cpu_set_irq, IRQ_LOCAL_MAX);
146
+ qdev_init_gpio_in(DEVICE(cpu), riscv_cpu_set_irq,
147
+ IRQ_LOCAL_MAX + IRQ_LOCAL_GUEST_MAX);
148
#endif /* CONFIG_USER_ONLY */
149
}
150
151
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
152
index XXXXXXX..XXXXXXX 100644
153
--- a/target/riscv/cpu_helper.c
154
+++ b/target/riscv/cpu_helper.c
155
@@ -XXX,XX +XXX,XX @@ static int riscv_cpu_local_irq_pending(CPURISCVState *env)
156
target_ulong mstatus_mie = get_field(env->mstatus, MSTATUS_MIE);
157
target_ulong mstatus_sie = get_field(env->mstatus, MSTATUS_SIE);
158
159
- target_ulong pending = env->mip & env->mie;
160
+ target_ulong vsgemask =
161
+ (target_ulong)1 << get_field(env->hstatus, HSTATUS_VGEIN);
162
+ target_ulong vsgein = (env->hgeip & vsgemask) ? MIP_VSEIP : 0;
163
+
164
+ target_ulong pending = (env->mip | vsgein) & env->mie;
165
166
target_ulong mie = env->priv < PRV_M ||
167
(env->priv == PRV_M && mstatus_mie);
168
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_swap_hypervisor_regs(CPURISCVState *env)
169
}
170
}
171
172
+target_ulong riscv_cpu_get_geilen(CPURISCVState *env)
173
+{
174
+ if (!riscv_has_ext(env, RVH)) {
175
+ return 0;
176
+ }
173
+ }
177
+
174
+ /* set tail elements to 1s */
178
+ return env->geilen;
175
+ vext_set_elems_1s(vd, vta, env->vl * 4, total_elems * 4);
179
+}
176
+ env->vstart = 0;
180
+
177
+}
181
+void riscv_cpu_set_geilen(CPURISCVState *env, target_ulong geilen)
178
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
182
+{
179
index XXXXXXX..XXXXXXX 100644
183
+ if (!riscv_has_ext(env, RVH)) {
180
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
184
+ return;
181
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
185
+ }
182
@@ -XXX,XX +XXX,XX @@ static inline bool vsm3c_check(DisasContext *s, arg_rmrr *a)
186
+
183
187
+ if (geilen > (TARGET_LONG_BITS - 1)) {
184
GEN_VV_UNMASKED_TRANS(vsm3me_vv, vsm3me_check, ZVKSH_EGS)
188
+ return;
185
GEN_VI_UNMASKED_TRANS(vsm3c_vi, vsm3c_check, ZVKSH_EGS)
189
+ }
186
+
190
+
187
+/*
191
+ env->geilen = geilen;
188
+ * Zvkg
192
+}
189
+ */
193
+
190
+
194
bool riscv_cpu_virt_enabled(CPURISCVState *env)
191
+#define ZVKG_EGS 4
195
{
192
+
196
if (!riscv_has_ext(env, RVH)) {
193
+static bool vgmul_check(DisasContext *s, arg_rmr *a)
197
@@ -XXX,XX +XXX,XX @@ uint32_t riscv_cpu_update_mip(RISCVCPU *cpu, uint32_t mask, uint32_t value)
194
+{
198
{
195
+ int egw_bytes = ZVKG_EGS << s->sew;
199
CPURISCVState *env = &cpu->env;
196
+ return s->cfg_ptr->ext_zvkg == true &&
200
CPUState *cs = CPU(cpu);
197
+ vext_check_isa_ill(s) &&
201
- uint32_t old = env->mip;
198
+ require_rvv(s) &&
202
+ uint32_t gein, vsgein = 0, old = env->mip;
199
+ MAXSZ(s) >= egw_bytes &&
203
bool locked = false;
200
+ vext_check_ss(s, a->rd, a->rs2, a->vm) &&
204
201
+ s->sew == MO_32;
205
+ if (riscv_cpu_virt_enabled(env)) {
202
+}
206
+ gein = get_field(env->hstatus, HSTATUS_VGEIN);
203
+
207
+ vsgein = (env->hgeip & (1ULL << gein)) ? MIP_VSEIP : 0;
204
+GEN_V_UNMASKED_TRANS(vgmul_vv, vgmul_check, ZVKG_EGS)
208
+ }
205
+
209
+
206
+static bool vghsh_check(DisasContext *s, arg_rmrr *a)
210
if (!qemu_mutex_iothread_locked()) {
207
+{
211
locked = true;
208
+ int egw_bytes = ZVKG_EGS << s->sew;
212
qemu_mutex_lock_iothread();
209
+ return s->cfg_ptr->ext_zvkg == true &&
213
@@ -XXX,XX +XXX,XX @@ uint32_t riscv_cpu_update_mip(RISCVCPU *cpu, uint32_t mask, uint32_t value)
210
+ opivv_check(s, a) &&
214
211
+ MAXSZ(s) >= egw_bytes &&
215
env->mip = (env->mip & ~mask) | (value & mask);
212
+ s->sew == MO_32;
216
213
+}
217
- if (env->mip) {
214
+
218
+ if (env->mip | vsgein) {
215
+GEN_VV_UNMASKED_TRANS(vghsh_vv, vghsh_check, ZVKG_EGS)
219
cpu_interrupt(cs, CPU_INTERRUPT_HARD);
220
} else {
221
cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
222
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
223
index XXXXXXX..XXXXXXX 100644
224
--- a/target/riscv/csr.c
225
+++ b/target/riscv/csr.c
226
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_mip(CPURISCVState *env, int csrno,
227
RISCVCPU *cpu = env_archcpu(env);
228
/* Allow software control of delegable interrupts not claimed by hardware */
229
target_ulong mask = write_mask & delegable_ints & ~env->miclaim;
230
- uint32_t old_mip;
231
+ uint32_t gin, old_mip;
232
233
if (mask) {
234
old_mip = riscv_cpu_update_mip(cpu, mask, (new_value & mask));
235
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_mip(CPURISCVState *env, int csrno,
236
old_mip = env->mip;
237
}
238
239
+ if (csrno != CSR_HVIP) {
240
+ gin = get_field(env->hstatus, HSTATUS_VGEIN);
241
+ old_mip |= (env->hgeip & ((target_ulong)1 << gin)) ? MIP_VSEIP : 0;
242
+ }
243
+
244
if (ret_value) {
245
*ret_value = old_mip;
246
}
247
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_vsip(CPURISCVState *env, int csrno,
248
target_ulong new_value, target_ulong write_mask)
249
{
250
/* Shift the S bits to their VS bit location in mip */
251
- int ret = rmw_mip(env, 0, ret_value, new_value << 1,
252
+ int ret = rmw_mip(env, csrno, ret_value, new_value << 1,
253
(write_mask << 1) & vsip_writable_mask & env->hideleg);
254
255
if (ret_value) {
256
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_sip(CPURISCVState *env, int csrno,
257
if (riscv_cpu_virt_enabled(env)) {
258
ret = rmw_vsip(env, CSR_VSIP, ret_value, new_value, write_mask);
259
} else {
260
- ret = rmw_mip(env, CSR_MSTATUS, ret_value, new_value,
261
+ ret = rmw_mip(env, csrno, ret_value, new_value,
262
write_mask & env->mideleg & sip_writable_mask);
263
}
264
265
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_hvip(CPURISCVState *env, int csrno,
266
target_ulong *ret_value,
267
target_ulong new_value, target_ulong write_mask)
268
{
269
- int ret = rmw_mip(env, 0, ret_value, new_value,
270
+ int ret = rmw_mip(env, csrno, ret_value, new_value,
271
write_mask & hvip_writable_mask);
272
273
if (ret_value) {
274
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_hip(CPURISCVState *env, int csrno,
275
target_ulong *ret_value,
276
target_ulong new_value, target_ulong write_mask)
277
{
278
- int ret = rmw_mip(env, 0, ret_value, new_value,
279
+ int ret = rmw_mip(env, csrno, ret_value, new_value,
280
write_mask & hip_writable_mask);
281
282
if (ret_value) {
283
@@ -XXX,XX +XXX,XX @@ static RISCVException write_hcounteren(CPURISCVState *env, int csrno,
284
return RISCV_EXCP_NONE;
285
}
286
287
-static RISCVException write_hgeie(CPURISCVState *env, int csrno,
288
- target_ulong val)
289
+static RISCVException read_hgeie(CPURISCVState *env, int csrno,
290
+ target_ulong *val)
291
{
292
if (val) {
293
- qemu_log_mask(LOG_UNIMP, "No support for a non-zero GEILEN.");
294
+ *val = env->hgeie;
295
}
296
return RISCV_EXCP_NONE;
297
}
298
299
+static RISCVException write_hgeie(CPURISCVState *env, int csrno,
300
+ target_ulong val)
301
+{
302
+ /* Only GEILEN:1 bits implemented and BIT0 is never implemented */
303
+ val &= ((((target_ulong)1) << env->geilen) - 1) << 1;
304
+ env->hgeie = val;
305
+ /* Update mip.SGEIP bit */
306
+ riscv_cpu_update_mip(env_archcpu(env), MIP_SGEIP,
307
+ BOOL_TO_MASK(!!(env->hgeie & env->hgeip)));
308
+ return RISCV_EXCP_NONE;
309
+}
310
+
311
static RISCVException read_htval(CPURISCVState *env, int csrno,
312
target_ulong *val)
313
{
314
@@ -XXX,XX +XXX,XX @@ static RISCVException write_htinst(CPURISCVState *env, int csrno,
315
return RISCV_EXCP_NONE;
316
}
317
318
-static RISCVException write_hgeip(CPURISCVState *env, int csrno,
319
- target_ulong val)
320
+static RISCVException read_hgeip(CPURISCVState *env, int csrno,
321
+ target_ulong *val)
322
{
323
if (val) {
324
- qemu_log_mask(LOG_UNIMP, "No support for a non-zero GEILEN.");
325
+ *val = env->hgeip;
326
}
327
return RISCV_EXCP_NONE;
328
}
329
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
330
[CSR_HIP] = { "hip", hmode, NULL, NULL, rmw_hip },
331
[CSR_HIE] = { "hie", hmode, read_hie, write_hie },
332
[CSR_HCOUNTEREN] = { "hcounteren", hmode, read_hcounteren, write_hcounteren },
333
- [CSR_HGEIE] = { "hgeie", hmode, read_zero, write_hgeie },
334
+ [CSR_HGEIE] = { "hgeie", hmode, read_hgeie, write_hgeie },
335
[CSR_HTVAL] = { "htval", hmode, read_htval, write_htval },
336
[CSR_HTINST] = { "htinst", hmode, read_htinst, write_htinst },
337
- [CSR_HGEIP] = { "hgeip", hmode, read_zero, write_hgeip },
338
+ [CSR_HGEIP] = { "hgeip", hmode, read_hgeip, NULL },
339
[CSR_HGATP] = { "hgatp", hmode, read_hgatp, write_hgatp },
340
[CSR_HTIMEDELTA] = { "htimedelta", hmode, read_htimedelta, write_htimedelta },
341
[CSR_HTIMEDELTAH] = { "htimedeltah", hmode32, read_htimedeltah, write_htimedeltah },
342
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
343
index XXXXXXX..XXXXXXX 100644
344
--- a/target/riscv/machine.c
345
+++ b/target/riscv/machine.c
346
@@ -XXX,XX +XXX,XX @@ static bool hyper_needed(void *opaque)
347
348
static const VMStateDescription vmstate_hyper = {
349
.name = "cpu/hyper",
350
- .version_id = 1,
351
- .minimum_version_id = 1,
352
+ .version_id = 2,
353
+ .minimum_version_id = 2,
354
.needed = hyper_needed,
355
.fields = (VMStateField[]) {
356
VMSTATE_UINTTL(env.hstatus, RISCVCPU),
357
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_hyper = {
358
VMSTATE_UINTTL(env.htval, RISCVCPU),
359
VMSTATE_UINTTL(env.htinst, RISCVCPU),
360
VMSTATE_UINTTL(env.hgatp, RISCVCPU),
361
+ VMSTATE_UINTTL(env.hgeie, RISCVCPU),
362
+ VMSTATE_UINTTL(env.hgeip, RISCVCPU),
363
VMSTATE_UINT64(env.htimedelta, RISCVCPU),
364
365
VMSTATE_UINT64(env.vsstatus, RISCVCPU),
--
2.41.0
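The inner loops of the vghsh.vv/vgmul.vv helpers above are a bit-serial carry-less multiply in GF(2^128): Z accumulates H wherever the corresponding bit of the multiplier is set, and H is doubled with a reduction by 0x87 (the GHASH polynomial as it appears after the helper's `brev8()` byte-level bit reversals). A self-contained sketch of that multiply, free of QEMU types and using the same two-by-64-bit operand layout as the helper:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Shift-and-reduce product z = y * h in GF(2^128), little half first,
 * reducing by 0x87 exactly as the vghsh/vgmul helpers do between their
 * brev8() calls. */
static void gf128_mul(uint64_t z[2], const uint64_t y[2], const uint64_t h[2])
{
    uint64_t hh[2] = { h[0], h[1] };

    z[0] = z[1] = 0;
    for (int j = 0; j < 128; j++) {
        if ((y[j / 64] >> (j % 64)) & 1) {
            z[0] ^= hh[0];
            z[1] ^= hh[1];
        }
        bool reduce = (hh[1] >> 63) & 1;   /* top bit about to fall out */
        hh[1] = hh[1] << 1 | hh[0] >> 63;
        hh[0] = hh[0] << 1;
        if (reduce) {
            hh[0] ^= 0x87;                 /* fold the modulus back in */
        }
    }
}
```

Multiplying by the polynomial 1 (`y = {1, 0}`) returns H unchanged, which is a quick sanity check on the bit ordering.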
From: Max Chou <max.chou@sifive.com>

Allows sharing of sm4_subword between different targets.

Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Chou <max.chou@sifive.com>
Message-ID: <20230711165917.2629866-14-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
include/crypto/sm4.h | 8 ++++++++
target/arm/tcg/crypto_helper.c | 10 ++--------
2 files changed, 10 insertions(+), 8 deletions(-)
22
diff --git a/hw/core/generic-loader.c b/hw/core/generic-loader.c
16
diff --git a/include/crypto/sm4.h b/include/crypto/sm4.h
23
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
24
--- a/hw/core/generic-loader.c
18
--- a/include/crypto/sm4.h
25
+++ b/hw/core/generic-loader.c
19
+++ b/include/crypto/sm4.h
26
@@ -XXX,XX +XXX,XX @@ static void generic_loader_reset(void *opaque)
20
@@ -XXX,XX +XXX,XX @@
27
}
21
28
22
extern const uint8_t sm4_sbox[256];
29
if (s->data_len) {
23
30
- assert(s->data_len < sizeof(s->data));
24
+static inline uint32_t sm4_subword(uint32_t word)
31
+ assert(s->data_len <= sizeof(s->data));
25
+{
32
dma_memory_write(s->cpu->as, s->addr, &s->data, s->data_len,
26
+ return sm4_sbox[word & 0xff] |
33
MEMTXATTRS_UNSPECIFIED);
27
+ sm4_sbox[(word >> 8) & 0xff] << 8 |
28
+ sm4_sbox[(word >> 16) & 0xff] << 16 |
29
+ sm4_sbox[(word >> 24) & 0xff] << 24;
30
+}
31
+
32
#endif
33
diff --git a/target/arm/tcg/crypto_helper.c b/target/arm/tcg/crypto_helper.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/tcg/crypto_helper.c
36
+++ b/target/arm/tcg/crypto_helper.c
37
@@ -XXX,XX +XXX,XX @@ static void do_crypto_sm4e(uint64_t *rd, uint64_t *rn, uint64_t *rm)
38
CR_ST_WORD(d, (i + 3) % 4) ^
39
CR_ST_WORD(n, i);
40
41
- t = sm4_sbox[t & 0xff] |
42
- sm4_sbox[(t >> 8) & 0xff] << 8 |
43
- sm4_sbox[(t >> 16) & 0xff] << 16 |
44
- sm4_sbox[(t >> 24) & 0xff] << 24;
45
+ t = sm4_subword(t);
46
47
CR_ST_WORD(d, i) ^= t ^ rol32(t, 2) ^ rol32(t, 10) ^ rol32(t, 18) ^
48
rol32(t, 24);
49
@@ -XXX,XX +XXX,XX @@ static void do_crypto_sm4ekey(uint64_t *rd, uint64_t *rn, uint64_t *rm)
50
CR_ST_WORD(d, (i + 3) % 4) ^
51
CR_ST_WORD(m, i);
52
53
- t = sm4_sbox[t & 0xff] |
54
- sm4_sbox[(t >> 8) & 0xff] << 8 |
55
- sm4_sbox[(t >> 16) & 0xff] << 16 |
56
- sm4_sbox[(t >> 24) & 0xff] << 24;
57
+ t = sm4_subword(t);
58
59
CR_ST_WORD(d, i) ^= t ^ rol32(t, 13) ^ rol32(t, 23);
34
}
60
}
--
2.41.0
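The `sm4_subword()` routine factored out above applies an 8-bit S-box to each byte lane of a 32-bit word independently. The lane structure can be demonstrated with a stand-in S-box — a bitwise complement here, purely to keep the example short; the real table is `sm4_sbox[]` in crypto/sm4.c:

```c
#include <assert.h>
#include <stdint.h>

/* Toy 8-bit "S-box": bitwise complement.  NOT the real SM4 S-box. */
static uint8_t toy_sbox(uint8_t b)
{
    return (uint8_t)~b;
}

/* Same byte-lane composition as sm4_subword(), with the toy S-box. */
static uint32_t toy_subword(uint32_t word)
{
    return (uint32_t)toy_sbox(word & 0xff) |
           (uint32_t)toy_sbox((word >> 8) & 0xff) << 8 |
           (uint32_t)toy_sbox((word >> 16) & 0xff) << 16 |
           (uint32_t)toy_sbox((word >> 24) & 0xff) << 24;
}
```

Because each lane is independent, swapping in the real 256-entry table changes only `toy_sbox`, not the packing logic — which is exactly why the patch can share one `sm4_subword()` between the Arm and RISC-V implementations.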
From: Max Chou <max.chou@sifive.com>

Adds sm4_ck constant for use in sm4 cryptography across different targets.

Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Signed-off-by: Max Chou <max.chou@sifive.com>
Message-ID: <20230711165917.2629866-15-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
include/crypto/sm4.h | 1 +
crypto/sm4.c | 10 ++++++++++
2 files changed, 11 insertions(+)
13
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
15
diff --git a/include/crypto/sm4.h b/include/crypto/sm4.h
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/target/riscv/cpu.h
17
--- a/include/crypto/sm4.h
16
+++ b/target/riscv/cpu.h
18
+++ b/include/crypto/sm4.h
17
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUClass {
19
@@ -XXX,XX +XXX,XX @@
18
DeviceReset parent_reset;
20
#define QEMU_SM4_H
21
22
extern const uint8_t sm4_sbox[256];
23
+extern const uint32_t sm4_ck[32];
24
25
static inline uint32_t sm4_subword(uint32_t word)
26
{
27
diff --git a/crypto/sm4.c b/crypto/sm4.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/crypto/sm4.c
30
+++ b/crypto/sm4.c
31
@@ -XXX,XX +XXX,XX @@ uint8_t const sm4_sbox[] = {
32
0x79, 0xee, 0x5f, 0x3e, 0xd7, 0xcb, 0x39, 0x48,
19
};
33
};
20
34
21
+struct RISCVCPUConfig {
35
+uint32_t const sm4_ck[] = {
22
+ bool ext_i;
36
+ 0x00070e15, 0x1c232a31, 0x383f464d, 0x545b6269,
23
+ bool ext_e;
37
+ 0x70777e85, 0x8c939aa1, 0xa8afb6bd, 0xc4cbd2d9,
24
+ bool ext_g;
38
+ 0xe0e7eef5, 0xfc030a11, 0x181f262d, 0x343b4249,
25
+ bool ext_m;
39
+ 0x50575e65, 0x6c737a81, 0x888f969d, 0xa4abb2b9,
26
+ bool ext_a;
40
+ 0xc0c7ced5, 0xdce3eaf1, 0xf8ff060d, 0x141b2229,
27
+ bool ext_f;
41
+ 0x30373e45, 0x4c535a61, 0x686f767d, 0x848b9299,
28
+ bool ext_d;
42
+ 0xa0a7aeb5, 0xbcc3cad1, 0xd8dfe6ed, 0xf4fb0209,
29
+ bool ext_c;
43
+ 0x10171e25, 0x2c333a41, 0x484f565d, 0x646b7279
30
+ bool ext_s;
31
+ bool ext_u;
32
+ bool ext_h;
33
+ bool ext_j;
34
+ bool ext_v;
35
+ bool ext_zba;
36
+ bool ext_zbb;
37
+ bool ext_zbc;
38
+ bool ext_zbs;
39
+ bool ext_counters;
40
+ bool ext_ifencei;
41
+ bool ext_icsr;
42
+ bool ext_zfh;
43
+ bool ext_zfhmin;
44
+ bool ext_zve32f;
45
+ bool ext_zve64f;
46
+
47
+ char *priv_spec;
48
+ char *user_spec;
49
+ char *bext_spec;
50
+ char *vext_spec;
51
+ uint16_t vlen;
52
+ uint16_t elen;
53
+ bool mmu;
54
+ bool pmp;
55
+ bool epmp;
56
+ uint64_t resetvec;
57
+};
44
+};
58
+
59
+typedef struct RISCVCPUConfig RISCVCPUConfig;
60
+
61
/**
62
* RISCVCPU:
63
* @env: #CPURISCVState
64
@@ -XXX,XX +XXX,XX @@ struct RISCVCPU {
65
char *dyn_vreg_xml;
66
67
/* Configuration Settings */
68
- struct {
69
- bool ext_i;
70
- bool ext_e;
71
- bool ext_g;
72
- bool ext_m;
73
- bool ext_a;
74
- bool ext_f;
75
- bool ext_d;
76
- bool ext_c;
77
- bool ext_s;
78
- bool ext_u;
79
- bool ext_h;
80
- bool ext_j;
81
- bool ext_v;
82
- bool ext_zba;
83
- bool ext_zbb;
84
- bool ext_zbc;
85
- bool ext_zbs;
86
- bool ext_counters;
87
- bool ext_ifencei;
88
- bool ext_icsr;
89
- bool ext_zfh;
90
- bool ext_zfhmin;
91
- bool ext_zve32f;
92
- bool ext_zve64f;
93
-
94
- char *priv_spec;
95
- char *user_spec;
96
- char *bext_spec;
97
- char *vext_spec;
98
- uint16_t vlen;
99
- uint16_t elen;
100
- bool mmu;
101
- bool pmp;
102
- bool epmp;
103
- uint64_t resetvec;
104
- } cfg;
105
+ RISCVCPUConfig cfg;
106
};
107
108
static inline int riscv_has_ext(CPURISCVState *env, target_ulong ext)
--
2.41.0
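The `sm4_ck[]` table added above is not arbitrary: per the SM4 specification, byte j of CK_i is (4i + j) * 7 mod 256, packed big-endian, so the whole table can be regenerated and cross-checked against the literals in the patch. A sketch of that generator:

```c
#include <assert.h>
#include <stdint.h>

/* Regenerate SM4 CK constant i: pack the four bytes (4i+0)*7, (4i+1)*7,
 * (4i+2)*7, (4i+3)*7 (mod 256), most significant byte first. */
static uint32_t sm4_ck_gen(int i)
{
    uint32_t w = 0;
    for (int j = 0; j < 4; j++) {
        w = (w << 8) | (uint8_t)((4 * i + j) * 7);
    }
    return w;
}
```

The first entry comes out as 0x00070e15 and the last as 0x646b7279, matching the first and final values in the table above.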
From: Max Chou <max.chou@sifive.com>

This commit adds support for the Zvksed vector-crypto extension, which
consists of the following instructions:

* vsm4k.vi
* vsm4r.[vv,vs]

Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.

Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
[lawrence.hunter@codethink.co.uk: Moved SM4 functions from
crypto_helper.c to vcrypto_helper.c]
[nazar.kazakov@codethink.co.uk: Added alignment checks, refactored code to
use macros, and minor style changes]
Signed-off-by: Max Chou <max.chou@sifive.com>
Message-ID: <20230711165917.2629866-16-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/cpu_cfg.h | 1 +
target/riscv/helper.h | 4 +
target/riscv/insn32.decode | 5 +
target/riscv/cpu.c | 5 +-
target/riscv/vcrypto_helper.c | 127 +++++++++++++++++++++++
target/riscv/insn_trans/trans_rvvk.c.inc | 43 ++++++++
6 files changed, 184 insertions(+), 1 deletion(-)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
30
22
--- a/target/riscv/cpu.h
31
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
23
+++ b/target/riscv/cpu.h
32
index XXXXXXX..XXXXXXX 100644
24
@@ -XXX,XX +XXX,XX @@ struct CPURISCVState {
33
--- a/target/riscv/cpu_cfg.h
25
target_ulong mcause;
34
+++ b/target/riscv/cpu_cfg.h
26
target_ulong mtval; /* since: priv-1.10.0 */
35
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
27
36
bool ext_zvkned;
28
+ /* Machine and Supervisor interrupt priorities */
37
bool ext_zvknha;
29
+ uint8_t miprio[64];
38
bool ext_zvknhb;
30
+ uint8_t siprio[64];
39
+ bool ext_zvksed;
31
+
40
bool ext_zvksh;
32
/* Hypervisor CSRs */
41
bool ext_zmmul;
33
target_ulong hstatus;
42
bool ext_zvfbfmin;
34
target_ulong hedeleg;
43
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
35
@@ -XXX,XX +XXX,XX @@ struct CPURISCVState {
44
index XXXXXXX..XXXXXXX 100644
36
target_ulong hgeip;
45
--- a/target/riscv/helper.h
37
uint64_t htimedelta;
46
+++ b/target/riscv/helper.h
38
47
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_5(vsm3c_vi, void, ptr, ptr, i32, env, i32)
39
+ /* Hypervisor controlled virtual interrupt priorities */
48
40
+ uint8_t hviprio[64];
49
DEF_HELPER_5(vghsh_vv, void, ptr, ptr, ptr, env, i32)
41
+
50
DEF_HELPER_4(vgmul_vv, void, ptr, ptr, env, i32)
42
/* Upper 64-bits of 128-bit CSRs */
51
+
43
uint64_t mscratchh;
52
+DEF_HELPER_5(vsm4k_vi, void, ptr, ptr, i32, env, i32)
44
uint64_t sscratchh;
53
+DEF_HELPER_4(vsm4r_vv, void, ptr, ptr, env, i32)
45
@@ -XXX,XX +XXX,XX @@ int riscv_cpu_write_elf32_note(WriteCoreDumpFunction f, CPUState *cs,
54
+DEF_HELPER_4(vsm4r_vs, void, ptr, ptr, env, i32)
46
int cpuid, void *opaque);
55
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
47
int riscv_cpu_gdb_read_register(CPUState *cpu, GByteArray *buf, int reg);
56
index XXXXXXX..XXXXXXX 100644
48
int riscv_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
57
--- a/target/riscv/insn32.decode
49
+int riscv_cpu_hviprio_index2irq(int index, int *out_irq, int *out_rdzero);
58
+++ b/target/riscv/insn32.decode
50
+uint8_t riscv_cpu_default_priority(int irq);
59
@@ -XXX,XX +XXX,XX @@ vsm3c_vi 101011 1 ..... ..... 010 ..... 1110111 @r_vm_1
51
+int riscv_cpu_mirq_pending(CPURISCVState *env);
60
# *** Zvkg vector crypto extension ***
52
+int riscv_cpu_sirq_pending(CPURISCVState *env);
61
vghsh_vv 101100 1 ..... ..... 010 ..... 1110111 @r_vm_1
53
+int riscv_cpu_vsirq_pending(CPURISCVState *env);
62
vgmul_vv 101000 1 ..... 10001 010 ..... 1110111 @r2_vm_1
54
bool riscv_cpu_fp_enabled(CPURISCVState *env);
63
+
55
target_ulong riscv_cpu_get_geilen(CPURISCVState *env);
64
+# *** Zvksed vector crypto extension ***
56
void riscv_cpu_set_geilen(CPURISCVState *env, target_ulong geilen);
65
+vsm4k_vi 100001 1 ..... ..... 010 ..... 1110111 @r_vm_1
66
+vsm4r_vv 101000 1 ..... 10000 010 ..... 1110111 @r2_vm_1
67
+vsm4r_vs 101001 1 ..... 10000 010 ..... 1110111 @r2_vm_1
57
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
68
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
58
index XXXXXXX..XXXXXXX 100644
69
index XXXXXXX..XXXXXXX 100644
59
--- a/target/riscv/cpu.c
70
--- a/target/riscv/cpu.c
60
+++ b/target/riscv/cpu.c
71
+++ b/target/riscv/cpu.c
61
@@ -XXX,XX +XXX,XX @@ void restore_state_to_opc(CPURISCVState *env, TranslationBlock *tb,
72
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
62
73
ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
63
static void riscv_cpu_reset(DeviceState *dev)
74
ISA_EXT_DATA_ENTRY(zvknha, PRIV_VERSION_1_12_0, ext_zvknha),
64
{
75
ISA_EXT_DATA_ENTRY(zvknhb, PRIV_VERSION_1_12_0, ext_zvknhb),
65
+#ifndef CONFIG_USER_ONLY
76
+ ISA_EXT_DATA_ENTRY(zvksed, PRIV_VERSION_1_12_0, ext_zvksed),
66
+ uint8_t iprio;
77
ISA_EXT_DATA_ENTRY(zvksh, PRIV_VERSION_1_12_0, ext_zvksh),
67
+ int i, irq, rdzero;
78
ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
68
+#endif
79
ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
69
CPUState *cs = CPU(dev);
80
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
70
RISCVCPU *cpu = RISCV_CPU(cs);
81
* in qemu
71
RISCVCPUClass *mcc = RISCV_CPU_GET_CLASS(cpu);
82
*/
72
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset(DeviceState *dev)
83
if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkg || cpu->cfg.ext_zvkned ||
73
env->miclaim = MIP_SGEIP;
84
- cpu->cfg.ext_zvknha || cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
74
env->pc = env->resetvec;
85
+ cpu->cfg.ext_zvknha || cpu->cfg.ext_zvksed || cpu->cfg.ext_zvksh) &&
75
env->two_stage_lookup = false;
86
+ !cpu->cfg.ext_zve32f) {
76
+
87
error_setg(errp,
77
+ /* Initialize default priorities of local interrupts. */
88
"Vector crypto extensions require V or Zve* extensions");
78
+ for (i = 0; i < ARRAY_SIZE(env->miprio); i++) {
89
return;
79
+ iprio = riscv_cpu_default_priority(i);
90
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
80
+ env->miprio[i] = (i == IRQ_M_EXT) ? 0 : iprio;
91
DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
81
+ env->siprio[i] = (i == IRQ_S_EXT) ? 0 : iprio;
92
DEFINE_PROP_BOOL("x-zvknha", RISCVCPU, cfg.ext_zvknha, false),
82
+ env->hviprio[i] = 0;
93
DEFINE_PROP_BOOL("x-zvknhb", RISCVCPU, cfg.ext_zvknhb, false),
83
+ }
94
+ DEFINE_PROP_BOOL("x-zvksed", RISCVCPU, cfg.ext_zvksed, false),
84
+ i = 0;
95
DEFINE_PROP_BOOL("x-zvksh", RISCVCPU, cfg.ext_zvksh, false),
85
+ while (!riscv_cpu_hviprio_index2irq(i, &irq, &rdzero)) {
96
86
+ if (!rdzero) {
97
DEFINE_PROP_END_OF_LIST(),
87
+ env->hviprio[irq] = env->miprio[irq];
98
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
88
+ }
99
index XXXXXXX..XXXXXXX 100644
89
+ i++;
100
--- a/target/riscv/vcrypto_helper.c
90
+ }
101
+++ b/target/riscv/vcrypto_helper.c
91
/* mmte is supposed to have pm.current hardwired to 1 */
102
@@ -XXX,XX +XXX,XX @@
92
env->mmte |= (PM_EXT_INITIAL | MMTE_M_PM_CURRENT);
103
#include "cpu.h"
93
#endif
104
#include "crypto/aes.h"
94
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
105
#include "crypto/aes-round.h"
95
index XXXXXXX..XXXXXXX 100644
106
+#include "crypto/sm4.h"
96
--- a/target/riscv/cpu_helper.c
107
#include "exec/memop.h"
97
+++ b/target/riscv/cpu_helper.c
108
#include "exec/exec-all.h"
98
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_update_mask(CPURISCVState *env)
109
#include "exec/helper-proto.h"
110
@@ -XXX,XX +XXX,XX @@ void HELPER(vgmul_vv)(void *vd_vptr, void *vs2_vptr, CPURISCVState *env,
111
vext_set_elems_1s(vd, vta, env->vl * 4, total_elems * 4);
112
env->vstart = 0;
99
}
113
}
100
114
+
101
#ifndef CONFIG_USER_ONLY
115
+void HELPER(vsm4k_vi)(void *vd, void *vs2, uint32_t uimm5, CPURISCVState *env,
102
-static int riscv_cpu_local_irq_pending(CPURISCVState *env)
116
+ uint32_t desc)
117
+{
118
+ const uint32_t egs = 4;
119
+ uint32_t rnd = uimm5 & 0x7;
120
+ uint32_t group_start = env->vstart / egs;
121
+ uint32_t group_end = env->vl / egs;
122
+ uint32_t esz = sizeof(uint32_t);
123
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
124
+
125
+ for (uint32_t i = group_start; i < group_end; ++i) {
126
+ uint32_t vstart = i * egs;
127
+ uint32_t vend = (i + 1) * egs;
128
+ uint32_t rk[4] = {0};
129
+ uint32_t tmp[8] = {0};
130
+
131
+ for (uint32_t j = vstart; j < vend; ++j) {
132
+ rk[j - vstart] = *((uint32_t *)vs2 + H4(j));
133
+ }
134
+
135
+ for (uint32_t j = 0; j < egs; ++j) {
136
+ tmp[j] = rk[j];
137
+ }
138
+
139
+ for (uint32_t j = 0; j < egs; ++j) {
140
+ uint32_t b, s;
141
+ b = tmp[j + 1] ^ tmp[j + 2] ^ tmp[j + 3] ^ sm4_ck[rnd * 4 + j];
142
+
143
+ s = sm4_subword(b);
144
+
145
+ tmp[j + 4] = tmp[j] ^ (s ^ rol32(s, 13) ^ rol32(s, 23));
146
+ }
147
+
148
+ for (uint32_t j = vstart; j < vend; ++j) {
149
+ *((uint32_t *)vd + H4(j)) = tmp[egs + (j - vstart)];
150
+ }
151
+ }
152
+
153
+ env->vstart = 0;
154
+ /* set tail elements to 1s */
155
+ vext_set_elems_1s(vd, vext_vta(desc), env->vl * esz, total_elems * esz);
156
+}
157
+
158
+static void do_sm4_round(uint32_t *rk, uint32_t *buf)
159
+{
160
+ const uint32_t egs = 4;
161
+ uint32_t s, b;
162
+
163
+ for (uint32_t j = egs; j < egs * 2; ++j) {
164
+ b = buf[j - 3] ^ buf[j - 2] ^ buf[j - 1] ^ rk[j - 4];
165
+
166
+ s = sm4_subword(b);
167
+
168
+ buf[j] = buf[j - 4] ^ (s ^ rol32(s, 2) ^ rol32(s, 10) ^ rol32(s, 18) ^
169
+ rol32(s, 24));
170
+ }
171
+}
172
+
173
+void HELPER(vsm4r_vv)(void *vd, void *vs2, CPURISCVState *env, uint32_t desc)
174
+{
175
+ const uint32_t egs = 4;
176
+ uint32_t group_start = env->vstart / egs;
177
+ uint32_t group_end = env->vl / egs;
178
+ uint32_t esz = sizeof(uint32_t);
179
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
180
+
181
+ for (uint32_t i = group_start; i < group_end; ++i) {
182
+ uint32_t vstart = i * egs;
183
+ uint32_t vend = (i + 1) * egs;
184
+ uint32_t rk[4] = {0};
185
+ uint32_t tmp[8] = {0};
186
+
187
+ for (uint32_t j = vstart; j < vend; ++j) {
188
+ rk[j - vstart] = *((uint32_t *)vs2 + H4(j));
189
+ }
190
+
191
+ for (uint32_t j = vstart; j < vend; ++j) {
192
+ tmp[j - vstart] = *((uint32_t *)vd + H4(j));
193
+ }
194
+
195
+ do_sm4_round(rk, tmp);
196
+
197
+ for (uint32_t j = vstart; j < vend; ++j) {
198
+ *((uint32_t *)vd + H4(j)) = tmp[egs + (j - vstart)];
199
+ }
200
+ }
201
+
202
+ env->vstart = 0;
203
+ /* set tail elements to 1s */
204
+ vext_set_elems_1s(vd, vext_vta(desc), env->vl * esz, total_elems * esz);
205
+}
206
+
207
+void HELPER(vsm4r_vs)(void *vd, void *vs2, CPURISCVState *env, uint32_t desc)
208
+{
209
+ const uint32_t egs = 4;
210
+ uint32_t group_start = env->vstart / egs;
211
+ uint32_t group_end = env->vl / egs;
212
+ uint32_t esz = sizeof(uint32_t);
213
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
214
+
215
+ for (uint32_t i = group_start; i < group_end; ++i) {
216
+ uint32_t vstart = i * egs;
217
+ uint32_t vend = (i + 1) * egs;
218
+ uint32_t rk[4] = {0};
219
+ uint32_t tmp[8] = {0};
220
+
221
+ for (uint32_t j = 0; j < egs; ++j) {
222
+ rk[j] = *((uint32_t *)vs2 + H4(j));
223
+ }
224
+
225
+ for (uint32_t j = vstart; j < vend; ++j) {
226
+ tmp[j - vstart] = *((uint32_t *)vd + H4(j));
227
+ }
228
+
229
+ do_sm4_round(rk, tmp);
230
+
231
+ for (uint32_t j = vstart; j < vend; ++j) {
232
+ *((uint32_t *)vd + H4(j)) = tmp[egs + (j - vstart)];
233
+ }
234
+ }
235
+
236
+ env->vstart = 0;
237
+ /* set tail elements to 1s */
238
+ vext_set_elems_1s(vd, vext_vta(desc), env->vl * esz, total_elems * esz);
239
+}
240
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
241
index XXXXXXX..XXXXXXX 100644
242
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
243
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
244
@@ -XXX,XX +XXX,XX @@ static bool vghsh_check(DisasContext *s, arg_rmrr *a)
245
}
246
247
GEN_VV_UNMASKED_TRANS(vghsh_vv, vghsh_check, ZVKG_EGS)
103
+
248
+
104
+/*
249
+/*
105
+ * The HS-mode is allowed to configure priority only for the
250
+ * Zvksed
106
+ * following VS-mode local interrupts:
107
+ *
108
+ * 0 (Reserved interrupt, reads as zero)
109
+ * 1 Supervisor software interrupt
110
+ * 4 (Reserved interrupt, reads as zero)
111
+ * 5 Supervisor timer interrupt
112
+ * 8 (Reserved interrupt, reads as zero)
113
+ * 13 (Reserved interrupt)
114
+ * 14 "
115
+ * 15 "
116
+ * 16 "
117
+ * 18 Debug/trace interrupt
118
+ * 20 (Reserved interrupt)
119
+ * 22 "
120
+ * 24 "
121
+ * 26 "
122
+ * 28 "
123
+ * 30 (Reserved for standard reporting of bus or system errors)
124
+ */
251
+ */
125
+
252
+
126
+static const int hviprio_index2irq[] = {
253
+#define ZVKSED_EGS 4
127
+ 0, 1, 4, 5, 8, 13, 14, 15, 16, 18, 20, 22, 24, 26, 28, 30 };
254
+
128
+static const int hviprio_index2rdzero[] = {
255
+static bool zvksed_check(DisasContext *s)
129
+ 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
256
+{
130
+
257
+ int egw_bytes = ZVKSED_EGS << s->sew;
131
+int riscv_cpu_hviprio_index2irq(int index, int *out_irq, int *out_rdzero)
258
+ return s->cfg_ptr->ext_zvksed == true &&
132
+{
259
+ require_rvv(s) &&
133
+ if (index < 0 || ARRAY_SIZE(hviprio_index2irq) <= index) {
260
+ vext_check_isa_ill(s) &&
134
+ return -EINVAL;
261
+ MAXSZ(s) >= egw_bytes &&
135
+ }
262
+ s->sew == MO_32;
136
+
263
+}
137
+ if (out_irq) {
264
+
138
+ *out_irq = hviprio_index2irq[index];
265
+static bool vsm4k_vi_check(DisasContext *s, arg_rmrr *a)
139
+ }
266
+{
140
+
267
+ return zvksed_check(s) &&
141
+ if (out_rdzero) {
268
+ require_align(a->rd, s->lmul) &&
142
+ *out_rdzero = hviprio_index2rdzero[index];
269
+ require_align(a->rs2, s->lmul);
143
+ }
270
+}
144
+
271
+
145
+ return 0;
272
+GEN_VI_UNMASKED_TRANS(vsm4k_vi, vsm4k_vi_check, ZVKSED_EGS)
146
+}
273
+
147
+
274
+static bool vsm4r_vv_check(DisasContext *s, arg_rmr *a)
148
+/*
275
+{
149
+ * Default priorities of local interrupts are defined in the
276
+ return zvksed_check(s) &&
150
+ * RISC-V Advanced Interrupt Architecture specification.
277
+ require_align(a->rd, s->lmul) &&
151
+ *
278
+ require_align(a->rs2, s->lmul);
152
+ * ----------------------------------------------------------------
279
+}
153
+ * Default |
280
+
154
+ * Priority | Major Interrupt Numbers
281
+GEN_V_UNMASKED_TRANS(vsm4r_vv, vsm4r_vv_check, ZVKSED_EGS)
155
+ * ----------------------------------------------------------------
282
+
156
+ * Highest | 63 (3f), 62 (3e), 31 (1f), 30 (1e), 61 (3d), 60 (3c),
283
+static bool vsm4r_vs_check(DisasContext *s, arg_rmr *a)
157
+ * | 59 (3b), 58 (3a), 29 (1d), 28 (1c), 57 (39), 56 (38),
284
+{
158
+ * | 55 (37), 54 (36), 27 (1b), 26 (1a), 53 (35), 52 (34),
285
+ return zvksed_check(s) &&
159
+ * | 51 (33), 50 (32), 25 (19), 24 (18), 49 (31), 48 (30)
286
+ !is_overlapped(a->rd, 1 << MAX(s->lmul, 0), a->rs2, 1) &&
160
+ * |
287
+ require_align(a->rd, s->lmul);
161
+ * | 11 (0b), 3 (03), 7 (07)
288
+}
162
+ * | 9 (09), 1 (01), 5 (05)
289
+
163
+ * | 12 (0c)
290
+GEN_V_UNMASKED_TRANS(vsm4r_vs, vsm4r_vs_check, ZVKSED_EGS)
164
+ * | 10 (0a), 2 (02), 6 (06)
165
+ * |
166
+ * | 47 (2f), 46 (2e), 23 (17), 22 (16), 45 (2d), 44 (2c),
167
+ * | 43 (2b), 42 (2a), 21 (15), 20 (14), 41 (29), 40 (28),
168
+ * | 39 (27), 38 (26), 19 (13), 18 (12), 37 (25), 36 (24),
169
+ * Lowest | 35 (23), 34 (22), 17 (11), 16 (10), 33 (21), 32 (20)
170
+ * ----------------------------------------------------------------
171
+ */
172
+static const uint8_t default_iprio[64] = {
173
+ [63] = IPRIO_DEFAULT_UPPER,
174
+ [62] = IPRIO_DEFAULT_UPPER + 1,
175
+ [31] = IPRIO_DEFAULT_UPPER + 2,
176
+ [30] = IPRIO_DEFAULT_UPPER + 3,
177
+ [61] = IPRIO_DEFAULT_UPPER + 4,
178
+ [60] = IPRIO_DEFAULT_UPPER + 5,
179
+
180
+ [59] = IPRIO_DEFAULT_UPPER + 6,
181
+ [58] = IPRIO_DEFAULT_UPPER + 7,
182
+ [29] = IPRIO_DEFAULT_UPPER + 8,
183
+ [28] = IPRIO_DEFAULT_UPPER + 9,
184
+ [57] = IPRIO_DEFAULT_UPPER + 10,
185
+ [56] = IPRIO_DEFAULT_UPPER + 11,
186
+
187
+ [55] = IPRIO_DEFAULT_UPPER + 12,
188
+ [54] = IPRIO_DEFAULT_UPPER + 13,
189
+ [27] = IPRIO_DEFAULT_UPPER + 14,
190
+ [26] = IPRIO_DEFAULT_UPPER + 15,
191
+ [53] = IPRIO_DEFAULT_UPPER + 16,
192
+ [52] = IPRIO_DEFAULT_UPPER + 17,
193
+
194
+ [51] = IPRIO_DEFAULT_UPPER + 18,
195
+ [50] = IPRIO_DEFAULT_UPPER + 19,
196
+ [25] = IPRIO_DEFAULT_UPPER + 20,
197
+ [24] = IPRIO_DEFAULT_UPPER + 21,
198
+ [49] = IPRIO_DEFAULT_UPPER + 22,
199
+ [48] = IPRIO_DEFAULT_UPPER + 23,
200
+
201
+ [11] = IPRIO_DEFAULT_M,
202
+ [3] = IPRIO_DEFAULT_M + 1,
203
+ [7] = IPRIO_DEFAULT_M + 2,
204
+
205
+ [9] = IPRIO_DEFAULT_S,
206
+ [1] = IPRIO_DEFAULT_S + 1,
207
+ [5] = IPRIO_DEFAULT_S + 2,
208
+
209
+ [12] = IPRIO_DEFAULT_SGEXT,
210
+
211
+ [10] = IPRIO_DEFAULT_VS,
212
+ [2] = IPRIO_DEFAULT_VS + 1,
213
+ [6] = IPRIO_DEFAULT_VS + 2,
214
+
215
+ [47] = IPRIO_DEFAULT_LOWER,
216
+ [46] = IPRIO_DEFAULT_LOWER + 1,
217
+ [23] = IPRIO_DEFAULT_LOWER + 2,
218
+ [22] = IPRIO_DEFAULT_LOWER + 3,
219
+ [45] = IPRIO_DEFAULT_LOWER + 4,
220
+ [44] = IPRIO_DEFAULT_LOWER + 5,
221
+
222
+ [43] = IPRIO_DEFAULT_LOWER + 6,
223
+ [42] = IPRIO_DEFAULT_LOWER + 7,
224
+ [21] = IPRIO_DEFAULT_LOWER + 8,
225
+ [20] = IPRIO_DEFAULT_LOWER + 9,
226
+ [41] = IPRIO_DEFAULT_LOWER + 10,
227
+ [40] = IPRIO_DEFAULT_LOWER + 11,
228
+
229
+ [39] = IPRIO_DEFAULT_LOWER + 12,
230
+ [38] = IPRIO_DEFAULT_LOWER + 13,
231
+ [19] = IPRIO_DEFAULT_LOWER + 14,
232
+ [18] = IPRIO_DEFAULT_LOWER + 15,
233
+ [37] = IPRIO_DEFAULT_LOWER + 16,
234
+ [36] = IPRIO_DEFAULT_LOWER + 17,
235
+
236
+ [35] = IPRIO_DEFAULT_LOWER + 18,
237
+ [34] = IPRIO_DEFAULT_LOWER + 19,
238
+ [17] = IPRIO_DEFAULT_LOWER + 20,
239
+ [16] = IPRIO_DEFAULT_LOWER + 21,
240
+ [33] = IPRIO_DEFAULT_LOWER + 22,
241
+ [32] = IPRIO_DEFAULT_LOWER + 23,
242
+};
243
+
244
+uint8_t riscv_cpu_default_priority(int irq)
245
{
246
- target_ulong virt_enabled = riscv_cpu_virt_enabled(env);
247
+ if (irq < 0 || irq > 63) {
248
+ return IPRIO_MMAXIPRIO;
249
+ }
250
+
251
+ return default_iprio[irq] ? default_iprio[irq] : IPRIO_MMAXIPRIO;
252
+};
253
254
- target_ulong mstatus_mie = get_field(env->mstatus, MSTATUS_MIE);
255
- target_ulong mstatus_sie = get_field(env->mstatus, MSTATUS_SIE);
256
+static int riscv_cpu_pending_to_irq(CPURISCVState *env,
257
+ int extirq, unsigned int extirq_def_prio,
258
+ uint64_t pending, uint8_t *iprio)
259
+{
260
+ int irq, best_irq = RISCV_EXCP_NONE;
261
+ unsigned int prio, best_prio = UINT_MAX;
262
263
- target_ulong vsgemask =
264
- (target_ulong)1 << get_field(env->hstatus, HSTATUS_VGEIN);
265
- target_ulong vsgein = (env->hgeip & vsgemask) ? MIP_VSEIP : 0;
266
+ if (!pending) {
267
+ return RISCV_EXCP_NONE;
268
+ }
269
270
- target_ulong pending = (env->mip | vsgein) & env->mie;
271
+ irq = ctz64(pending);
272
+ if (!riscv_feature(env, RISCV_FEATURE_AIA)) {
273
+ return irq;
274
+ }
275
276
- target_ulong mie = env->priv < PRV_M ||
277
- (env->priv == PRV_M && mstatus_mie);
278
- target_ulong sie = env->priv < PRV_S ||
279
- (env->priv == PRV_S && mstatus_sie);
280
- target_ulong hsie = virt_enabled || sie;
281
- target_ulong vsie = virt_enabled && sie;
282
+ pending = pending >> irq;
283
+ while (pending) {
284
+ prio = iprio[irq];
285
+ if (!prio) {
286
+ if (irq == extirq) {
287
+ prio = extirq_def_prio;
288
+ } else {
289
+ prio = (riscv_cpu_default_priority(irq) < extirq_def_prio) ?
290
+ 1 : IPRIO_MMAXIPRIO;
291
+ }
292
+ }
293
+ if ((pending & 0x1) && (prio <= best_prio)) {
294
+ best_irq = irq;
295
+ best_prio = prio;
296
+ }
297
+ irq++;
298
+ pending = pending >> 1;
299
+ }
300
301
- target_ulong irqs =
302
- (pending & ~env->mideleg & -mie) |
303
- (pending & env->mideleg & ~env->hideleg & -hsie) |
304
- (pending & env->mideleg & env->hideleg & -vsie);
305
+ return best_irq;
306
+}
307
308
- if (irqs) {
309
- return ctz64(irqs); /* since non-zero */
310
+static uint64_t riscv_cpu_all_pending(CPURISCVState *env)
311
+{
312
+ uint32_t gein = get_field(env->hstatus, HSTATUS_VGEIN);
313
+ uint64_t vsgein = (env->hgeip & (1ULL << gein)) ? MIP_VSEIP : 0;
314
+
315
+ return (env->mip | vsgein) & env->mie;
316
+}
317
+
318
+int riscv_cpu_mirq_pending(CPURISCVState *env)
319
+{
320
+ uint64_t irqs = riscv_cpu_all_pending(env) & ~env->mideleg &
321
+ ~(MIP_SGEIP | MIP_VSSIP | MIP_VSTIP | MIP_VSEIP);
322
+
323
+ return riscv_cpu_pending_to_irq(env, IRQ_M_EXT, IPRIO_DEFAULT_M,
324
+ irqs, env->miprio);
325
+}
326
+
327
+int riscv_cpu_sirq_pending(CPURISCVState *env)
328
+{
329
+ uint64_t irqs = riscv_cpu_all_pending(env) & env->mideleg &
330
+ ~(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP);
331
+
332
+ return riscv_cpu_pending_to_irq(env, IRQ_S_EXT, IPRIO_DEFAULT_S,
333
+ irqs, env->siprio);
334
+}
335
+
336
+int riscv_cpu_vsirq_pending(CPURISCVState *env)
337
+{
338
+ uint64_t irqs = riscv_cpu_all_pending(env) & env->mideleg &
339
+ (MIP_VSSIP | MIP_VSTIP | MIP_VSEIP);
340
+
341
+ return riscv_cpu_pending_to_irq(env, IRQ_S_EXT, IPRIO_DEFAULT_S,
342
+ irqs >> 1, env->hviprio);
343
+}
344
+
345
+static int riscv_cpu_local_irq_pending(CPURISCVState *env)
346
+{
347
+ int virq;
348
+ uint64_t irqs, pending, mie, hsie, vsie;
349
+
350
+ /* Determine interrupt enable state of all privilege modes */
351
+ if (riscv_cpu_virt_enabled(env)) {
352
+ mie = 1;
353
+ hsie = 1;
354
+ vsie = (env->priv < PRV_S) ||
355
+ (env->priv == PRV_S && get_field(env->mstatus, MSTATUS_SIE));
356
} else {
357
- return RISCV_EXCP_NONE; /* indicates no pending interrupt */
358
+ mie = (env->priv < PRV_M) ||
359
+ (env->priv == PRV_M && get_field(env->mstatus, MSTATUS_MIE));
360
+ hsie = (env->priv < PRV_S) ||
361
+ (env->priv == PRV_S && get_field(env->mstatus, MSTATUS_SIE));
362
+ vsie = 0;
363
+ }
364
+
365
+ /* Determine all pending interrupts */
366
+ pending = riscv_cpu_all_pending(env);
367
+
368
+ /* Check M-mode interrupts */
369
+ irqs = pending & ~env->mideleg & -mie;
370
+ if (irqs) {
371
+ return riscv_cpu_pending_to_irq(env, IRQ_M_EXT, IPRIO_DEFAULT_M,
372
+ irqs, env->miprio);
373
}
374
+
375
+ /* Check HS-mode interrupts */
376
+ irqs = pending & env->mideleg & ~env->hideleg & -hsie;
377
+ if (irqs) {
378
+ return riscv_cpu_pending_to_irq(env, IRQ_S_EXT, IPRIO_DEFAULT_S,
379
+ irqs, env->siprio);
380
+ }
381
+
382
+ /* Check VS-mode interrupts */
383
+ irqs = pending & env->mideleg & env->hideleg & -vsie;
384
+ if (irqs) {
385
+ virq = riscv_cpu_pending_to_irq(env, IRQ_S_EXT, IPRIO_DEFAULT_S,
386
+ irqs >> 1, env->hviprio);
387
+ return (virq <= 0) ? virq : virq + 1;
388
+ }
389
+
390
+ /* Indicate no pending interrupt */
391
+ return RISCV_EXCP_NONE;
392
}
393
394
bool riscv_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
395
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
396
index XXXXXXX..XXXXXXX 100644
397
--- a/target/riscv/machine.c
398
+++ b/target/riscv/machine.c
399
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_hyper = {
400
VMSTATE_UINTTL(env.hgeie, RISCVCPU),
401
VMSTATE_UINTTL(env.hgeip, RISCVCPU),
402
VMSTATE_UINT64(env.htimedelta, RISCVCPU),
403
+ VMSTATE_UINT8_ARRAY(env.hviprio, RISCVCPU, 64),
404
405
VMSTATE_UINT64(env.vsstatus, RISCVCPU),
406
VMSTATE_UINTTL(env.vstvec, RISCVCPU),
407
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_riscv_cpu = {
408
.fields = (VMStateField[]) {
409
VMSTATE_UINTTL_ARRAY(env.gpr, RISCVCPU, 32),
410
VMSTATE_UINT64_ARRAY(env.fpr, RISCVCPU, 32),
411
+ VMSTATE_UINT8_ARRAY(env.miprio, RISCVCPU, 64),
412
+ VMSTATE_UINT8_ARRAY(env.siprio, RISCVCPU, 64),
413
VMSTATE_UINTTL(env.pc, RISCVCPU),
414
VMSTATE_UINTTL(env.load_res, RISCVCPU),
415
VMSTATE_UINTTL(env.load_val, RISCVCPU),
416
--
291
--
417
2.34.1
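A note on the AIA selection logic above: riscv_cpu_pending_to_irq() picks, among the pending bits, the interrupt whose priority value is smallest, with later (higher-numbered) interrupts winning ties because of the `<=` comparison. A minimal standalone sketch of that rule (the function name and the collapsed default-priority handling here are illustrative, not part of the patch):

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

/*
 * Hypothetical, simplified mirror of riscv_cpu_pending_to_irq():
 * scan the 64 pending bits and pick the irq whose priority value is
 * smallest; a per-irq priority of 0 means "use the default", which is
 * collapsed to a single caller-supplied default here for brevity.
 */
static int pick_best_irq(uint64_t pending, const uint8_t *iprio,
                         uint8_t default_prio)
{
    int best_irq = -1;              /* -1: no pending interrupt */
    unsigned best_prio = UINT_MAX;

    for (int irq = 0; irq < 64; irq++) {
        if (!(pending & (1ULL << irq))) {
            continue;
        }
        unsigned prio = iprio[irq] ? iprio[irq] : default_prio;
        /* '<=' as in the patch: a later irq wins a priority tie */
        if (prio <= best_prio) {
            best_irq = irq;
            best_prio = prio;
        }
    }
    return best_irq;
}
```

Note the tie-break direction: with equal priority values the higher interrupt number is returned, matching the `prio <= best_prio` test in the patch.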
292
2.41.0
418
419
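For reviewers of the Zvksed helpers above: the two SM4 linear transforms used by vsm4k.vi (key schedule) and vsm4r (round function) are easy to check in isolation. A self-contained sketch, with rol32 reimplemented locally and the function names chosen here for illustration only:

```c
#include <assert.h>
#include <stdint.h>

/* Rotate-left on 32-bit words, as QEMU's rol32(); n must be 1..31. */
static uint32_t rol32(uint32_t x, unsigned n)
{
    return (x << n) | (x >> (32 - n));
}

/*
 * Key-schedule linear transform, as applied after sm4_subword() in
 * the vsm4k.vi helper: B ^ (B <<< 13) ^ (B <<< 23).
 */
static uint32_t sm4_key_lt(uint32_t b)
{
    return b ^ rol32(b, 13) ^ rol32(b, 23);
}

/*
 * Round-function linear transform, as in do_sm4_round() for vsm4r:
 * B ^ (B <<< 2) ^ (B <<< 10) ^ (B <<< 18) ^ (B <<< 24).
 */
static uint32_t sm4_round_lt(uint32_t b)
{
    return b ^ rol32(b, 2) ^ rol32(b, 10) ^ rol32(b, 18) ^ rol32(b, 24);
}
```

For an input word of 1 the rotations set disjoint bits, so the XORs reduce to ORs: sm4_key_lt(1) is 0x00802001 and sm4_round_lt(1) is 0x01040405, which gives a quick sanity check against the helper code.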
From: Anup Patel <anup.patel@wdc.com>

The AIA specification defines IMSIC interface CSRs for easy access
to the per-HART IMSIC registers without using indirect xiselect and
xireg CSRs. This patch implements the AIA IMSIC interface CSRs.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-16-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/csr.c | 203 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 203 insertions(+)

From: Rob Bradford <rbradford@rivosinc.com>

These are WARL fields - zero out the bits for unavailable counters and
special-case the TM bit in mcountinhibit, which is hardwired to zero.
This patch achieves this by modifying the value written so that any use
of the field will see the correctly masked bits.

Tested by modifying OpenSBI to write the maximum value to these CSRs;
upon subsequent read, the appropriate number of bits for the number of
PMUs is enabled, and the TM bit in mcountinhibit reads as zero.

Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20230802124906.24197-1-rbradford@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/csr.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)
15
20
16
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
21
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
17
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
18
--- a/target/riscv/csr.c
23
--- a/target/riscv/csr.c
19
+++ b/target/riscv/csr.c
24
+++ b/target/riscv/csr.c
20
@@ -XXX,XX +XXX,XX @@ static int aia_xlate_vs_csrno(CPURISCVState *env, int csrno)
25
@@ -XXX,XX +XXX,XX @@ static RISCVException write_mcountinhibit(CPURISCVState *env, int csrno,
21
return CSR_VSISELECT;
26
{
22
case CSR_SIREG:
27
int cidx;
23
return CSR_VSIREG;
28
PMUCTRState *counter;
24
+ case CSR_SSETEIPNUM:
29
+ RISCVCPU *cpu = env_archcpu(env);
25
+ return CSR_VSSETEIPNUM;
30
26
+ case CSR_SCLREIPNUM:
31
- env->mcountinhibit = val;
27
+ return CSR_VSCLREIPNUM;
32
+ /* WARL register - disable unavailable counters; TM bit is always 0 */
28
+ case CSR_SSETEIENUM:
33
+ env->mcountinhibit =
29
+ return CSR_VSSETEIENUM;
34
+ val & (cpu->pmu_avail_ctrs | COUNTEREN_CY | COUNTEREN_IR);
30
+ case CSR_SCLREIENUM:
35
31
+ return CSR_VSCLREIENUM;
36
/* Check if any other counter is also monitoring cycles/instructions */
32
+ case CSR_STOPEI:
37
for (cidx = 0; cidx < RV_MAX_MHPMCOUNTERS; cidx++) {
33
+ return CSR_VSTOPEI;
38
@@ -XXX,XX +XXX,XX @@ static RISCVException read_mcounteren(CPURISCVState *env, int csrno,
34
default:
39
static RISCVException write_mcounteren(CPURISCVState *env, int csrno,
35
return csrno;
40
target_ulong val)
36
};
41
{
37
@@ -XXX,XX +XXX,XX @@ done:
42
- env->mcounteren = val;
43
+ RISCVCPU *cpu = env_archcpu(env);
44
+
45
+ /* WARL register - disable unavailable counters */
46
+ env->mcounteren = val & (cpu->pmu_avail_ctrs | COUNTEREN_CY | COUNTEREN_TM |
47
+ COUNTEREN_IR);
38
return RISCV_EXCP_NONE;
48
return RISCV_EXCP_NONE;
39
}
49
}
40
50
41
+static int rmw_xsetclreinum(CPURISCVState *env, int csrno, target_ulong *val,
42
+ target_ulong new_val, target_ulong wr_mask)
43
+{
44
+ int ret = -EINVAL;
45
+ bool set, pend, virt;
46
+ target_ulong priv, isel, vgein, xlen, nval, wmask;
47
+
48
+ /* Translate CSR number for VS-mode */
49
+ csrno = aia_xlate_vs_csrno(env, csrno);
50
+
51
+ /* Decode register details from CSR number */
52
+ virt = set = pend = false;
53
+ switch (csrno) {
54
+ case CSR_MSETEIPNUM:
55
+ priv = PRV_M;
56
+ set = true;
57
+ pend = true;
58
+ break;
59
+ case CSR_MCLREIPNUM:
60
+ priv = PRV_M;
61
+ pend = true;
62
+ break;
63
+ case CSR_MSETEIENUM:
64
+ priv = PRV_M;
65
+ set = true;
66
+ break;
67
+ case CSR_MCLREIENUM:
68
+ priv = PRV_M;
69
+ break;
70
+ case CSR_SSETEIPNUM:
71
+ priv = PRV_S;
72
+ set = true;
73
+ pend = true;
74
+ break;
75
+ case CSR_SCLREIPNUM:
76
+ priv = PRV_S;
77
+ pend = true;
78
+ break;
79
+ case CSR_SSETEIENUM:
80
+ priv = PRV_S;
81
+ set = true;
82
+ break;
83
+ case CSR_SCLREIENUM:
84
+ priv = PRV_S;
85
+ break;
86
+ case CSR_VSSETEIPNUM:
87
+ priv = PRV_S;
88
+ virt = true;
89
+ set = true;
90
+ pend = true;
91
+ break;
92
+ case CSR_VSCLREIPNUM:
93
+ priv = PRV_S;
94
+ virt = true;
95
+ pend = true;
96
+ break;
97
+ case CSR_VSSETEIENUM:
98
+ priv = PRV_S;
99
+ virt = true;
100
+ set = true;
101
+ break;
102
+ case CSR_VSCLREIENUM:
103
+ priv = PRV_S;
104
+ virt = true;
105
+ break;
106
+ default:
107
+ goto done;
108
+ };
109
+
110
+ /* IMSIC CSRs only available when machine implements IMSIC. */
111
+ if (!env->aia_ireg_rmw_fn[priv]) {
112
+ goto done;
113
+ }
114
+
115
+ /* Find the selected guest interrupt file */
116
+ vgein = (virt) ? get_field(env->hstatus, HSTATUS_VGEIN) : 0;
117
+
118
+ /* Selected guest interrupt file should be valid */
119
+ if (virt && (!vgein || env->geilen < vgein)) {
120
+ goto done;
121
+ }
122
+
123
+ /* Set/Clear CSRs always read zero */
124
+ if (val) {
125
+ *val = 0;
126
+ }
127
+
128
+ if (wr_mask) {
129
+ /* Get interrupt number */
130
+ new_val &= wr_mask;
131
+
132
+ /* Find target interrupt pending/enable register */
133
+ xlen = riscv_cpu_mxl_bits(env);
134
+ isel = (new_val / xlen);
135
+ isel *= (xlen / IMSIC_EIPx_BITS);
136
+ isel += (pend) ? ISELECT_IMSIC_EIP0 : ISELECT_IMSIC_EIE0;
137
+
138
+ /* Find the interrupt bit to be set/clear */
139
+ wmask = ((target_ulong)1) << (new_val % xlen);
140
+ nval = (set) ? wmask : 0;
141
+
142
+ /* Call machine specific IMSIC register emulation */
143
+ ret = env->aia_ireg_rmw_fn[priv](env->aia_ireg_rmw_fn_arg[priv],
144
+ AIA_MAKE_IREG(isel, priv, virt,
145
+ vgein, xlen),
146
+ NULL, nval, wmask);
147
+ } else {
148
+ ret = 0;
149
+ }
150
+
151
+done:
152
+ if (ret) {
153
+ return (riscv_cpu_virt_enabled(env) && virt) ?
154
+ RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
155
+ }
156
+ return RISCV_EXCP_NONE;
157
+}
158
+
159
+static int rmw_xtopei(CPURISCVState *env, int csrno, target_ulong *val,
160
+ target_ulong new_val, target_ulong wr_mask)
161
+{
162
+ bool virt;
163
+ int ret = -EINVAL;
164
+ target_ulong priv, vgein;
165
+
166
+ /* Translate CSR number for VS-mode */
167
+ csrno = aia_xlate_vs_csrno(env, csrno);
168
+
169
+ /* Decode register details from CSR number */
170
+ virt = false;
171
+ switch (csrno) {
172
+ case CSR_MTOPEI:
173
+ priv = PRV_M;
174
+ break;
175
+ case CSR_STOPEI:
176
+ priv = PRV_S;
177
+ break;
178
+ case CSR_VSTOPEI:
179
+ priv = PRV_S;
180
+ virt = true;
181
+ break;
182
+ default:
183
+ goto done;
184
+ };
185
+
186
+ /* IMSIC CSRs only available when machine implements IMSIC. */
187
+ if (!env->aia_ireg_rmw_fn[priv]) {
188
+ goto done;
189
+ }
190
+
191
+ /* Find the selected guest interrupt file */
192
+    vgein = (virt) ? get_field(env->hstatus, HSTATUS_VGEIN) : 0;
+
+    /* Selected guest interrupt file should be valid */
+    if (virt && (!vgein || env->geilen < vgein)) {
+        goto done;
+    }
+
+    /* Call machine specific IMSIC register emulation for TOPEI */
+    ret = env->aia_ireg_rmw_fn[priv](env->aia_ireg_rmw_fn_arg[priv],
+                                     AIA_MAKE_IREG(ISELECT_IMSIC_TOPEI, priv, virt, vgein,
+                                                   riscv_cpu_mxl_bits(env)),
+                                     val, new_val, wr_mask);
+
+done:
+    if (ret) {
+        return (riscv_cpu_virt_enabled(env) && virt) ?
+               RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
+    }
+    return RISCV_EXCP_NONE;
+}
+
 static RISCVException read_mtvec(CPURISCVState *env, int csrno,
                                  target_ulong *val)
 {
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     /* Machine-Level Interrupts (AIA) */
     [CSR_MTOPI] = { "mtopi", aia_any, read_mtopi },

+    /* Machine-Level IMSIC Interface (AIA) */
+    [CSR_MSETEIPNUM] = { "mseteipnum", aia_any, NULL, NULL, rmw_xsetclreinum },
+    [CSR_MCLREIPNUM] = { "mclreipnum", aia_any, NULL, NULL, rmw_xsetclreinum },
+    [CSR_MSETEIENUM] = { "mseteienum", aia_any, NULL, NULL, rmw_xsetclreinum },
+    [CSR_MCLREIENUM] = { "mclreienum", aia_any, NULL, NULL, rmw_xsetclreinum },
+    [CSR_MTOPEI] = { "mtopei", aia_any, NULL, NULL, rmw_xtopei },
+
     /* Virtual Interrupts for Supervisor Level (AIA) */
     [CSR_MVIEN] = { "mvien", aia_any, read_zero, write_ignore },
     [CSR_MVIP] = { "mvip", aia_any, read_zero, write_ignore },
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     /* Supervisor-Level Interrupts (AIA) */
     [CSR_STOPI] = { "stopi", aia_smode, read_stopi },

+    /* Supervisor-Level IMSIC Interface (AIA) */
+    [CSR_SSETEIPNUM] = { "sseteipnum", aia_smode, NULL, NULL, rmw_xsetclreinum },
+    [CSR_SCLREIPNUM] = { "sclreipnum", aia_smode, NULL, NULL, rmw_xsetclreinum },
+    [CSR_SSETEIENUM] = { "sseteienum", aia_smode, NULL, NULL, rmw_xsetclreinum },
+    [CSR_SCLREIENUM] = { "sclreienum", aia_smode, NULL, NULL, rmw_xsetclreinum },
+    [CSR_STOPEI] = { "stopei", aia_smode, NULL, NULL, rmw_xtopei },
+
     /* Supervisor-Level High-Half CSRs (AIA) */
     [CSR_SIEH] = { "sieh", aia_smode32, NULL, NULL, rmw_sieh },
     [CSR_SIPH] = { "siph", aia_smode32, NULL, NULL, rmw_siph },
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     /* VS-Level Interrupts (H-extension with AIA) */
     [CSR_VSTOPI] = { "vstopi", aia_hmode, read_vstopi },

+    /* VS-Level IMSIC Interface (H-extension with AIA) */
+    [CSR_VSSETEIPNUM] = { "vsseteipnum", aia_hmode, NULL, NULL, rmw_xsetclreinum },
+    [CSR_VSCLREIPNUM] = { "vsclreipnum", aia_hmode, NULL, NULL, rmw_xsetclreinum },
+    [CSR_VSSETEIENUM] = { "vsseteienum", aia_hmode, NULL, NULL, rmw_xsetclreinum },
+    [CSR_VSCLREIENUM] = { "vsclreienum", aia_hmode, NULL, NULL, rmw_xsetclreinum },
+    [CSR_VSTOPEI] = { "vstopei", aia_hmode, NULL, NULL, rmw_xtopei },
+
     /* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */
     [CSR_HIDELEGH] = { "hidelegh", aia_hmode32, NULL, NULL, rmw_hidelegh },
     [CSR_HVIENH] = { "hvienh", aia_hmode32, read_zero, write_ignore },
--
2.34.1
From: Anup Patel <anup.patel@wdc.com>

The machine or device emulation should be able to force set certain
CPU features because:
1) We can have certain CPU features which are in-general optional
   but implemented by RISC-V CPUs on the machine.
2) We can have devices which require a certain CPU feature. For example,
   AIA IMSIC devices expect AIA CSRs implemented by RISC-V CPUs.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-6-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.h | 5 +++++
 target/riscv/cpu.c | 11 +++--------
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool riscv_feature(CPURISCVState *env, int feature)
     return env->features & (1ULL << feature);
 }

+static inline void riscv_set_feature(CPURISCVState *env, int feature)
+{
+    env->features |= (1ULL << feature);
+}
+
 #include "cpu_user.h"

 extern const char * const riscv_int_regnames[];
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static void set_vext_version(CPURISCVState *env, int vext_ver)
     env->vext_ver = vext_ver;
 }

-static void set_feature(CPURISCVState *env, int feature)
-{
-    env->features |= (1ULL << feature);
-}
-
 static void set_resetvec(CPURISCVState *env, target_ulong resetvec)
 {
 #ifndef CONFIG_USER_ONLY
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
     }

     if (cpu->cfg.mmu) {
-        set_feature(env, RISCV_FEATURE_MMU);
+        riscv_set_feature(env, RISCV_FEATURE_MMU);
     }

     if (cpu->cfg.pmp) {
-        set_feature(env, RISCV_FEATURE_PMP);
+        riscv_set_feature(env, RISCV_FEATURE_PMP);

         /*
          * Enhanced PMP should only be available
          * on harts with PMP support
          */
         if (cpu->cfg.epmp) {
-            set_feature(env, RISCV_FEATURE_EPMP);
+            riscv_set_feature(env, RISCV_FEATURE_EPMP);
         }
     }

--
2.34.1

From: Jason Chien <jason.chien@sifive.com>

RVA23 Profiles states:
The RVA23 profiles are intended to be used for 64-bit application
processors that will run rich OS stacks from standard binary OS
distributions and with a substantial number of third-party binary user
applications that will be supported over a considerable length of time
in the field.

Chapter 4 of the unprivileged spec introduces the Zihintntl extension,
and Zihintntl is a mandatory extension in the RVA23 profiles, whose
purpose is to enable application and operating system portability across
different implementations. Thus the DTS should contain the Zihintntl ISA
string in order to pass it to software.

The unprivileged spec states:
Like any HINTs, these instructions may be freely ignored. Hence, although
they are described in terms of cache-based memory hierarchies, they do not
mandate the provision of caches.

These instructions are encoded with otherwise-unused opcodes, e.g. ADD
x0, x0, x2, which QEMU already supports, and QEMU does not emulate caches.
Therefore these instructions can be treated as no-ops, and we only need to
add a new property for the Zihintntl extension.

Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Jason Chien <jason.chien@sifive.com>
Message-ID: <20230726074049.19505-2-jason.chien@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_cfg.h | 1 +
 target/riscv/cpu.c | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
     bool ext_icbom;
     bool ext_icboz;
     bool ext_zicond;
+    bool ext_zihintntl;
     bool ext_zihintpause;
     bool ext_smstateen;
     bool ext_sstc;
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
     ISA_EXT_DATA_ENTRY(zicond, PRIV_VERSION_1_12_0, ext_zicond),
     ISA_EXT_DATA_ENTRY(zicsr, PRIV_VERSION_1_10_0, ext_icsr),
     ISA_EXT_DATA_ENTRY(zifencei, PRIV_VERSION_1_10_0, ext_ifencei),
+    ISA_EXT_DATA_ENTRY(zihintntl, PRIV_VERSION_1_10_0, ext_zihintntl),
     ISA_EXT_DATA_ENTRY(zihintpause, PRIV_VERSION_1_10_0, ext_zihintpause),
     ISA_EXT_DATA_ENTRY(zmmul, PRIV_VERSION_1_12_0, ext_zmmul),
     ISA_EXT_DATA_ENTRY(zawrs, PRIV_VERSION_1_12_0, ext_zawrs),
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
     DEFINE_PROP_BOOL("sscofpmf", RISCVCPU, cfg.ext_sscofpmf, false),
     DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
     DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
+    DEFINE_PROP_BOOL("Zihintntl", RISCVCPU, cfg.ext_zihintntl, true),
     DEFINE_PROP_BOOL("Zihintpause", RISCVCPU, cfg.ext_zihintpause, true),
     DEFINE_PROP_BOOL("Zawrs", RISCVCPU, cfg.ext_zawrs, true),
     DEFINE_PROP_BOOL("Zfa", RISCVCPU, cfg.ext_zfa, true),
--
2.41.0
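The riscv_set_feature()/riscv_feature() helpers moved into cpu.h above are plain 64-bit bit manipulation. A self-contained sketch of the same pattern, using a hypothetical feature enum and a stripped-down state struct in place of QEMU's CPURISCVState:

```c
#include <stdint.h>

/* Hypothetical feature IDs, standing in for QEMU's RISCV_FEATURE_* values. */
enum { FEATURE_MMU, FEATURE_PMP, FEATURE_EPMP };

typedef struct {
    uint64_t features;   /* one bit per feature, as in CPURISCVState */
} FeatureState;

/* Mirror of riscv_set_feature(): set the bit for one feature. */
static inline void set_feature(FeatureState *env, int feature)
{
    env->features |= (1ULL << feature);
}

/* Mirror of riscv_feature(): test the bit for one feature. */
static inline int has_feature(const FeatureState *env, int feature)
{
    return (env->features & (1ULL << feature)) != 0;
}
```

Moving the setter next to the getter in cpu.h is what lets board and device code (e.g. the AIA IMSIC model) force-enable a CPU feature without reaching into cpu.c internals.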
From: Weiwei Li <liweiwei@iscas.ac.cn>

For non-leaf PTEs, the D, A, and U bits are reserved for future standard use.

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220204022658.18097-3-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_helper.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -XXX,XX +XXX,XX @@ restart:
         return TRANSLATE_FAIL;
     } else if (!(pte & (PTE_R | PTE_W | PTE_X))) {
         /* Inner PTE, continue walking */
+        if (pte & (PTE_D | PTE_A | PTE_U)) {
+            return TRANSLATE_FAIL;
+        }
         base = ppn << PGSHIFT;
     } else if ((pte & (PTE_R | PTE_W | PTE_X)) == PTE_W) {
         /* Reserved leaf PTE flags: PTE_W */
--
2.34.1

From: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>

Commit a47842d ("riscv: Add support for the Zfa extension") implemented the
zfa extension. However, it has some typos for fleq.d and fltq.d: both of
them misused the fltq.s helper function.

Fixes: a47842d ("riscv: Add support for the Zfa extension")
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Message-ID: <20230728003906.768-1-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvzfa.c.inc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvzfa.c.inc b/target/riscv/insn_trans/trans_rvzfa.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvzfa.c.inc
+++ b/target/riscv/insn_trans/trans_rvzfa.c.inc
@@ -XXX,XX +XXX,XX @@ bool trans_fleq_d(DisasContext *ctx, arg_fleq_d *a)
     TCGv_i64 src1 = get_fpr_hs(ctx, a->rs1);
     TCGv_i64 src2 = get_fpr_hs(ctx, a->rs2);

-    gen_helper_fltq_s(dest, cpu_env, src1, src2);
+    gen_helper_fleq_d(dest, cpu_env, src1, src2);
     gen_set_gpr(ctx, a->rd, dest);
     return true;
 }
@@ -XXX,XX +XXX,XX @@ bool trans_fltq_d(DisasContext *ctx, arg_fltq_d *a)
     TCGv_i64 src1 = get_fpr_hs(ctx, a->rs1);
     TCGv_i64 src2 = get_fpr_hs(ctx, a->rs2);

-    gen_helper_fltq_s(dest, cpu_env, src1, src2);
+    gen_helper_fltq_d(dest, cpu_env, src1, src2);
     gen_set_gpr(ctx, a->rd, dest);
     return true;
 }
--
2.41.0
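The guard added to the page-table walk above rejects non-leaf PTEs that have D, A, or U set, since those bits are reserved there. A minimal sketch of the predicate, using the PTE flag bit positions from the RISC-V privileged spec (the helper names are illustrative, not QEMU's):

```c
#include <stdbool.h>
#include <stdint.h>

/* RISC-V PTE flag bits (privileged spec, Sv32/Sv39/Sv48). */
#define PTE_V (1 << 0)
#define PTE_R (1 << 1)
#define PTE_W (1 << 2)
#define PTE_X (1 << 3)
#define PTE_U (1 << 4)
#define PTE_A (1 << 6)
#define PTE_D (1 << 7)

/* A PTE with none of R/W/X set points to the next table level. */
static bool pte_is_non_leaf(uint64_t pte)
{
    return (pte & (PTE_R | PTE_W | PTE_X)) == 0;
}

/* The new check: D, A and U are reserved in non-leaf PTEs. */
static bool non_leaf_pte_ok(uint64_t pte)
{
    return pte_is_non_leaf(pte) && !(pte & (PTE_D | PTE_A | PTE_U));
}
```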
From: Philipp Tomsich <philipp.tomsich@vrull.eu>

The XVentanaCondOps extension is supported by VRULL on behalf of
Ventana Micro. Add myself as a point-of-contact.

Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220202005249.3566542-8-philipp.tomsich@vrull.eu>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 MAINTAINERS | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: include/hw/riscv/
 F: linux-user/host/riscv32/
 F: linux-user/host/riscv64/

+RISC-V XVentanaCondOps extension
+M: Philipp Tomsich <philipp.tomsich@vrull.eu>
+L: qemu-riscv@nongnu.org
+S: Supported
+F: target/riscv/XVentanaCondOps.decode
+F: target/riscv/insn_trans/trans_xventanacondops.c.inc
+
 RENESAS RX CPUs
 R: Yoshinori Sato <ysato@users.sourceforge.jp>
 S: Orphan
--
2.34.1

From: Jason Chien <jason.chien@sifive.com>

When writing the upper mtime, we should keep the original lower mtime
whose value is given by cpu_riscv_read_rtc() instead of
cpu_riscv_read_rtc_raw(). The same logic applies to writes to lower mtime.

Signed-off-by: Jason Chien <jason.chien@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20230728082502.26439-1-jason.chien@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/intc/riscv_aclint.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/intc/riscv_aclint.c b/hw/intc/riscv_aclint.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/riscv_aclint.c
+++ b/hw/intc/riscv_aclint.c
@@ -XXX,XX +XXX,XX @@ static void riscv_aclint_mtimer_write(void *opaque, hwaddr addr,
         return;
     } else if (addr == mtimer->time_base || addr == mtimer->time_base + 4) {
         uint64_t rtc_r = cpu_riscv_read_rtc_raw(mtimer->timebase_freq);
+        uint64_t rtc = cpu_riscv_read_rtc(mtimer);

         if (addr == mtimer->time_base) {
             if (size == 4) {
                 /* time_lo for RV32/RV64 */
-                mtimer->time_delta = ((rtc_r & ~0xFFFFFFFFULL) | value) - rtc_r;
+                mtimer->time_delta = ((rtc & ~0xFFFFFFFFULL) | value) - rtc_r;
             } else {
                 /* time for RV64 */
                 mtimer->time_delta = value - rtc_r;
@@ -XXX,XX +XXX,XX @@ static void riscv_aclint_mtimer_write(void *opaque, hwaddr addr,
         } else {
             if (size == 4) {
                 /* time_hi for RV32/RV64 */
-                mtimer->time_delta = (value << 32 | (rtc_r & 0xFFFFFFFF)) - rtc_r;
+                mtimer->time_delta = (value << 32 | (rtc & 0xFFFFFFFF)) - rtc_r;
             } else {
                 qemu_log_mask(LOG_GUEST_ERROR,
                               "aclint-mtimer: invalid time_hi write: %08x",
--
2.41.0
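The fix above matters because a 32-bit mtime write must splice the guest-visible time (rtc) into the untouched half, while still expressing the result as a delta against the raw host counter (rtc_r). A standalone sketch of the corrected arithmetic, with made-up helper names:

```c
#include <stdint.h>

/*
 * rtc   = guest-visible mtime (raw counter plus the current time_delta)
 * rtc_r = raw host counter
 * A 32-bit write replaces one half of the guest-visible value; the other
 * half must come from rtc, not rtc_r, and the result is stored as a new
 * delta against the raw counter, mirroring the corrected expressions.
 */
static uint64_t delta_for_lo_write(uint64_t rtc, uint64_t rtc_r, uint32_t value)
{
    return ((rtc & ~0xFFFFFFFFULL) | value) - rtc_r;
}

static uint64_t delta_for_hi_write(uint64_t rtc, uint64_t rtc_r, uint32_t value)
{
    return (((uint64_t)value << 32) | (rtc & 0xFFFFFFFFULL)) - rtc_r;
}
```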
From: Anup Patel <anup.patel@wdc.com>

We have two new machine options "aia" and "aia-guests" available
for the RISC-V virt machine so let's document these options.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-23-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 docs/system/riscv/virt.rst | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/docs/system/riscv/virt.rst b/docs/system/riscv/virt.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/riscv/virt.rst
+++ b/docs/system/riscv/virt.rst
@@ -XXX,XX +XXX,XX @@ The following machine-specific options are supported:
   When this option is "on", ACLINT devices will be emulated instead of
   SiFive CLINT. When not specified, this option is assumed to be "off".

+- aia=[none|aplic|aplic-imsic]
+
+  This option allows selecting interrupt controller defined by the AIA
+  (advanced interrupt architecture) specification. The "aia=aplic" selects
+  APLIC (advanced platform level interrupt controller) to handle wired
+  interrupts whereas the "aia=aplic-imsic" selects APLIC and IMSIC (incoming
+  message signaled interrupt controller) to handle both wired interrupts and
+  MSIs. When not specified, this option is assumed to be "none" which selects
+  SiFive PLIC to handle wired interrupts.
+
+- aia-guests=nnn
+
+  The number of per-HART VS-level AIA IMSIC pages to be emulated for a guest
+  having AIA IMSIC (i.e. "aia=aplic-imsic" selected). When not specified,
+  the default number of per-HART VS-level AIA IMSIC pages is 0.
+
 Running Linux kernel
 --------------------
--
2.34.1

From: Jason Chien <jason.chien@sifive.com>

The variables whose values are given by cpu_riscv_read_rtc() should be
named "rtc". The variables whose values are given by
cpu_riscv_read_rtc_raw() should be named "rtc_r".

Signed-off-by: Jason Chien <jason.chien@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20230728082502.26439-2-jason.chien@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/intc/riscv_aclint.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/intc/riscv_aclint.c b/hw/intc/riscv_aclint.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/riscv_aclint.c
+++ b/hw/intc/riscv_aclint.c
@@ -XXX,XX +XXX,XX @@ static void riscv_aclint_mtimer_write_timecmp(RISCVAclintMTimerState *mtimer,
     uint64_t next;
     uint64_t diff;

-    uint64_t rtc_r = cpu_riscv_read_rtc(mtimer);
+    uint64_t rtc = cpu_riscv_read_rtc(mtimer);

     /* Compute the relative hartid w.r.t the socket */
     hartid = hartid - mtimer->hartid_base;

     mtimer->timecmp[hartid] = value;
-    if (mtimer->timecmp[hartid] <= rtc_r) {
+    if (mtimer->timecmp[hartid] <= rtc) {
         /*
          * If we're setting an MTIMECMP value in the "past",
          * immediately raise the timer interrupt
@@ -XXX,XX +XXX,XX @@ static void riscv_aclint_mtimer_write_timecmp(RISCVAclintMTimerState *mtimer,

     /* otherwise, set up the future timer interrupt */
     qemu_irq_lower(mtimer->timer_irqs[hartid]);
-    diff = mtimer->timecmp[hartid] - rtc_r;
+    diff = mtimer->timecmp[hartid] - rtc;
     /* back to ns (note args switched in muldiv64) */
     uint64_t ns_diff = muldiv64(diff, NANOSECONDS_PER_SECOND, timebase_freq);
--
2.41.0
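The `muldiv64(diff, NANOSECONDS_PER_SECOND, timebase_freq)` call kept as context above converts a tick delta to nanoseconds; the arguments are ordered so the multiplication happens before the division, otherwise sub-second tick counts would truncate to zero. A sketch using a 128-bit intermediate, which is an assumption standing in for QEMU's actual muldiv64 implementation:

```c
#include <stdint.h>

#define NANOSECONDS_PER_SECOND 1000000000ULL

/* a * b / c without overflowing 64 bits (GCC/Clang __int128 extension). */
static uint64_t muldiv64_sketch(uint64_t a, uint32_t b, uint32_t c)
{
    return (uint64_t)(((unsigned __int128)a * b) / c);
}

/* Tick delta to nanoseconds at a given timebase frequency. */
static uint64_t ticks_to_ns(uint64_t diff, uint32_t timebase_freq)
{
    return muldiv64_sketch(diff, NANOSECONDS_PER_SECOND, timebase_freq);
}
```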
From: Philipp Tomsich <philipp.tomsich@vrull.eu>

The Zb[abcs] support code still uses the RISCV_CPU macros to access
the configuration information (i.e., check whether an extension is
available/enabled). Now that we provide this information directly
from DisasContext, we can access this directly via the cfg_ptr field.

Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220202005249.3566542-5-philipp.tomsich@vrull.eu>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvb.c.inc | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvb.c.inc b/target/riscv/insn_trans/trans_rvb.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvb.c.inc
+++ b/target/riscv/insn_trans/trans_rvb.c.inc
@@ -XXX,XX +XXX,XX @@
  */

 #define REQUIRE_ZBA(ctx) do {                    \
-    if (!RISCV_CPU(ctx->cs)->cfg.ext_zba) {      \
+    if (!ctx->cfg_ptr->ext_zba) {                \
         return false;                            \
     }                                            \
 } while (0)

 #define REQUIRE_ZBB(ctx) do {                    \
-    if (!RISCV_CPU(ctx->cs)->cfg.ext_zbb) {      \
+    if (!ctx->cfg_ptr->ext_zbb) {                \
         return false;                            \
     }                                            \
 } while (0)

 #define REQUIRE_ZBC(ctx) do {                    \
-    if (!RISCV_CPU(ctx->cs)->cfg.ext_zbc) {      \
+    if (!ctx->cfg_ptr->ext_zbc) {                \
         return false;                            \
     }                                            \
 } while (0)

 #define REQUIRE_ZBS(ctx) do {                    \
-    if (!RISCV_CPU(ctx->cs)->cfg.ext_zbs) {      \
+    if (!ctx->cfg_ptr->ext_zbs) {                \
         return false;                            \
     }                                            \
 } while (0)
--
2.34.1

From: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>

We should not use types dependent on the host arch for target_ucontext.
This bug is found when running rv32 applications.

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20230811055438.1945-1-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 linux-user/riscv/signal.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/linux-user/riscv/signal.c b/linux-user/riscv/signal.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/riscv/signal.c
+++ b/linux-user/riscv/signal.c
@@ -XXX,XX +XXX,XX @@ struct target_sigcontext {
 }; /* cf. riscv-linux:arch/riscv/include/uapi/asm/ptrace.h */

 struct target_ucontext {
-    unsigned long uc_flags;
-    struct target_ucontext *uc_link;
+    abi_ulong uc_flags;
+    abi_ptr uc_link;
     target_stack_t uc_stack;
     target_sigset_t uc_sigmask;
     uint8_t __unused[1024 / 8 - sizeof(target_sigset_t)];
--
2.41.0
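The layout bug fixed above comes from host-sized fields: on an LP64 host, `unsigned long` and a struct pointer are 8 bytes each, so an rv32 guest's ucontext was laid out wrong. With fixed-width ABI types the layout is host-independent. A sketch with stand-in typedefs for abi_ulong/abi_ptr as they would resolve for an rv32 target:

```c
#include <stdint.h>

/* For an rv32 target, ABI types are 32-bit regardless of the host. */
typedef uint32_t abi_ulong;
typedef uint32_t abi_ptr;

/* Host-typed prefix: its size varies with the host's long/pointer width. */
struct host_prefix {
    unsigned long uc_flags;
    struct host_prefix *uc_link;
};

/* Target-typed prefix: always 4 + 4 = 8 bytes for rv32, on any host. */
struct target_prefix {
    abi_ulong uc_flags;
    abi_ptr uc_link;
};
```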
From: Anup Patel <anup.patel@wdc.com>

We extend virt machine to emulate both AIA IMSIC and AIA APLIC
devices only when "aia=aplic-imsic" parameter is passed along
with machine name in the QEMU command-line. The AIA IMSIC is
only a per-HART MSI controller so we use AIA APLIC in MSI-mode
to forward all wired interrupts as MSIs to the AIA IMSIC.

We also provide "aia-guests=<xyz>" parameter which can be used
to specify number of VS-level AIA IMSIC Guests MMIO pages for
each HART.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220204174700.534953-22-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 include/hw/riscv/virt.h | 17 +-
 hw/riscv/virt.c | 440 ++++++++++++++++++++++++++++++++--------
 hw/riscv/Kconfig | 1 +
 3 files changed, 374 insertions(+), 84 deletions(-)

diff --git a/include/hw/riscv/virt.h b/include/hw/riscv/virt.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/riscv/virt.h
+++ b/include/hw/riscv/virt.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/block/flash.h"
 #include "qom/object.h"

-#define VIRT_CPUS_MAX 32
-#define VIRT_SOCKETS_MAX 8
+#define VIRT_CPUS_MAX_BITS 3
+#define VIRT_CPUS_MAX (1 << VIRT_CPUS_MAX_BITS)
+#define VIRT_SOCKETS_MAX_BITS 2
+#define VIRT_SOCKETS_MAX (1 << VIRT_SOCKETS_MAX_BITS)

 #define TYPE_RISCV_VIRT_MACHINE MACHINE_TYPE_NAME("virt")
 typedef struct RISCVVirtState RISCVVirtState;
@@ -XXX,XX +XXX,XX @@ DECLARE_INSTANCE_CHECKER(RISCVVirtState, RISCV_VIRT_MACHINE,
 typedef enum RISCVVirtAIAType {
     VIRT_AIA_TYPE_NONE = 0,
     VIRT_AIA_TYPE_APLIC,
+    VIRT_AIA_TYPE_APLIC_IMSIC,
 } RISCVVirtAIAType;

 struct RISCVVirtState {
@@ -XXX,XX +XXX,XX @@ struct RISCVVirtState {
     int fdt_size;
     bool have_aclint;
     RISCVVirtAIAType aia_type;
+    int aia_guests;
 };

 enum {
@@ -XXX,XX +XXX,XX @@ enum {
     VIRT_UART0,
     VIRT_VIRTIO,
     VIRT_FW_CFG,
+    VIRT_IMSIC_M,
+    VIRT_IMSIC_S,
     VIRT_FLASH,
     VIRT_DRAM,
     VIRT_PCIE_MMIO,
@@ -XXX,XX +XXX,XX @@ enum {
     VIRTIO_NDEV = 0x35 /* Arbitrary maximum number of interrupts */
 };

-#define VIRT_IRQCHIP_NUM_SOURCES 127
+#define VIRT_IRQCHIP_IPI_MSI 1
+#define VIRT_IRQCHIP_NUM_MSIS 255
+#define VIRT_IRQCHIP_NUM_SOURCES VIRTIO_NDEV
 #define VIRT_IRQCHIP_NUM_PRIO_BITS 3
+#define VIRT_IRQCHIP_MAX_GUESTS_BITS 3
+#define VIRT_IRQCHIP_MAX_GUESTS ((1U << VIRT_IRQCHIP_MAX_GUESTS_BITS) - 1U)

 #define VIRT_PLIC_PRIORITY_BASE 0x04
 #define VIRT_PLIC_PENDING_BASE 0x1000
@@ -XXX,XX +XXX,XX @@ enum {
 #define FDT_PCI_INT_CELLS 1
 #define FDT_PLIC_INT_CELLS 1
 #define FDT_APLIC_INT_CELLS 2
+#define FDT_IMSIC_INT_CELLS 0
 #define FDT_MAX_INT_CELLS 2
 #define FDT_MAX_INT_MAP_WIDTH (FDT_PCI_ADDR_CELLS + FDT_PCI_INT_CELLS + \
                                1 + FDT_MAX_INT_CELLS)
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/riscv/numa.h"
 #include "hw/intc/riscv_aclint.h"
 #include "hw/intc/riscv_aplic.h"
+#include "hw/intc/riscv_imsic.h"
 #include "hw/intc/sifive_plic.h"
 #include "hw/misc/sifive_test.h"
 #include "chardev/char.h"
@@ -XXX,XX +XXX,XX @@
 #include "hw/pci-host/gpex.h"
 #include "hw/display/ramfb.h"

+#define VIRT_IMSIC_GROUP_MAX_SIZE (1U << IMSIC_MMIO_GROUP_MIN_SHIFT)
+#if VIRT_IMSIC_GROUP_MAX_SIZE < \
+    IMSIC_GROUP_SIZE(VIRT_CPUS_MAX_BITS, VIRT_IRQCHIP_MAX_GUESTS_BITS)
+#error "Can't accommodate single IMSIC group in address space"
+#endif
+
+#define VIRT_IMSIC_MAX_SIZE (VIRT_SOCKETS_MAX * \
+                             VIRT_IMSIC_GROUP_MAX_SIZE)
+#if 0x4000000 < VIRT_IMSIC_MAX_SIZE
+#error "Can't accommodate all IMSIC groups in address space"
+#endif
+
 static const MemMapEntry virt_memmap[] = {
     [VIRT_DEBUG] = { 0x0, 0x100 },
     [VIRT_MROM] = { 0x1000, 0xf000 },
@@ -XXX,XX +XXX,XX @@ static const MemMapEntry virt_memmap[] = {
     [VIRT_VIRTIO] = { 0x10001000, 0x1000 },
     [VIRT_FW_CFG] = { 0x10100000, 0x18 },
     [VIRT_FLASH] = { 0x20000000, 0x4000000 },
+    [VIRT_IMSIC_M] = { 0x24000000, VIRT_IMSIC_MAX_SIZE },
+    [VIRT_IMSIC_S] = { 0x28000000, VIRT_IMSIC_MAX_SIZE },
     [VIRT_PCIE_ECAM] = { 0x30000000, 0x10000000 },
     [VIRT_PCIE_MMIO] = { 0x40000000, 0x40000000 },
     [VIRT_DRAM] = { 0x80000000, 0x0 },
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aclint(RISCVVirtState *s,
 {
     int cpu;
     char *name;
-    unsigned long addr;
+    unsigned long addr, size;
     uint32_t aclint_cells_size;
     uint32_t *aclint_mswi_cells;
     uint32_t *aclint_sswi_cells;
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aclint(RISCVVirtState *s,
     }
     aclint_cells_size = s->soc[socket].num_harts * sizeof(uint32_t) * 2;

-    addr = memmap[VIRT_CLINT].base + (memmap[VIRT_CLINT].size * socket);
-    name = g_strdup_printf("/soc/mswi@%lx", addr);
-    qemu_fdt_add_subnode(mc->fdt, name);
-    qemu_fdt_setprop_string(mc->fdt, name, "compatible", "riscv,aclint-mswi");
-    qemu_fdt_setprop_cells(mc->fdt, name, "reg",
-                           0x0, addr, 0x0, RISCV_ACLINT_SWI_SIZE);
-    qemu_fdt_setprop(mc->fdt, name, "interrupts-extended",
-                     aclint_mswi_cells, aclint_cells_size);
-    qemu_fdt_setprop(mc->fdt, name, "interrupt-controller", NULL, 0);
-    qemu_fdt_setprop_cell(mc->fdt, name, "#interrupt-cells", 0);
-    riscv_socket_fdt_write_id(mc, mc->fdt, name, socket);
-    g_free(name);
+    if (s->aia_type != VIRT_AIA_TYPE_APLIC_IMSIC) {
+        addr = memmap[VIRT_CLINT].base + (memmap[VIRT_CLINT].size * socket);
+        name = g_strdup_printf("/soc/mswi@%lx", addr);
+        qemu_fdt_add_subnode(mc->fdt, name);
+        qemu_fdt_setprop_string(mc->fdt, name, "compatible",
+                                "riscv,aclint-mswi");
+        qemu_fdt_setprop_cells(mc->fdt, name, "reg",
+                               0x0, addr, 0x0, RISCV_ACLINT_SWI_SIZE);
+        qemu_fdt_setprop(mc->fdt, name, "interrupts-extended",
+                         aclint_mswi_cells, aclint_cells_size);
+        qemu_fdt_setprop(mc->fdt, name, "interrupt-controller", NULL, 0);
+        qemu_fdt_setprop_cell(mc->fdt, name, "#interrupt-cells", 0);
+        riscv_socket_fdt_write_id(mc, mc->fdt, name, socket);
+        g_free(name);
+    }

-    addr = memmap[VIRT_CLINT].base + RISCV_ACLINT_SWI_SIZE +
-        (memmap[VIRT_CLINT].size * socket);
+    if (s->aia_type == VIRT_AIA_TYPE_APLIC_IMSIC) {
+        addr = memmap[VIRT_CLINT].base +
+            (RISCV_ACLINT_DEFAULT_MTIMER_SIZE * socket);
+        size = RISCV_ACLINT_DEFAULT_MTIMER_SIZE;
+    } else {
+        addr = memmap[VIRT_CLINT].base + RISCV_ACLINT_SWI_SIZE +
+            (memmap[VIRT_CLINT].size * socket);
+        size = memmap[VIRT_CLINT].size - RISCV_ACLINT_SWI_SIZE;
+    }
     name = g_strdup_printf("/soc/mtimer@%lx", addr);
     qemu_fdt_add_subnode(mc->fdt, name);
     qemu_fdt_setprop_string(mc->fdt, name, "compatible",
                             "riscv,aclint-mtimer");
     qemu_fdt_setprop_cells(mc->fdt, name, "reg",
                            0x0, addr + RISCV_ACLINT_DEFAULT_MTIME,
-                           0x0, memmap[VIRT_CLINT].size - RISCV_ACLINT_SWI_SIZE -
-                                RISCV_ACLINT_DEFAULT_MTIME,
+                           0x0, size - RISCV_ACLINT_DEFAULT_MTIME,
                            0x0, addr + RISCV_ACLINT_DEFAULT_MTIMECMP,
                            0x0, RISCV_ACLINT_DEFAULT_MTIME);
     qemu_fdt_setprop(mc->fdt, name, "interrupts-extended",
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aclint(RISCVVirtState *s,
     riscv_socket_fdt_write_id(mc, mc->fdt, name, socket);
     g_free(name);

-    addr = memmap[VIRT_ACLINT_SSWI].base +
-        (memmap[VIRT_ACLINT_SSWI].size * socket);
-    name = g_strdup_printf("/soc/sswi@%lx", addr);
-    qemu_fdt_add_subnode(mc->fdt, name);
-    qemu_fdt_setprop_string(mc->fdt, name, "compatible", "riscv,aclint-sswi");
-    qemu_fdt_setprop_cells(mc->fdt, name, "reg",
-                           0x0, addr, 0x0, memmap[VIRT_ACLINT_SSWI].size);
-    qemu_fdt_setprop(mc->fdt, name, "interrupts-extended",
-                     aclint_sswi_cells, aclint_cells_size);
-    qemu_fdt_setprop(mc->fdt, name, "interrupt-controller", NULL, 0);
-    qemu_fdt_setprop_cell(mc->fdt, name, "#interrupt-cells", 0);
-    riscv_socket_fdt_write_id(mc, mc->fdt, name, socket);
-    g_free(name);
+    if (s->aia_type != VIRT_AIA_TYPE_APLIC_IMSIC) {
+        addr = memmap[VIRT_ACLINT_SSWI].base +
+            (memmap[VIRT_ACLINT_SSWI].size * socket);
+        name = g_strdup_printf("/soc/sswi@%lx", addr);
+        qemu_fdt_add_subnode(mc->fdt, name);
+        qemu_fdt_setprop_string(mc->fdt, name, "compatible",
+                                "riscv,aclint-sswi");
+        qemu_fdt_setprop_cells(mc->fdt, name, "reg",
+                               0x0, addr, 0x0, memmap[VIRT_ACLINT_SSWI].size);
+        qemu_fdt_setprop(mc->fdt, name, "interrupts-extended",
+                         aclint_sswi_cells, aclint_cells_size);
+        qemu_fdt_setprop(mc->fdt, name, "interrupt-controller", NULL, 0);
+        qemu_fdt_setprop_cell(mc->fdt, name, "#interrupt-cells", 0);
+        riscv_socket_fdt_write_id(mc, mc->fdt, name, socket);
+        g_free(name);
+    }

     g_free(aclint_mswi_cells);

From: Yong-Xuan Wang <yongxuan.wang@sifive.com>

In this patch, we create the APLIC and IMSIC FDT helper functions and
remove M mode AIA devices when using KVM acceleration.

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20230727102439.22554-2-yongxuan.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/riscv/virt.c | 290 +++++++++++++++++++++++-------------------
 1 file changed, 137 insertions(+), 153 deletions(-)

diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -XXX,XX +XXX,XX @@ static uint32_t imsic_num_bits(uint32_t count)
     return ret;
 }

-static void create_fdt_imsic(RISCVVirtState *s, const MemMapEntry *memmap,
-                             uint32_t *phandle, uint32_t *intc_phandles,
-                             uint32_t *msi_m_phandle, uint32_t *msi_s_phandle)
+static void create_fdt_one_imsic(RISCVVirtState *s, hwaddr base_addr,
+                                 uint32_t *intc_phandles, uint32_t msi_phandle,
+                                 bool m_mode, uint32_t imsic_guest_bits)
 {
     int cpu, socket;
     char *imsic_name;
     MachineState *ms = MACHINE(s);
     int socket_count = riscv_socket_count(ms);
-    uint32_t imsic_max_hart_per_socket, imsic_guest_bits;
+    uint32_t imsic_max_hart_per_socket;
     uint32_t *imsic_cells, *imsic_regs, imsic_addr, imsic_size;

-    *msi_m_phandle = (*phandle)++;
-    *msi_s_phandle = (*phandle)++;
     imsic_cells = g_new0(uint32_t, ms->smp.cpus * 2);
     imsic_regs = g_new0(uint32_t, socket_count * 4);

-    /* M-level IMSIC node */
     for (cpu = 0; cpu < ms->smp.cpus; cpu++) {
         imsic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
-        imsic_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_M_EXT);
+        imsic_cells[cpu * 2 + 1] = cpu_to_be32(m_mode ? IRQ_M_EXT : IRQ_S_EXT);
     }
-    imsic_max_hart_per_socket = 0;
-    for (socket = 0; socket < socket_count; socket++) {
-        imsic_addr = memmap[VIRT_IMSIC_M].base +
-                     socket * VIRT_IMSIC_GROUP_MAX_SIZE;
-        imsic_size = IMSIC_HART_SIZE(0) * s->soc[socket].num_harts;
-        imsic_regs[socket * 4 + 0] = 0;
-        imsic_regs[socket * 4 + 1] = cpu_to_be32(imsic_addr);
-        imsic_regs[socket * 4 + 2] = 0;
-        imsic_regs[socket * 4 + 3] = cpu_to_be32(imsic_size);
-        if (imsic_max_hart_per_socket < s->soc[socket].num_harts) {
-            imsic_max_hart_per_socket = s->soc[socket].num_harts;
-        }
-    }
-    imsic_name = g_strdup_printf("/soc/imsics@%lx",
-                                 (unsigned long)memmap[VIRT_IMSIC_M].base);
-    qemu_fdt_add_subnode(ms->fdt, imsic_name);
-    qemu_fdt_setprop_string(ms->fdt, imsic_name, "compatible",
-                            "riscv,imsics");
-    qemu_fdt_setprop_cell(ms->fdt, imsic_name, "#interrupt-cells",
-                          FDT_IMSIC_INT_CELLS);
-    qemu_fdt_setprop(ms->fdt, imsic_name, "interrupt-controller",
-                     NULL, 0);
-    qemu_fdt_setprop(ms->fdt, imsic_name, "msi-controller",
-                     NULL, 0);
-    qemu_fdt_setprop(ms->fdt, imsic_name, "interrupts-extended",
-                     imsic_cells, ms->smp.cpus * sizeof(uint32_t) * 2);
-    qemu_fdt_setprop(ms->fdt, imsic_name, "reg", imsic_regs,
-                     socket_count * sizeof(uint32_t) * 4);
-    qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,num-ids",
-                          VIRT_IRQCHIP_NUM_MSIS);
-    if (socket_count > 1) {
-        qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,hart-index-bits",
-                              imsic_num_bits(imsic_max_hart_per_socket));
-        qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,group-index-bits",
-                              imsic_num_bits(socket_count));
-        qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,group-index-shift",
-                              IMSIC_MMIO_GROUP_MIN_SHIFT);
-    }
-    qemu_fdt_setprop_cell(ms->fdt, imsic_name, "phandle", *msi_m_phandle);
-
-    g_free(imsic_name);

-    /* S-level IMSIC node */
-    for (cpu = 0; cpu < ms->smp.cpus; cpu++) {
-        imsic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
-        imsic_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_S_EXT);
-    }
-    imsic_guest_bits = imsic_num_bits(s->aia_guests + 1);
     imsic_max_hart_per_socket = 0;
     for (socket = 0; socket < socket_count; socket++) {
-        imsic_addr = memmap[VIRT_IMSIC_S].base +
-                     socket * VIRT_IMSIC_GROUP_MAX_SIZE;
+        imsic_addr = base_addr + socket * VIRT_IMSIC_GROUP_MAX_SIZE;
         imsic_size = IMSIC_HART_SIZE(imsic_guest_bits) *
                      s->soc[socket].num_harts;
         imsic_regs[socket * 4 + 0] = 0;
@@ -XXX,XX +XXX,XX @@ static void create_fdt_imsic(RISCVVirtState *s, const MemMapEntry *memmap,
         imsic_max_hart_per_socket = s->soc[socket].num_harts;
         }
     }
-    imsic_name = g_strdup_printf("/soc/imsics@%lx",
-                                 (unsigned long)memmap[VIRT_IMSIC_S].base);
+
+    imsic_name = g_strdup_printf("/soc/imsics@%lx", (unsigned long)base_addr);
     qemu_fdt_add_subnode(ms->fdt, imsic_name);
-    qemu_fdt_setprop_string(ms->fdt, imsic_name, "compatible",
-                            "riscv,imsics");
+    qemu_fdt_setprop_string(ms->fdt, imsic_name, "compatible", "riscv,imsics");
     qemu_fdt_setprop_cell(ms->fdt, imsic_name, "#interrupt-cells",
-                          FDT_IMSIC_INT_CELLS);
-    qemu_fdt_setprop(ms->fdt, imsic_name, "interrupt-controller",
-                     NULL, 0);
-    qemu_fdt_setprop(ms->fdt, imsic_name, "msi-controller",
-                     NULL, 0);
+                          FDT_IMSIC_INT_CELLS);
+    qemu_fdt_setprop(ms->fdt, imsic_name, "interrupt-controller", NULL, 0);
+    qemu_fdt_setprop(ms->fdt, imsic_name, "msi-controller", NULL, 0);
     qemu_fdt_setprop(ms->fdt, imsic_name, "interrupts-extended",
227
g_free(aclint_mtimer_cells);
128
- imsic_cells, ms->smp.cpus * sizeof(uint32_t) * 2);
228
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_plic(RISCVVirtState *s,
129
+ imsic_cells, ms->smp.cpus * sizeof(uint32_t) * 2);
229
g_free(plic_cells);
130
qemu_fdt_setprop(ms->fdt, imsic_name, "reg", imsic_regs,
131
- socket_count * sizeof(uint32_t) * 4);
132
+ socket_count * sizeof(uint32_t) * 4);
133
qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,num-ids",
134
- VIRT_IRQCHIP_NUM_MSIS);
135
+ VIRT_IRQCHIP_NUM_MSIS);
136
+
137
if (imsic_guest_bits) {
138
qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,guest-index-bits",
139
- imsic_guest_bits);
140
+ imsic_guest_bits);
141
}
142
+
143
if (socket_count > 1) {
144
qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,hart-index-bits",
145
- imsic_num_bits(imsic_max_hart_per_socket));
146
+ imsic_num_bits(imsic_max_hart_per_socket));
147
qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,group-index-bits",
148
- imsic_num_bits(socket_count));
149
+ imsic_num_bits(socket_count));
150
qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,group-index-shift",
151
- IMSIC_MMIO_GROUP_MIN_SHIFT);
152
+ IMSIC_MMIO_GROUP_MIN_SHIFT);
153
}
154
- qemu_fdt_setprop_cell(ms->fdt, imsic_name, "phandle", *msi_s_phandle);
155
- g_free(imsic_name);
156
+ qemu_fdt_setprop_cell(ms->fdt, imsic_name, "phandle", msi_phandle);
157
158
+ g_free(imsic_name);
159
g_free(imsic_regs);
160
g_free(imsic_cells);
230
}
161
}
231
162
232
-static void create_fdt_socket_aia(RISCVVirtState *s,
-                                  const MemMapEntry *memmap, int socket,
-                                  uint32_t *phandle, uint32_t *intc_phandles,
-                                  uint32_t *aplic_phandles)
+static uint32_t imsic_num_bits(uint32_t count)
+{
+    uint32_t ret = 0;
+
+    while (BIT(ret) < count) {
+        ret++;
+    }
+
+    return ret;
+}
+
+static void create_fdt_imsic(RISCVVirtState *s, const MemMapEntry *memmap,
+                             uint32_t *phandle, uint32_t *intc_phandles,
+                             uint32_t *msi_m_phandle, uint32_t *msi_s_phandle)
+{
+    int cpu, socket;
+    char *imsic_name;
+    MachineState *mc = MACHINE(s);
+    uint32_t imsic_max_hart_per_socket, imsic_guest_bits;
+    uint32_t *imsic_cells, *imsic_regs, imsic_addr, imsic_size;
+
+    *msi_m_phandle = (*phandle)++;
+    *msi_s_phandle = (*phandle)++;
+    imsic_cells = g_new0(uint32_t, mc->smp.cpus * 2);
+    imsic_regs = g_new0(uint32_t, riscv_socket_count(mc) * 4);
+
+    /* M-level IMSIC node */
+    for (cpu = 0; cpu < mc->smp.cpus; cpu++) {
+        imsic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
+        imsic_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_M_EXT);
+    }
+    imsic_max_hart_per_socket = 0;
+    for (socket = 0; socket < riscv_socket_count(mc); socket++) {
+        imsic_addr = memmap[VIRT_IMSIC_M].base +
+                     socket * VIRT_IMSIC_GROUP_MAX_SIZE;
+        imsic_size = IMSIC_HART_SIZE(0) * s->soc[socket].num_harts;
+        imsic_regs[socket * 4 + 0] = 0;
+        imsic_regs[socket * 4 + 1] = cpu_to_be32(imsic_addr);
+        imsic_regs[socket * 4 + 2] = 0;
+        imsic_regs[socket * 4 + 3] = cpu_to_be32(imsic_size);
+        if (imsic_max_hart_per_socket < s->soc[socket].num_harts) {
+            imsic_max_hart_per_socket = s->soc[socket].num_harts;
+        }

-static void create_fdt_socket_aplic(RISCVVirtState *s,
-                                    const MemMapEntry *memmap, int socket,
-                                    uint32_t msi_m_phandle,
-                                    uint32_t msi_s_phandle,
-                                    uint32_t *phandle,
-                                    uint32_t *intc_phandles,
-                                    uint32_t *aplic_phandles)
+static void create_fdt_imsic(RISCVVirtState *s, const MemMapEntry *memmap,
+                             uint32_t *phandle, uint32_t *intc_phandles,
+                             uint32_t *msi_m_phandle, uint32_t *msi_s_phandle)
+{
+    *msi_m_phandle = (*phandle)++;
+    *msi_s_phandle = (*phandle)++;
+
+    if (!kvm_enabled()) {
+        /* M-level IMSIC node */
+        create_fdt_one_imsic(s, memmap[VIRT_IMSIC_M].base, intc_phandles,
+                             *msi_m_phandle, true, 0);
+    }
+
+    /* S-level IMSIC node */
+    create_fdt_one_imsic(s, memmap[VIRT_IMSIC_S].base, intc_phandles,
+                         *msi_s_phandle, false,
+                         imsic_num_bits(s->aia_guests + 1));
+
+}
+
+static void create_fdt_one_aplic(RISCVVirtState *s, int socket,
+                                 unsigned long aplic_addr, uint32_t aplic_size,
+                                 uint32_t msi_phandle,
+                                 uint32_t *intc_phandles,
+                                 uint32_t aplic_phandle,
+                                 uint32_t aplic_child_phandle,
+                                 bool m_mode)
 {
     int cpu;
     char *aplic_name;
     uint32_t *aplic_cells;
-    unsigned long aplic_addr;
     MachineState *ms = MACHINE(s);
-    uint32_t aplic_m_phandle, aplic_s_phandle;

-    aplic_m_phandle = (*phandle)++;
-    aplic_s_phandle = (*phandle)++;
     aplic_cells = g_new0(uint32_t, s->soc[socket].num_harts * 2);

-    /* M-level APLIC node */
     for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
         aplic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
-        aplic_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_M_EXT);
+        aplic_cells[cpu * 2 + 1] = cpu_to_be32(m_mode ? IRQ_M_EXT : IRQ_S_EXT);
     }
-    aplic_addr = memmap[VIRT_APLIC_M].base +
-                 (memmap[VIRT_APLIC_M].size * socket);
+
     aplic_name = g_strdup_printf("/soc/aplic@%lx", aplic_addr);
     qemu_fdt_add_subnode(ms->fdt, aplic_name);
     qemu_fdt_setprop_string(ms->fdt, aplic_name, "compatible", "riscv,aplic");
     qemu_fdt_setprop_cell(ms->fdt, aplic_name,
-                          "#interrupt-cells", FDT_APLIC_INT_CELLS);
+                          "#interrupt-cells", FDT_APLIC_INT_CELLS);
     qemu_fdt_setprop(ms->fdt, aplic_name, "interrupt-controller", NULL, 0);
+
     if (s->aia_type == VIRT_AIA_TYPE_APLIC) {
         qemu_fdt_setprop(ms->fdt, aplic_name, "interrupts-extended",
-                         aplic_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 2);
+                         aplic_cells,
+                         s->soc[socket].num_harts * sizeof(uint32_t) * 2);
     } else {
-        qemu_fdt_setprop_cell(ms->fdt, aplic_name, "msi-parent",
-                              msi_m_phandle);
+        qemu_fdt_setprop_cell(ms->fdt, aplic_name, "msi-parent", msi_phandle);
     }
+
     qemu_fdt_setprop_cells(ms->fdt, aplic_name, "reg",
-                           0x0, aplic_addr, 0x0, memmap[VIRT_APLIC_M].size);
+                           0x0, aplic_addr, 0x0, aplic_size);
     qemu_fdt_setprop_cell(ms->fdt, aplic_name, "riscv,num-sources",
-                          VIRT_IRQCHIP_NUM_SOURCES);
-    qemu_fdt_setprop_cell(ms->fdt, aplic_name, "riscv,children",
-                          aplic_s_phandle);
-    qemu_fdt_setprop_cells(ms->fdt, aplic_name, "riscv,delegate",
-                           aplic_s_phandle, 0x1, VIRT_IRQCHIP_NUM_SOURCES);
+                          VIRT_IRQCHIP_NUM_SOURCES);
+
+    if (aplic_child_phandle) {
+        qemu_fdt_setprop_cell(ms->fdt, aplic_name, "riscv,children",
+                              aplic_child_phandle);
+        qemu_fdt_setprop_cells(ms->fdt, aplic_name, "riscv,delegate",
+                               aplic_child_phandle, 0x1,
+                               VIRT_IRQCHIP_NUM_SOURCES);
+    }
+
     riscv_socket_fdt_write_id(ms, aplic_name, socket);
-    qemu_fdt_setprop_cell(ms->fdt, aplic_name, "phandle", aplic_m_phandle);
+    qemu_fdt_setprop_cell(ms->fdt, aplic_name, "phandle", aplic_phandle);
+
     g_free(aplic_name);
+    g_free(aplic_cells);
+}

+        }
+    }
+    imsic_name = g_strdup_printf("/soc/imsics@%lx",
+                                 memmap[VIRT_IMSIC_M].base);
+    qemu_fdt_add_subnode(mc->fdt, imsic_name);
+    qemu_fdt_setprop_string(mc->fdt, imsic_name, "compatible",
+                            "riscv,imsics");
+    qemu_fdt_setprop_cell(mc->fdt, imsic_name, "#interrupt-cells",
+                          FDT_IMSIC_INT_CELLS);
+    qemu_fdt_setprop(mc->fdt, imsic_name, "interrupt-controller",
+                     NULL, 0);
+    qemu_fdt_setprop(mc->fdt, imsic_name, "msi-controller",
+                     NULL, 0);
+    qemu_fdt_setprop(mc->fdt, imsic_name, "interrupts-extended",
+                     imsic_cells, mc->smp.cpus * sizeof(uint32_t) * 2);
+    qemu_fdt_setprop(mc->fdt, imsic_name, "reg", imsic_regs,
+                     riscv_socket_count(mc) * sizeof(uint32_t) * 4);
+    qemu_fdt_setprop_cell(mc->fdt, imsic_name, "riscv,num-ids",
+                          VIRT_IRQCHIP_NUM_MSIS);
+    qemu_fdt_setprop_cells(mc->fdt, imsic_name, "riscv,ipi-id",
+                           VIRT_IRQCHIP_IPI_MSI);
+    if (riscv_socket_count(mc) > 1) {
+        qemu_fdt_setprop_cell(mc->fdt, imsic_name, "riscv,hart-index-bits",
+                              imsic_num_bits(imsic_max_hart_per_socket));
+        qemu_fdt_setprop_cell(mc->fdt, imsic_name, "riscv,group-index-bits",
+                              imsic_num_bits(riscv_socket_count(mc)));
+        qemu_fdt_setprop_cell(mc->fdt, imsic_name, "riscv,group-index-shift",
+                              IMSIC_MMIO_GROUP_MIN_SHIFT);
+    }
+    qemu_fdt_setprop_cell(mc->fdt, imsic_name, "phandle", *msi_m_phandle);
+    g_free(imsic_name);
+
+    /* S-level IMSIC node */
+    for (cpu = 0; cpu < mc->smp.cpus; cpu++) {
+        imsic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
+        imsic_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_S_EXT);
+    }
+    imsic_guest_bits = imsic_num_bits(s->aia_guests + 1);
+    imsic_max_hart_per_socket = 0;
+    for (socket = 0; socket < riscv_socket_count(mc); socket++) {
+        imsic_addr = memmap[VIRT_IMSIC_S].base +
+                     socket * VIRT_IMSIC_GROUP_MAX_SIZE;
+        imsic_size = IMSIC_HART_SIZE(imsic_guest_bits) *
+                     s->soc[socket].num_harts;
+        imsic_regs[socket * 4 + 0] = 0;
+        imsic_regs[socket * 4 + 1] = cpu_to_be32(imsic_addr);
+        imsic_regs[socket * 4 + 2] = 0;
+        imsic_regs[socket * 4 + 3] = cpu_to_be32(imsic_size);
+        if (imsic_max_hart_per_socket < s->soc[socket].num_harts) {
+            imsic_max_hart_per_socket = s->soc[socket].num_harts;
+        }
+    }
+    imsic_name = g_strdup_printf("/soc/imsics@%lx",
+                                 memmap[VIRT_IMSIC_S].base);
+    qemu_fdt_add_subnode(mc->fdt, imsic_name);
+    qemu_fdt_setprop_string(mc->fdt, imsic_name, "compatible",
+                            "riscv,imsics");
+    qemu_fdt_setprop_cell(mc->fdt, imsic_name, "#interrupt-cells",
+                          FDT_IMSIC_INT_CELLS);
+    qemu_fdt_setprop(mc->fdt, imsic_name, "interrupt-controller",
+                     NULL, 0);
+    qemu_fdt_setprop(mc->fdt, imsic_name, "msi-controller",
+                     NULL, 0);
+    qemu_fdt_setprop(mc->fdt, imsic_name, "interrupts-extended",
+                     imsic_cells, mc->smp.cpus * sizeof(uint32_t) * 2);
+    qemu_fdt_setprop(mc->fdt, imsic_name, "reg", imsic_regs,
+                     riscv_socket_count(mc) * sizeof(uint32_t) * 4);
+    qemu_fdt_setprop_cell(mc->fdt, imsic_name, "riscv,num-ids",
+                          VIRT_IRQCHIP_NUM_MSIS);
+    qemu_fdt_setprop_cells(mc->fdt, imsic_name, "riscv,ipi-id",
+                           VIRT_IRQCHIP_IPI_MSI);
+    if (imsic_guest_bits) {
+        qemu_fdt_setprop_cell(mc->fdt, imsic_name, "riscv,guest-index-bits",
+                              imsic_guest_bits);
+    }
+    if (riscv_socket_count(mc) > 1) {
+        qemu_fdt_setprop_cell(mc->fdt, imsic_name, "riscv,hart-index-bits",
+                              imsic_num_bits(imsic_max_hart_per_socket));
+        qemu_fdt_setprop_cell(mc->fdt, imsic_name, "riscv,group-index-bits",
+                              imsic_num_bits(riscv_socket_count(mc)));
+        qemu_fdt_setprop_cell(mc->fdt, imsic_name, "riscv,group-index-shift",
+                              IMSIC_MMIO_GROUP_MIN_SHIFT);
+    }
+    qemu_fdt_setprop_cell(mc->fdt, imsic_name, "phandle", *msi_s_phandle);
+    g_free(imsic_name);
+
+    g_free(imsic_regs);
+    g_free(imsic_cells);
+}
367
+
263
264
- /* S-level APLIC node */
265
- for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
266
- aplic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
267
- aplic_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_S_EXT);
368
+static void create_fdt_socket_aplic(RISCVVirtState *s,
268
+static void create_fdt_socket_aplic(RISCVVirtState *s,
369
+ const MemMapEntry *memmap, int socket,
269
+ const MemMapEntry *memmap, int socket,
370
+ uint32_t msi_m_phandle,
270
+ uint32_t msi_m_phandle,
371
+ uint32_t msi_s_phandle,
271
+ uint32_t msi_s_phandle,
372
+ uint32_t *phandle,
272
+ uint32_t *phandle,
373
+ uint32_t *intc_phandles,
273
+ uint32_t *intc_phandles,
374
+ uint32_t *aplic_phandles)
274
+ uint32_t *aplic_phandles)
375
{
275
+{
376
int cpu;
276
+ char *aplic_name;
377
char *aplic_name;
277
+ unsigned long aplic_addr;
378
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aia(RISCVVirtState *s,
278
+ MachineState *ms = MACHINE(s);
379
qemu_fdt_setprop_cell(mc->fdt, aplic_name,
279
+ uint32_t aplic_m_phandle, aplic_s_phandle;
380
"#interrupt-cells", FDT_APLIC_INT_CELLS);
280
+
381
qemu_fdt_setprop(mc->fdt, aplic_name, "interrupt-controller", NULL, 0);
281
+ aplic_m_phandle = (*phandle)++;
382
- qemu_fdt_setprop(mc->fdt, aplic_name, "interrupts-extended",
282
+ aplic_s_phandle = (*phandle)++;
383
- aplic_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 2);
283
+
384
+ if (s->aia_type == VIRT_AIA_TYPE_APLIC) {
284
+ if (!kvm_enabled()) {
385
+ qemu_fdt_setprop(mc->fdt, aplic_name, "interrupts-extended",
285
+ /* M-level APLIC node */
386
+ aplic_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 2);
286
+ aplic_addr = memmap[VIRT_APLIC_M].base +
387
+ } else {
287
+ (memmap[VIRT_APLIC_M].size * socket);
388
+ qemu_fdt_setprop_cell(mc->fdt, aplic_name, "msi-parent",
288
+ create_fdt_one_aplic(s, socket, aplic_addr, memmap[VIRT_APLIC_M].size,
389
+ msi_m_phandle);
289
+ msi_m_phandle, intc_phandles,
390
+ }
290
+ aplic_m_phandle, aplic_s_phandle,
391
qemu_fdt_setprop_cells(mc->fdt, aplic_name, "reg",
291
+ true);
392
0x0, aplic_addr, 0x0, memmap[VIRT_APLIC_M].size);
292
}
393
qemu_fdt_setprop_cell(mc->fdt, aplic_name, "riscv,num-sources",
293
+
394
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aia(RISCVVirtState *s,
294
+ /* S-level APLIC node */
395
qemu_fdt_setprop_cell(mc->fdt, aplic_name,
295
aplic_addr = memmap[VIRT_APLIC_S].base +
396
"#interrupt-cells", FDT_APLIC_INT_CELLS);
296
(memmap[VIRT_APLIC_S].size * socket);
397
qemu_fdt_setprop(mc->fdt, aplic_name, "interrupt-controller", NULL, 0);
297
+ create_fdt_one_aplic(s, socket, aplic_addr, memmap[VIRT_APLIC_S].size,
398
- qemu_fdt_setprop(mc->fdt, aplic_name, "interrupts-extended",
298
+ msi_s_phandle, intc_phandles,
399
- aplic_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 2);
299
+ aplic_s_phandle, 0,
400
+ if (s->aia_type == VIRT_AIA_TYPE_APLIC) {
300
+ false);
401
+ qemu_fdt_setprop(mc->fdt, aplic_name, "interrupts-extended",
301
+
402
+ aplic_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 2);
302
aplic_name = g_strdup_printf("/soc/aplic@%lx", aplic_addr);
403
+ } else {
303
- qemu_fdt_add_subnode(ms->fdt, aplic_name);
404
+ qemu_fdt_setprop_cell(mc->fdt, aplic_name, "msi-parent",
304
- qemu_fdt_setprop_string(ms->fdt, aplic_name, "compatible", "riscv,aplic");
405
+ msi_s_phandle);
305
- qemu_fdt_setprop_cell(ms->fdt, aplic_name,
406
+ }
306
- "#interrupt-cells", FDT_APLIC_INT_CELLS);
407
qemu_fdt_setprop_cells(mc->fdt, aplic_name, "reg",
307
- qemu_fdt_setprop(ms->fdt, aplic_name, "interrupt-controller", NULL, 0);
408
0x0, aplic_addr, 0x0, memmap[VIRT_APLIC_S].size);
308
- if (s->aia_type == VIRT_AIA_TYPE_APLIC) {
409
qemu_fdt_setprop_cell(mc->fdt, aplic_name, "riscv,num-sources",
309
- qemu_fdt_setprop(ms->fdt, aplic_name, "interrupts-extended",
410
@@ -XXX,XX +XXX,XX @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
310
- aplic_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 2);
411
bool is_32_bit, uint32_t *phandle,
311
- } else {
412
uint32_t *irq_mmio_phandle,
312
- qemu_fdt_setprop_cell(ms->fdt, aplic_name, "msi-parent",
413
uint32_t *irq_pcie_phandle,
313
- msi_s_phandle);
414
- uint32_t *irq_virtio_phandle)
314
- }
415
+ uint32_t *irq_virtio_phandle,
315
- qemu_fdt_setprop_cells(ms->fdt, aplic_name, "reg",
416
+ uint32_t *msi_pcie_phandle)
316
- 0x0, aplic_addr, 0x0, memmap[VIRT_APLIC_S].size);
417
{
317
- qemu_fdt_setprop_cell(ms->fdt, aplic_name, "riscv,num-sources",
418
- int socket;
318
- VIRT_IRQCHIP_NUM_SOURCES);
419
char *clust_name;
319
- riscv_socket_fdt_write_id(ms, aplic_name, socket);
420
- uint32_t *intc_phandles;
320
- qemu_fdt_setprop_cell(ms->fdt, aplic_name, "phandle", aplic_s_phandle);
421
+ int socket, phandle_pos;
321
422
MachineState *mc = MACHINE(s);
322
if (!socket) {
423
- uint32_t xplic_phandles[MAX_NODES];
323
platform_bus_add_all_fdt_nodes(ms->fdt, aplic_name,
424
+ uint32_t msi_m_phandle = 0, msi_s_phandle = 0;
324
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aplic(RISCVVirtState *s,
425
+ uint32_t *intc_phandles, xplic_phandles[MAX_NODES];
325
426
326
g_free(aplic_name);
427
qemu_fdt_add_subnode(mc->fdt, "/cpus");
327
428
qemu_fdt_setprop_cell(mc->fdt, "/cpus", "timebase-frequency",
328
- g_free(aplic_cells);
429
@@ -XXX,XX +XXX,XX @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
329
aplic_phandles[socket] = aplic_s_phandle;
430
qemu_fdt_setprop_cell(mc->fdt, "/cpus", "#address-cells", 0x1);
330
}
431
qemu_fdt_add_subnode(mc->fdt, "/cpus/cpu-map");
331
432
332
@@ -XXX,XX +XXX,XX @@ static DeviceState *virt_create_aia(RISCVVirtAIAType aia_type, int aia_guests,
433
+ intc_phandles = g_new0(uint32_t, mc->smp.cpus);
333
int i;
434
+
334
hwaddr addr;
435
+ phandle_pos = mc->smp.cpus;
335
uint32_t guest_bits;
436
for (socket = (riscv_socket_count(mc) - 1); socket >= 0; socket--) {
336
- DeviceState *aplic_m;
437
+ phandle_pos -= s->soc[socket].num_harts;
337
- bool msimode = (aia_type == VIRT_AIA_TYPE_APLIC_IMSIC) ? true : false;
438
+
338
+ DeviceState *aplic_s = NULL;
439
clust_name = g_strdup_printf("/cpus/cpu-map/cluster%d", socket);
339
+ DeviceState *aplic_m = NULL;
440
qemu_fdt_add_subnode(mc->fdt, clust_name);
340
+ bool msimode = aia_type == VIRT_AIA_TYPE_APLIC_IMSIC;
441
341
442
- intc_phandles = g_new0(uint32_t, s->soc[socket].num_harts);
342
if (msimode) {
343
- /* Per-socket M-level IMSICs */
344
- addr = memmap[VIRT_IMSIC_M].base + socket * VIRT_IMSIC_GROUP_MAX_SIZE;
345
- for (i = 0; i < hart_count; i++) {
346
- riscv_imsic_create(addr + i * IMSIC_HART_SIZE(0),
347
- base_hartid + i, true, 1,
348
- VIRT_IRQCHIP_NUM_MSIS);
349
+ if (!kvm_enabled()) {
350
+ /* Per-socket M-level IMSICs */
351
+ addr = memmap[VIRT_IMSIC_M].base +
352
+ socket * VIRT_IMSIC_GROUP_MAX_SIZE;
353
+ for (i = 0; i < hart_count; i++) {
354
+ riscv_imsic_create(addr + i * IMSIC_HART_SIZE(0),
355
+ base_hartid + i, true, 1,
356
+ VIRT_IRQCHIP_NUM_MSIS);
357
+ }
358
}
359
360
/* Per-socket S-level IMSICs */
361
@@ -XXX,XX +XXX,XX @@ static DeviceState *virt_create_aia(RISCVVirtAIAType aia_type, int aia_guests,
362
}
363
}
364
365
- /* Per-socket M-level APLIC */
366
- aplic_m = riscv_aplic_create(
367
- memmap[VIRT_APLIC_M].base + socket * memmap[VIRT_APLIC_M].size,
368
- memmap[VIRT_APLIC_M].size,
369
- (msimode) ? 0 : base_hartid,
370
- (msimode) ? 0 : hart_count,
371
- VIRT_IRQCHIP_NUM_SOURCES,
372
- VIRT_IRQCHIP_NUM_PRIO_BITS,
373
- msimode, true, NULL);
443
-
374
-
444
create_fdt_socket_cpus(s, socket, clust_name, phandle,
375
- if (aplic_m) {
445
- is_32_bit, intc_phandles);
376
- /* Per-socket S-level APLIC */
446
+ is_32_bit, &intc_phandles[phandle_pos]);
377
- riscv_aplic_create(
447
378
- memmap[VIRT_APLIC_S].base + socket * memmap[VIRT_APLIC_S].size,
448
create_fdt_socket_memory(s, memmap, socket);
379
- memmap[VIRT_APLIC_S].size,
449
380
- (msimode) ? 0 : base_hartid,
450
+ g_free(clust_name);
381
- (msimode) ? 0 : hart_count,
451
+
382
- VIRT_IRQCHIP_NUM_SOURCES,
452
if (!kvm_enabled()) {
383
- VIRT_IRQCHIP_NUM_PRIO_BITS,
453
if (s->have_aclint) {
384
- msimode, false, aplic_m);
454
- create_fdt_socket_aclint(s, memmap, socket, intc_phandles);
385
+ if (!kvm_enabled()) {
455
+ create_fdt_socket_aclint(s, memmap, socket,
386
+ /* Per-socket M-level APLIC */
456
+ &intc_phandles[phandle_pos]);
387
+ aplic_m = riscv_aplic_create(memmap[VIRT_APLIC_M].base +
457
} else {
388
+ socket * memmap[VIRT_APLIC_M].size,
458
- create_fdt_socket_clint(s, memmap, socket, intc_phandles);
389
+ memmap[VIRT_APLIC_M].size,
459
+ create_fdt_socket_clint(s, memmap, socket,
390
+ (msimode) ? 0 : base_hartid,
460
+ &intc_phandles[phandle_pos]);
391
+ (msimode) ? 0 : hart_count,
461
}
392
+ VIRT_IRQCHIP_NUM_SOURCES,
462
}
393
+ VIRT_IRQCHIP_NUM_PRIO_BITS,
463
+ }
394
+ msimode, true, NULL);
464
+
395
}
465
+ if (s->aia_type == VIRT_AIA_TYPE_APLIC_IMSIC) {
396
466
+ create_fdt_imsic(s, memmap, phandle, intc_phandles,
397
- return aplic_m;
467
+ &msi_m_phandle, &msi_s_phandle);
398
+ /* Per-socket S-level APLIC */
468
+ *msi_pcie_phandle = msi_s_phandle;
399
+ aplic_s = riscv_aplic_create(memmap[VIRT_APLIC_S].base +
469
+ }
400
+ socket * memmap[VIRT_APLIC_S].size,
470
+
401
+ memmap[VIRT_APLIC_S].size,
471
+ phandle_pos = mc->smp.cpus;
402
+ (msimode) ? 0 : base_hartid,
472
+ for (socket = (riscv_socket_count(mc) - 1); socket >= 0; socket--) {
403
+ (msimode) ? 0 : hart_count,
473
+ phandle_pos -= s->soc[socket].num_harts;
404
+ VIRT_IRQCHIP_NUM_SOURCES,
474
405
+ VIRT_IRQCHIP_NUM_PRIO_BITS,
475
if (s->aia_type == VIRT_AIA_TYPE_NONE) {
406
+ msimode, false, aplic_m);
476
create_fdt_socket_plic(s, memmap, socket, phandle,
407
+
477
- intc_phandles, xplic_phandles);
408
+ return kvm_enabled() ? aplic_s : aplic_m;
478
+ &intc_phandles[phandle_pos], xplic_phandles);
479
} else {
480
- create_fdt_socket_aia(s, memmap, socket, phandle,
481
- intc_phandles, xplic_phandles);
482
+ create_fdt_socket_aplic(s, memmap, socket,
483
+ msi_m_phandle, msi_s_phandle, phandle,
484
+ &intc_phandles[phandle_pos], xplic_phandles);
485
}
486
-
487
- g_free(intc_phandles);
488
- g_free(clust_name);
489
}
490
491
+ g_free(intc_phandles);
492
+
493
for (socket = 0; socket < riscv_socket_count(mc); socket++) {
494
if (socket == 0) {
495
*irq_mmio_phandle = xplic_phandles[socket];
496
@@ -XXX,XX +XXX,XX @@ static void create_fdt_virtio(RISCVVirtState *s, const MemMapEntry *memmap,
497
}
409
}
498
410
499
static void create_fdt_pcie(RISCVVirtState *s, const MemMapEntry *memmap,
411
static void create_platform_bus(RISCVVirtState *s, DeviceState *irqchip)
500
- uint32_t irq_pcie_phandle)
501
+ uint32_t irq_pcie_phandle,
502
+ uint32_t msi_pcie_phandle)
503
{
504
char *name;
505
MachineState *mc = MACHINE(s);
506
@@ -XXX,XX +XXX,XX @@ static void create_fdt_pcie(RISCVVirtState *s, const MemMapEntry *memmap,
507
qemu_fdt_setprop_cells(mc->fdt, name, "bus-range", 0,
508
memmap[VIRT_PCIE_ECAM].size / PCIE_MMCFG_SIZE_MIN - 1);
509
qemu_fdt_setprop(mc->fdt, name, "dma-coherent", NULL, 0);
510
+ if (s->aia_type == VIRT_AIA_TYPE_APLIC_IMSIC) {
511
+ qemu_fdt_setprop_cell(mc->fdt, name, "msi-parent", msi_pcie_phandle);
512
+ }
513
qemu_fdt_setprop_cells(mc->fdt, name, "reg", 0,
514
memmap[VIRT_PCIE_ECAM].base, 0, memmap[VIRT_PCIE_ECAM].size);
515
qemu_fdt_setprop_sized_cells(mc->fdt, name, "ranges",
516
@@ -XXX,XX +XXX,XX @@ static void create_fdt(RISCVVirtState *s, const MemMapEntry *memmap,
517
uint64_t mem_size, const char *cmdline, bool is_32_bit)
518
{
519
MachineState *mc = MACHINE(s);
520
- uint32_t phandle = 1, irq_mmio_phandle = 1;
521
+ uint32_t phandle = 1, irq_mmio_phandle = 1, msi_pcie_phandle = 1;
522
uint32_t irq_pcie_phandle = 1, irq_virtio_phandle = 1;
523
524
if (mc->dtb) {
525
@@ -XXX,XX +XXX,XX @@ static void create_fdt(RISCVVirtState *s, const MemMapEntry *memmap,
526
qemu_fdt_setprop_cell(mc->fdt, "/soc", "#address-cells", 0x2);
527
528
create_fdt_sockets(s, memmap, is_32_bit, &phandle,
529
- &irq_mmio_phandle, &irq_pcie_phandle, &irq_virtio_phandle);
530
+ &irq_mmio_phandle, &irq_pcie_phandle, &irq_virtio_phandle,
531
+ &msi_pcie_phandle);
532
533
create_fdt_virtio(s, memmap, irq_virtio_phandle);
534
535
- create_fdt_pcie(s, memmap, irq_pcie_phandle);
536
+ create_fdt_pcie(s, memmap, irq_pcie_phandle, msi_pcie_phandle);
537
538
create_fdt_reset(s, memmap, &phandle);
539
540
@@ -XXX,XX +XXX,XX @@ static DeviceState *virt_create_plic(const MemMapEntry *memmap, int socket,
541
return ret;
542
}
543
544
-static DeviceState *virt_create_aia(RISCVVirtAIAType aia_type,
545
+static DeviceState *virt_create_aia(RISCVVirtAIAType aia_type, int aia_guests,
546
const MemMapEntry *memmap, int socket,
547
int base_hartid, int hart_count)
548
{
549
+ int i;
550
+ hwaddr addr;
551
+ uint32_t guest_bits;
552
DeviceState *aplic_m;
553
+ bool msimode = (aia_type == VIRT_AIA_TYPE_APLIC_IMSIC) ? true : false;
554
+
555
+ if (msimode) {
556
+ /* Per-socket M-level IMSICs */
557
+ addr = memmap[VIRT_IMSIC_M].base + socket * VIRT_IMSIC_GROUP_MAX_SIZE;
558
+ for (i = 0; i < hart_count; i++) {
559
+ riscv_imsic_create(addr + i * IMSIC_HART_SIZE(0),
560
+ base_hartid + i, true, 1,
561
+ VIRT_IRQCHIP_NUM_MSIS);
562
+ }
563
+
564
+ /* Per-socket S-level IMSICs */
565
+ guest_bits = imsic_num_bits(aia_guests + 1);
566
+ addr = memmap[VIRT_IMSIC_S].base + socket * VIRT_IMSIC_GROUP_MAX_SIZE;
567
+ for (i = 0; i < hart_count; i++) {
568
+ riscv_imsic_create(addr + i * IMSIC_HART_SIZE(guest_bits),
569
+ base_hartid + i, false, 1 + aia_guests,
570
+ VIRT_IRQCHIP_NUM_MSIS);
571
+ }
572
+ }
573
574
/* Per-socket M-level APLIC */
575
aplic_m = riscv_aplic_create(
576
memmap[VIRT_APLIC_M].base + socket * memmap[VIRT_APLIC_M].size,
577
memmap[VIRT_APLIC_M].size,
578
- base_hartid, hart_count,
579
+ (msimode) ? 0 : base_hartid,
580
+ (msimode) ? 0 : hart_count,
581
VIRT_IRQCHIP_NUM_SOURCES,
582
VIRT_IRQCHIP_NUM_PRIO_BITS,
583
- false, true, NULL);
584
+ msimode, true, NULL);
585
586
if (aplic_m) {
587
/* Per-socket S-level APLIC */
588
riscv_aplic_create(
589
memmap[VIRT_APLIC_S].base + socket * memmap[VIRT_APLIC_S].size,
590
memmap[VIRT_APLIC_S].size,
591
- base_hartid, hart_count,
592
+ (msimode) ? 0 : base_hartid,
593
+ (msimode) ? 0 : hart_count,
594
VIRT_IRQCHIP_NUM_SOURCES,
595
VIRT_IRQCHIP_NUM_PRIO_BITS,
596
- false, false, aplic_m);
597
+ msimode, false, aplic_m);
598
}
599
600
return aplic_m;
601
@@ -XXX,XX +XXX,XX @@ static void virt_machine_init(MachineState *machine)
602
sysbus_realize(SYS_BUS_DEVICE(&s->soc[i]), &error_abort);
603
604
if (!kvm_enabled()) {
605
- /* Per-socket CLINT */
606
- riscv_aclint_swi_create(
607
- memmap[VIRT_CLINT].base + i * memmap[VIRT_CLINT].size,
608
- base_hartid, hart_count, false);
609
- riscv_aclint_mtimer_create(
610
- memmap[VIRT_CLINT].base + i * memmap[VIRT_CLINT].size +
611
- RISCV_ACLINT_SWI_SIZE,
612
- RISCV_ACLINT_DEFAULT_MTIMER_SIZE, base_hartid, hart_count,
613
- RISCV_ACLINT_DEFAULT_MTIMECMP, RISCV_ACLINT_DEFAULT_MTIME,
614
- RISCV_ACLINT_DEFAULT_TIMEBASE_FREQ, true);
615
-
616
- /* Per-socket ACLINT SSWI */
617
if (s->have_aclint) {
618
+ if (s->aia_type == VIRT_AIA_TYPE_APLIC_IMSIC) {
619
+ /* Per-socket ACLINT MTIMER */
620
+ riscv_aclint_mtimer_create(memmap[VIRT_CLINT].base +
621
+ i * RISCV_ACLINT_DEFAULT_MTIMER_SIZE,
622
+ RISCV_ACLINT_DEFAULT_MTIMER_SIZE,
623
+ base_hartid, hart_count,
624
+ RISCV_ACLINT_DEFAULT_MTIMECMP,
625
+ RISCV_ACLINT_DEFAULT_MTIME,
626
+ RISCV_ACLINT_DEFAULT_TIMEBASE_FREQ, true);
627
+ } else {
628
+ /* Per-socket ACLINT MSWI, MTIMER, and SSWI */
629
+ riscv_aclint_swi_create(memmap[VIRT_CLINT].base +
630
+ i * memmap[VIRT_CLINT].size,
631
+ base_hartid, hart_count, false);
632
+ riscv_aclint_mtimer_create(memmap[VIRT_CLINT].base +
633
+ i * memmap[VIRT_CLINT].size + RISCV_ACLINT_SWI_SIZE,
634
+ RISCV_ACLINT_DEFAULT_MTIMER_SIZE,
635
+ base_hartid, hart_count,
636
+ RISCV_ACLINT_DEFAULT_MTIMECMP,
637
+ RISCV_ACLINT_DEFAULT_MTIME,
638
+ RISCV_ACLINT_DEFAULT_TIMEBASE_FREQ, true);
639
+ riscv_aclint_swi_create(memmap[VIRT_ACLINT_SSWI].base +
640
+ i * memmap[VIRT_ACLINT_SSWI].size,
641
+ base_hartid, hart_count, true);
642
+ }
643
+ } else {
644
+ /* Per-socket SiFive CLINT */
645
riscv_aclint_swi_create(
646
- memmap[VIRT_ACLINT_SSWI].base +
647
- i * memmap[VIRT_ACLINT_SSWI].size,
648
- base_hartid, hart_count, true);
649
+ memmap[VIRT_CLINT].base + i * memmap[VIRT_CLINT].size,
650
+ base_hartid, hart_count, false);
651
+ riscv_aclint_mtimer_create(memmap[VIRT_CLINT].base +
652
+ i * memmap[VIRT_CLINT].size + RISCV_ACLINT_SWI_SIZE,
653
+ RISCV_ACLINT_DEFAULT_MTIMER_SIZE,
654
+ base_hartid, hart_count,
655
+ RISCV_ACLINT_DEFAULT_MTIMECMP,
656
+ RISCV_ACLINT_DEFAULT_MTIME,
657
+ RISCV_ACLINT_DEFAULT_TIMEBASE_FREQ, true);
658
}
659
}
660
661
@@ -XXX,XX +XXX,XX @@ static void virt_machine_init(MachineState *machine)
662
s->irqchip[i] = virt_create_plic(memmap, i,
663
base_hartid, hart_count);
664
} else {
665
- s->irqchip[i] = virt_create_aia(s->aia_type, memmap, i,
666
- base_hartid, hart_count);
667
+ s->irqchip[i] = virt_create_aia(s->aia_type, s->aia_guests,
668
+ memmap, i, base_hartid,
669
+ hart_count);
670
}
671
672
/* Try to use different IRQCHIP instance based device type */
673
@@ -XXX,XX +XXX,XX @@ static void virt_machine_instance_init(Object *obj)
674
{
675
}
676
677
+static char *virt_get_aia_guests(Object *obj, Error **errp)
678
+{
679
+ RISCVVirtState *s = RISCV_VIRT_MACHINE(obj);
680
+ char val[32];
681
+
682
+ sprintf(val, "%d", s->aia_guests);
683
+ return g_strdup(val);
684
+}
685
+
686
+static void virt_set_aia_guests(Object *obj, const char *val, Error **errp)
687
+{
688
+ RISCVVirtState *s = RISCV_VIRT_MACHINE(obj);
689
+
690
+ s->aia_guests = atoi(val);
691
+ if (s->aia_guests < 0 || s->aia_guests > VIRT_IRQCHIP_MAX_GUESTS) {
692
+ error_setg(errp, "Invalid number of AIA IMSIC guests");
693
+ error_append_hint(errp, "Valid values must be between 0 and %d.\n",
694
+ VIRT_IRQCHIP_MAX_GUESTS);
695
+ }
696
+}
697
+
698
static char *virt_get_aia(Object *obj, Error **errp)
699
{
700
RISCVVirtState *s = RISCV_VIRT_MACHINE(obj);
701
@@ -XXX,XX +XXX,XX @@ static char *virt_get_aia(Object *obj, Error **errp)
702
case VIRT_AIA_TYPE_APLIC:
703
val = "aplic";
704
break;
705
+ case VIRT_AIA_TYPE_APLIC_IMSIC:
706
+ val = "aplic-imsic";
707
+ break;
708
default:
709
val = "none";
710
break;
711
@@ -XXX,XX +XXX,XX @@ static void virt_set_aia(Object *obj, const char *val, Error **errp)
712
s->aia_type = VIRT_AIA_TYPE_NONE;
713
} else if (!strcmp(val, "aplic")) {
714
s->aia_type = VIRT_AIA_TYPE_APLIC;
715
+ } else if (!strcmp(val, "aplic-imsic")) {
716
+ s->aia_type = VIRT_AIA_TYPE_APLIC_IMSIC;
717
} else {
718
error_setg(errp, "Invalid AIA interrupt controller type");
719
- error_append_hint(errp, "Valid values are none, and aplic.\n");
720
+ error_append_hint(errp, "Valid values are none, aplic, and "
721
+ "aplic-imsic.\n");
722
}
723
}
724
725
@@ -XXX,XX +XXX,XX @@ static void virt_set_aclint(Object *obj, bool value, Error **errp)
726
727
static void virt_machine_class_init(ObjectClass *oc, void *data)
728
{
729
+ char str[128];
730
MachineClass *mc = MACHINE_CLASS(oc);
731
732
mc->desc = "RISC-V VirtIO board";
733
@@ -XXX,XX +XXX,XX @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
734
object_class_property_set_description(oc, "aia",
735
"Set type of AIA interrupt "
736
"conttoller. Valid values are "
737
- "none, and aplic.");
738
+ "none, aplic, and aplic-imsic.");
739
+
740
+ object_class_property_add_str(oc, "aia-guests",
741
+ virt_get_aia_guests,
742
+ virt_set_aia_guests);
743
+ sprintf(str, "Set number of guest MMIO pages for AIA IMSIC. Valid value "
744
+ "should be between 0 and %d.", VIRT_IRQCHIP_MAX_GUESTS);
745
+ object_class_property_set_description(oc, "aia-guests", str);
746
}
747
748
static const TypeInfo virt_machine_typeinfo = {
749
diff --git a/hw/riscv/Kconfig b/hw/riscv/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/Kconfig
+++ b/hw/riscv/Kconfig
@@ -XXX,XX +XXX,XX @@ config RISCV_VIRT
     select SERIAL
     select RISCV_ACLINT
     select RISCV_APLIC
+    select RISCV_IMSIC
     select SIFIVE_PLIC
     select SIFIVE_TEST
     select VIRTIO_MMIO
--
2.34.1
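The two machine properties added above can be exercised together on the command line; a minimal sketch (the guest count of 2 is illustrative, and the valid range is capped by VIRT_IRQCHIP_MAX_GUESTS as enforced in virt_set_aia_guests()):

```shell
# Request the APLIC+IMSIC irqchip on the virt board and reserve two
# guest MMIO pages per IMSIC via the new "aia-guests" property.
qemu-system-riscv64 -machine virt,aia=aplic-imsic,aia-guests=2 -nographic
```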
From: Guo Ren <ren_guo@c-sky.com>

The highest bits of the PTE are now used for svpbmt (ref: [1], [2]), so we
need to ignore them; they cannot be a part of the ppn.

1: The RISC-V Instruction Set Manual, Volume II: Privileged Architecture
   4.4 Sv39: Page-Based 39-bit Virtual-Memory System
   4.5 Sv48: Page-Based 48-bit Virtual-Memory System

2: https://github.com/riscv/virtual-memory/blob/main/specs/663-Svpbmt-diff.pdf

Signed-off-by: Guo Ren <ren_guo@c-sky.com>
Reviewed-by: Liu Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Cc: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220204022658.18097-2-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.h        | 15 +++++++++++++++
 target/riscv/cpu_bits.h   |  3 +++
 target/riscv/cpu_helper.c | 13 ++++++++++++-
 3 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
     bool ext_counters;
     bool ext_ifencei;
     bool ext_icsr;
+    bool ext_svnapot;
+    bool ext_svpbmt;
     bool ext_zfh;
     bool ext_zfhmin;
     bool ext_zve32f;
@@ -XXX,XX +XXX,XX @@ static inline int riscv_cpu_xlen(CPURISCVState *env)
     return 16 << env->xl;
 }
 
+#ifdef TARGET_RISCV32
+#define riscv_cpu_sxl(env)  ((void)(env), MXL_RV32)
+#else
+static inline RISCVMXL riscv_cpu_sxl(CPURISCVState *env)
+{
+#ifdef CONFIG_USER_ONLY
+    return env->misa_mxl;
+#else
+    return get_field(env->mstatus, MSTATUS64_SXL);
+#endif
+}
+#endif
+
 /*
  * Encode LMUL to lmul as follows:
  *     LMUL vlmul lmul
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_bits.h
+++ b/target/riscv/cpu_bits.h
@@ -XXX,XX +XXX,XX @@ typedef enum {
 /* Page table PPN shift amount */
 #define PTE_PPN_SHIFT       10
 
+/* Page table PPN mask */
+#define PTE_PPN_MASK        0x3FFFFFFFFFFC00ULL
+
 /* Leaf page shift amount */
 #define PGSHIFT             12
 
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -XXX,XX +XXX,XX @@ static int get_physical_address(CPURISCVState *env, hwaddr *physical,
     MemTxAttrs attrs = MEMTXATTRS_UNSPECIFIED;
     int mode = mmu_idx & TB_FLAGS_PRIV_MMU_MASK;
     bool use_background = false;
+    hwaddr ppn;
+    RISCVCPU *cpu = env_archcpu(env);
 
     /*
      * Check if we should use the background registers for the two
@@ -XXX,XX +XXX,XX @@ restart:
             return TRANSLATE_FAIL;
         }
 
-        hwaddr ppn = pte >> PTE_PPN_SHIFT;
+        if (riscv_cpu_sxl(env) == MXL_RV32) {
+            ppn = pte >> PTE_PPN_SHIFT;
+        } else if (cpu->cfg.ext_svpbmt || cpu->cfg.ext_svnapot) {
+            ppn = (pte & (target_ulong)PTE_PPN_MASK) >> PTE_PPN_SHIFT;
+        } else {
+            ppn = pte >> PTE_PPN_SHIFT;
+            if ((pte & ~(target_ulong)PTE_PPN_MASK) >> PTE_PPN_SHIFT) {
+                return TRANSLATE_FAIL;
+            }
+        }
 
         if (!(pte & PTE_V)) {
             /* Invalid PTE */
--
2.34.1

From: Yong-Xuan Wang <yongxuan.wang@sifive.com>

We check the in-kernel irqchip support when using KVM acceleration.

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20230727102439.22554-3-yongxuan.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/kvm.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/target/riscv/kvm.c b/target/riscv/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/kvm.c
+++ b/target/riscv/kvm.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init(MachineState *ms, KVMState *s)
 
 int kvm_arch_irqchip_create(KVMState *s)
 {
-    return 0;
+    if (kvm_kernel_irqchip_split()) {
+        error_report("-machine kernel_irqchip=split is not supported on RISC-V.");
+        exit(1);
+    }
+
+    /*
+     * We can create the VAIA using the newer device control API.
+     */
+    return kvm_check_extension(s, KVM_CAP_DEVICE_CTRL);
 }
 
 int kvm_arch_process_async_events(CPUState *cs)
--
2.41.0
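The PPN masking above can be illustrated standalone. This is a hedged sketch (the helper name `pte_to_ppn` is made up, not QEMU's code): with Svpbmt/Svnapot enabled, the attribute bits above bit 53 of an RV64 PTE are masked off so they never leak into the physical page number; without the extensions those bits must read as zero.

```c
#include <stdint.h>

/* Bits 53..10 of an Sv39/Sv48 PTE hold the PPN; bits 62..61 (PBMT) and
 * bit 63 (N, Svnapot) sit above it.  Mirrors PTE_PPN_SHIFT/PTE_PPN_MASK. */
#define PTE_PPN_SHIFT 10
#define PTE_PPN_MASK  0x3FFFFFFFFFFC00ULL

static inline uint64_t pte_to_ppn(uint64_t pte, int svpbmt_or_svnapot)
{
    if (svpbmt_or_svnapot) {
        /* High attribute bits are valid but are not part of the PPN. */
        return (pte & PTE_PPN_MASK) >> PTE_PPN_SHIFT;
    }
    /* Extensions disabled: high bits must be zero (else TRANSLATE_FAIL). */
    return pte >> PTE_PPN_SHIFT;
}
```

A PTE carrying a PBMT attribute in bit 62 thus yields the same PPN as one without it, which is exactly why the patch must mask before shifting.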
From: Anup Patel <anup.patel@wdc.com>

The AIA specification defines [m|s|vs]iselect and [m|s|vs]ireg CSRs
which allow indirect access to interrupt priority arrays and per-HART
IMSIC registers. This patch implements AIA xiselect and xireg CSRs.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-15-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.h     |   7 ++
 target/riscv/csr.c     | 177 +++++++++++++++++++++++++++++++++++++++++
 target/riscv/machine.c |   3 +
 3 files changed, 187 insertions(+)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@ struct CPURISCVState {
     uint8_t miprio[64];
     uint8_t siprio[64];
 
+    /* AIA CSRs */
+    target_ulong miselect;
+    target_ulong siselect;
+
     /* Hypervisor CSRs */
     target_ulong hstatus;
     target_ulong hedeleg;
@@ -XXX,XX +XXX,XX @@ struct CPURISCVState {
     target_ulong vstval;
     target_ulong vsatp;
 
+    /* AIA VS-mode CSRs */
+    target_ulong vsiselect;
+
     target_ulong mtval2;
     target_ulong mtinst;
 
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static int read_mtopi(CPURISCVState *env, int csrno, target_ulong *val)
     return RISCV_EXCP_NONE;
 }
 
+static int aia_xlate_vs_csrno(CPURISCVState *env, int csrno)
+{
+    if (!riscv_cpu_virt_enabled(env)) {
+        return csrno;
+    }
+
+    switch (csrno) {
+    case CSR_SISELECT:
+        return CSR_VSISELECT;
+    case CSR_SIREG:
+        return CSR_VSIREG;
+    default:
+        return csrno;
+    };
+}
+
+static int rmw_xiselect(CPURISCVState *env, int csrno, target_ulong *val,
+                        target_ulong new_val, target_ulong wr_mask)
+{
+    target_ulong *iselect;
+
+    /* Translate CSR number for VS-mode */
+    csrno = aia_xlate_vs_csrno(env, csrno);
+
+    /* Find the iselect CSR based on CSR number */
+    switch (csrno) {
+    case CSR_MISELECT:
+        iselect = &env->miselect;
+        break;
+    case CSR_SISELECT:
+        iselect = &env->siselect;
+        break;
+    case CSR_VSISELECT:
+        iselect = &env->vsiselect;
+        break;
+    default:
+        return RISCV_EXCP_ILLEGAL_INST;
+    };
+
+    if (val) {
+        *val = *iselect;
+    }
+
+    wr_mask &= ISELECT_MASK;
+    if (wr_mask) {
+        *iselect = (*iselect & ~wr_mask) | (new_val & wr_mask);
+    }
+
+    return RISCV_EXCP_NONE;
+}
+
+static int rmw_iprio(target_ulong xlen,
+                     target_ulong iselect, uint8_t *iprio,
+                     target_ulong *val, target_ulong new_val,
+                     target_ulong wr_mask, int ext_irq_no)
+{
+    int i, firq, nirqs;
+    target_ulong old_val;
+
+    if (iselect < ISELECT_IPRIO0 || ISELECT_IPRIO15 < iselect) {
+        return -EINVAL;
+    }
+
+    if (xlen != 32 && iselect & 0x1) {
+        return -EINVAL;
+    }
+
+    nirqs = 4 * (xlen / 32);
+    firq = ((iselect - ISELECT_IPRIO0) / (xlen / 32)) * (nirqs);
+
+    old_val = 0;
+    for (i = 0; i < nirqs; i++) {
+        old_val |= ((target_ulong)iprio[firq + i]) << (IPRIO_IRQ_BITS * i);
+    }
+
+    if (val) {
+        *val = old_val;
+    }
+
+    if (wr_mask) {
+        new_val = (old_val & ~wr_mask) | (new_val & wr_mask);
+        for (i = 0; i < nirqs; i++) {
+            /*
+             * M-level and S-level external IRQ priority always read-only
+             * zero. This means default priority order is always preferred
+             * for M-level and S-level external IRQs.
+             */
+            if ((firq + i) == ext_irq_no) {
+                continue;
+            }
+            iprio[firq + i] = (new_val >> (IPRIO_IRQ_BITS * i)) & 0xff;
+        }
+    }
+
+    return 0;
+}
+
+static int rmw_xireg(CPURISCVState *env, int csrno, target_ulong *val,
+                     target_ulong new_val, target_ulong wr_mask)
+{
+    bool virt;
+    uint8_t *iprio;
+    int ret = -EINVAL;
+    target_ulong priv, isel, vgein;
+
+    /* Translate CSR number for VS-mode */
+    csrno = aia_xlate_vs_csrno(env, csrno);
+
+    /* Decode register details from CSR number */
+    virt = false;
+    switch (csrno) {
+    case CSR_MIREG:
+        iprio = env->miprio;
+        isel = env->miselect;
+        priv = PRV_M;
+        break;
+    case CSR_SIREG:
+        iprio = env->siprio;
+        isel = env->siselect;
+        priv = PRV_S;
+        break;
+    case CSR_VSIREG:
+        iprio = env->hviprio;
+        isel = env->vsiselect;
+        priv = PRV_S;
+        virt = true;
+        break;
+    default:
+        goto done;
+    };
+
+    /* Find the selected guest interrupt file */
+    vgein = (virt) ? get_field(env->hstatus, HSTATUS_VGEIN) : 0;
+
+    if (ISELECT_IPRIO0 <= isel && isel <= ISELECT_IPRIO15) {
+        /* Local interrupt priority registers not available for VS-mode */
+        if (!virt) {
+            ret = rmw_iprio(riscv_cpu_mxl_bits(env),
+                            isel, iprio, val, new_val, wr_mask,
+                            (priv == PRV_M) ? IRQ_M_EXT : IRQ_S_EXT);
+        }
+    } else if (ISELECT_IMSIC_FIRST <= isel && isel <= ISELECT_IMSIC_LAST) {
+        /* IMSIC registers only available when machine implements it. */
+        if (env->aia_ireg_rmw_fn[priv]) {
+            /* Selected guest interrupt file should not be zero */
+            if (virt && (!vgein || env->geilen < vgein)) {
+                goto done;
+            }
+            /* Call machine specific IMSIC register emulation */
+            ret = env->aia_ireg_rmw_fn[priv](env->aia_ireg_rmw_fn_arg[priv],
+                                             AIA_MAKE_IREG(isel, priv, virt, vgein,
+                                                           riscv_cpu_mxl_bits(env)),
+                                             val, new_val, wr_mask);
+        }
+    }
+
+done:
+    if (ret) {
+        return (riscv_cpu_virt_enabled(env) && virt) ?
+               RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
+    }
+    return RISCV_EXCP_NONE;
+}
+
 static RISCVException read_mtvec(CPURISCVState *env, int csrno,
                                  target_ulong *val)
 {
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_MTVAL] = { "mtval", any, read_mtval, write_mtval },
     [CSR_MIP] = { "mip", any, NULL, NULL, rmw_mip },
 
+    /* Machine-Level Window to Indirectly Accessed Registers (AIA) */
+    [CSR_MISELECT] = { "miselect", aia_any, NULL, NULL, rmw_xiselect },
+    [CSR_MIREG] = { "mireg", aia_any, NULL, NULL, rmw_xireg },
+
     /* Machine-Level Interrupts (AIA) */
     [CSR_MTOPI] = { "mtopi", aia_any, read_mtopi },
 
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     /* Supervisor Protection and Translation */
     [CSR_SATP] = { "satp", smode, read_satp, write_satp },
 
+    /* Supervisor-Level Window to Indirectly Accessed Registers (AIA) */
+    [CSR_SISELECT] = { "siselect", aia_smode, NULL, NULL, rmw_xiselect },
+    [CSR_SIREG] = { "sireg", aia_smode, NULL, NULL, rmw_xireg },
+
     /* Supervisor-Level Interrupts (AIA) */
     [CSR_STOPI] = { "stopi", aia_smode, read_stopi },
 
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_HVIPRIO1] = { "hviprio1", aia_hmode, read_hviprio1, write_hviprio1 },
     [CSR_HVIPRIO2] = { "hviprio2", aia_hmode, read_hviprio2, write_hviprio2 },
 
+    /*
+     * VS-Level Window to Indirectly Accessed Registers (H-extension with AIA)
+     */
+    [CSR_VSISELECT] = { "vsiselect", aia_hmode, NULL, NULL, rmw_xiselect },
+    [CSR_VSIREG] = { "vsireg", aia_hmode, NULL, NULL, rmw_xireg },
+
     /* VS-Level Interrupts (H-extension with AIA) */
     [CSR_VSTOPI] = { "vstopi", aia_hmode, read_vstopi },
 
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/machine.c
+++ b/target/riscv/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_hyper = {
         VMSTATE_UINTTL(env.vscause, RISCVCPU),
         VMSTATE_UINTTL(env.vstval, RISCVCPU),
         VMSTATE_UINTTL(env.vsatp, RISCVCPU),
+        VMSTATE_UINTTL(env.vsiselect, RISCVCPU),
 
         VMSTATE_UINTTL(env.mtval2, RISCVCPU),
         VMSTATE_UINTTL(env.mtinst, RISCVCPU),
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_riscv_cpu = {
         VMSTATE_UINTTL(env.mepc, RISCVCPU),
         VMSTATE_UINTTL(env.mcause, RISCVCPU),
         VMSTATE_UINTTL(env.mtval, RISCVCPU),
+        VMSTATE_UINTTL(env.miselect, RISCVCPU),
+        VMSTATE_UINTTL(env.siselect, RISCVCPU),
         VMSTATE_UINTTL(env.scounteren, RISCVCPU),
         VMSTATE_UINTTL(env.mcounteren, RISCVCPU),
         VMSTATE_UINTTL(env.sscratch, RISCVCPU),
--
2.34.1

From: Yong-Xuan Wang <yongxuan.wang@sifive.com>

We create a vAIA chip by using KVM_DEV_TYPE_RISCV_AIA and then set up
the chip with the KVM_DEV_RISCV_AIA_GRP_* APIs.
We also extend the KVM accelerator to specify the KVM AIA mode. The
"riscv-aia" parameter is passed along with --accel on the QEMU command
line.
1) "riscv-aia=emul": IMSIC is emulated by the hypervisor
2) "riscv-aia=hwaccel": use the hardware guest IMSIC
3) "riscv-aia=auto": use the hardware guest IMSICs whenever available,
   otherwise fall back to software emulation.

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20230727102439.22554-4-yongxuan.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/kvm_riscv.h |   4 +
 target/riscv/kvm.c       | 186 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 190 insertions(+)

diff --git a/target/riscv/kvm_riscv.h b/target/riscv/kvm_riscv.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/kvm_riscv.h
+++ b/target/riscv/kvm_riscv.h
@@ -XXX,XX +XXX,XX @@
 void kvm_riscv_init_user_properties(Object *cpu_obj);
 void kvm_riscv_reset_vcpu(RISCVCPU *cpu);
 void kvm_riscv_set_irq(RISCVCPU *cpu, int irq, int level);
+void kvm_riscv_aia_create(MachineState *machine, uint64_t group_shift,
+                          uint64_t aia_irq_num, uint64_t aia_msi_num,
+                          uint64_t aplic_base, uint64_t imsic_base,
+                          uint64_t guest_num);
 
 #endif
diff --git a/target/riscv/kvm.c b/target/riscv/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/kvm.c
+++ b/target/riscv/kvm.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/address-spaces.h"
 #include "hw/boards.h"
 #include "hw/irq.h"
+#include "hw/intc/riscv_imsic.h"
 #include "qemu/log.h"
 #include "hw/loader.h"
 #include "kvm_riscv.h"
@@ -XXX,XX +XXX,XX @@
 #include "chardev/char-fe.h"
 #include "migration/migration.h"
 #include "sysemu/runstate.h"
+#include "hw/riscv/numa.h"
 
 static uint64_t kvm_riscv_reg_id(CPURISCVState *env, uint64_t type,
                                  uint64_t idx)
@@ -XXX,XX +XXX,XX @@ bool kvm_arch_cpu_check_are_resettable(void)
     return true;
 }
 
+static int aia_mode;
+
+static const char *kvm_aia_mode_str(uint64_t mode)
+{
+    switch (mode) {
+    case KVM_DEV_RISCV_AIA_MODE_EMUL:
+        return "emul";
+    case KVM_DEV_RISCV_AIA_MODE_HWACCEL:
+        return "hwaccel";
+    case KVM_DEV_RISCV_AIA_MODE_AUTO:
+    default:
+        return "auto";
+    };
+}
+
+static char *riscv_get_kvm_aia(Object *obj, Error **errp)
+{
+    return g_strdup(kvm_aia_mode_str(aia_mode));
+}
+
+static void riscv_set_kvm_aia(Object *obj, const char *val, Error **errp)
+{
+    if (!strcmp(val, "emul")) {
+        aia_mode = KVM_DEV_RISCV_AIA_MODE_EMUL;
+    } else if (!strcmp(val, "hwaccel")) {
+        aia_mode = KVM_DEV_RISCV_AIA_MODE_HWACCEL;
+    } else if (!strcmp(val, "auto")) {
+        aia_mode = KVM_DEV_RISCV_AIA_MODE_AUTO;
+    } else {
+        error_setg(errp, "Invalid KVM AIA mode");
+        error_append_hint(errp, "Valid values are emul, hwaccel, and auto.\n");
+    }
+}
+
 void kvm_arch_accel_class_init(ObjectClass *oc)
 {
+    object_class_property_add_str(oc, "riscv-aia", riscv_get_kvm_aia,
+                                  riscv_set_kvm_aia);
+    object_class_property_set_description(oc, "riscv-aia",
+                                          "Set KVM AIA mode. Valid values are "
+                                          "emul, hwaccel, and auto. Default "
+                                          "is auto.");
+    object_property_set_default_str(object_class_property_find(oc, "riscv-aia"),
+                                    "auto");
+}
+
+void kvm_riscv_aia_create(MachineState *machine, uint64_t group_shift,
+                          uint64_t aia_irq_num, uint64_t aia_msi_num,
+                          uint64_t aplic_base, uint64_t imsic_base,
+                          uint64_t guest_num)
+{
+    int ret, i;
+    int aia_fd = -1;
+    uint64_t default_aia_mode;
+    uint64_t socket_count = riscv_socket_count(machine);
+    uint64_t max_hart_per_socket = 0;
+    uint64_t socket, base_hart, hart_count, socket_imsic_base, imsic_addr;
+    uint64_t socket_bits, hart_bits, guest_bits;
+
+    aia_fd = kvm_create_device(kvm_state, KVM_DEV_TYPE_RISCV_AIA, false);
+
+    if (aia_fd < 0) {
+        error_report("Unable to create in-kernel irqchip");
+        exit(1);
+    }
+
+    ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
+                            KVM_DEV_RISCV_AIA_CONFIG_MODE,
+                            &default_aia_mode, false, NULL);
+    if (ret < 0) {
+        error_report("KVM AIA: failed to get current KVM AIA mode");
+        exit(1);
+    }
+    qemu_log("KVM AIA: default mode is %s\n",
+             kvm_aia_mode_str(default_aia_mode));
+
+    if (default_aia_mode != aia_mode) {
+        ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
+                                KVM_DEV_RISCV_AIA_CONFIG_MODE,
+                                &aia_mode, true, NULL);
+        if (ret < 0)
+            warn_report("KVM AIA: failed to set KVM AIA mode");
+        else
+            qemu_log("KVM AIA: set current mode to %s\n",
+                     kvm_aia_mode_str(aia_mode));
+    }
+
+    ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
+                            KVM_DEV_RISCV_AIA_CONFIG_SRCS,
+                            &aia_irq_num, true, NULL);
+    if (ret < 0) {
+        error_report("KVM AIA: failed to set number of input irq lines");
+        exit(1);
+    }
+
+    ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
+                            KVM_DEV_RISCV_AIA_CONFIG_IDS,
+                            &aia_msi_num, true, NULL);
+    if (ret < 0) {
+        error_report("KVM AIA: failed to set number of msi");
+        exit(1);
+    }
+
+    socket_bits = find_last_bit(&socket_count, BITS_PER_LONG) + 1;
+    ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
+                            KVM_DEV_RISCV_AIA_CONFIG_GROUP_BITS,
+                            &socket_bits, true, NULL);
+    if (ret < 0) {
+        error_report("KVM AIA: failed to set group_bits");
+        exit(1);
+    }
+
+    ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
+                            KVM_DEV_RISCV_AIA_CONFIG_GROUP_SHIFT,
+                            &group_shift, true, NULL);
+    if (ret < 0) {
+        error_report("KVM AIA: failed to set group_shift");
+        exit(1);
+    }
+
+    guest_bits = guest_num == 0 ? 0 :
+                 find_last_bit(&guest_num, BITS_PER_LONG) + 1;
+    ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
+                            KVM_DEV_RISCV_AIA_CONFIG_GUEST_BITS,
+                            &guest_bits, true, NULL);
+    if (ret < 0) {
+        error_report("KVM AIA: failed to set guest_bits");
+        exit(1);
+    }
+
+    ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_ADDR,
+                            KVM_DEV_RISCV_AIA_ADDR_APLIC,
+                            &aplic_base, true, NULL);
+    if (ret < 0) {
+        error_report("KVM AIA: failed to set the base address of APLIC");
+        exit(1);
+    }
+
+    for (socket = 0; socket < socket_count; socket++) {
+        socket_imsic_base = imsic_base + socket * (1U << group_shift);
+        hart_count = riscv_socket_hart_count(machine, socket);
+        base_hart = riscv_socket_first_hartid(machine, socket);
+
+        if (max_hart_per_socket < hart_count) {
+            max_hart_per_socket = hart_count;
+        }
+
+        for (i = 0; i < hart_count; i++) {
+            imsic_addr = socket_imsic_base + i * IMSIC_HART_SIZE(guest_bits);
+            ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_ADDR,
+                                    KVM_DEV_RISCV_AIA_ADDR_IMSIC(i + base_hart),
+                                    &imsic_addr, true, NULL);
+            if (ret < 0) {
+                error_report("KVM AIA: failed to set the IMSIC address for hart %d", i);
+                exit(1);
+            }
+        }
+    }
+
+    hart_bits = find_last_bit(&max_hart_per_socket, BITS_PER_LONG) + 1;
+    ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
+                            KVM_DEV_RISCV_AIA_CONFIG_HART_BITS,
+                            &hart_bits, true, NULL);
+    if (ret < 0) {
+        error_report("KVM AIA: failed to set hart_bits");
+        exit(1);
+    }
+
+    if (kvm_has_gsi_routing()) {
+        for (uint64_t idx = 0; idx < aia_irq_num + 1; ++idx) {
+            /* KVM AIA only has one APLIC instance */
+            kvm_irqchip_add_irq_route(kvm_state, idx, 0, idx);
+        }
+        kvm_gsi_routing_allowed = true;
+        kvm_irqchip_commit_routes(kvm_state);
+    }
+
+    ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CTRL,
+                            KVM_DEV_RISCV_AIA_CTRL_INIT,
+                            NULL, true, NULL);
+    if (ret < 0) {
+        error_report("KVM AIA: initialized fail");
+        exit(1);
+    }
+
+    kvm_msi_via_irqfd_allowed = kvm_irqfds_enabled();
 }
--
2.41.0
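The xiselect/xireg handlers above all share one read-modify-write contract; a minimal standalone sketch (the helper name `csr_rmw` is hypothetical, modeled on the `*iselect = (*iselect & ~wr_mask) | (new_val & wr_mask)` line in rmw_xiselect):

```c
#include <stdint.h>

typedef uint64_t target_ulong;

/* The previous register value is reported through *val (when non-NULL),
 * and only the bits selected by wr_mask are taken from new_val; a zero
 * wr_mask makes the call a pure read. */
static int csr_rmw(target_ulong *reg, target_ulong *val,
                   target_ulong new_val, target_ulong wr_mask)
{
    if (val) {
        *val = *reg;                          /* return the old value */
    }
    if (wr_mask) {
        *reg = (*reg & ~wr_mask) | (new_val & wr_mask);
    }
    return 0;                                 /* RISCV_EXCP_NONE in QEMU */
}
```

This single-entry-point shape is why CSR instructions (csrrw/csrrs/csrrc) can all be routed through one function: each instruction just supplies a different wr_mask.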
From: Yong-Xuan Wang <yongxuan.wang@sifive.com>

KVM AIA can't emulate the APLIC alone. When the "aia=aplic" parameter is
passed, the APLIC device is emulated by QEMU. For "aia=aplic-imsic",
remove the MMIO operations of the APLIC when using KVM AIA and send
wired interrupt signals via the KVM_IRQ_LINE API.
Once KVM AIA is enabled, MSI messages are delivered by the
KVM_SIGNAL_MSI API when the IMSICs receive MMIO write requests.

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20230727102439.22554-5-yongxuan.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/intc/riscv_aplic.c | 56 ++++++++++++++++++++++++++++++-------------
 hw/intc/riscv_imsic.c | 25 +++++++++++++++----
 2 files changed, 61 insertions(+), 20 deletions(-)

diff --git a/hw/intc/riscv_aplic.c b/hw/intc/riscv_aplic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/riscv_aplic.c
+++ b/hw/intc/riscv_aplic.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/irq.h"
 #include "target/riscv/cpu.h"
 #include "sysemu/sysemu.h"
+#include "sysemu/kvm.h"
 #include "migration/vmstate.h"
 
 #define APLIC_MAX_IDC                  (1UL << 14)
@@ -XXX,XX +XXX,XX @@
 
 #define APLIC_IDC_CLAIMI               0x1c
 
+/*
+ * KVM AIA only supports APLIC MSI, fallback to QEMU emulation if we want to use
+ * APLIC Wired.
+ */
+static bool is_kvm_aia(bool msimode)
+{
+    return kvm_irqchip_in_kernel() && msimode;
+}
+
 static uint32_t riscv_aplic_read_input_word(RISCVAPLICState *aplic,
                                             uint32_t word)
 {
@@ -XXX,XX +XXX,XX @@ static uint32_t riscv_aplic_idc_claimi(RISCVAPLICState *aplic, uint32_t idc)
     return topi;
 }
 
+static void riscv_kvm_aplic_request(void *opaque, int irq, int level)
+{
+    kvm_set_irq(kvm_state, irq, !!level);
+}
+
 static void riscv_aplic_request(void *opaque, int irq, int level)
 {
     bool update = false;
@@ -XXX,XX +XXX,XX @@ static void riscv_aplic_realize(DeviceState *dev, Error **errp)
     uint32_t i;
     RISCVAPLICState *aplic = RISCV_APLIC(dev);
 
-    aplic->bitfield_words = (aplic->num_irqs + 31) >> 5;
-    aplic->sourcecfg = g_new0(uint32_t, aplic->num_irqs);
-    aplic->state = g_new0(uint32_t, aplic->num_irqs);
-    aplic->target = g_new0(uint32_t, aplic->num_irqs);
-    if (!aplic->msimode) {
-        for (i = 0; i < aplic->num_irqs; i++) {
-            aplic->target[i] = 1;
+    if (!is_kvm_aia(aplic->msimode)) {
+        aplic->bitfield_words = (aplic->num_irqs + 31) >> 5;
+        aplic->sourcecfg = g_new0(uint32_t, aplic->num_irqs);
+        aplic->state = g_new0(uint32_t, aplic->num_irqs);
+        aplic->target = g_new0(uint32_t, aplic->num_irqs);
+        if (!aplic->msimode) {
+            for (i = 0; i < aplic->num_irqs; i++) {
+                aplic->target[i] = 1;
+            }
         }
-    }
-    aplic->idelivery = g_new0(uint32_t, aplic->num_harts);
-    aplic->iforce = g_new0(uint32_t, aplic->num_harts);
-    aplic->ithreshold = g_new0(uint32_t, aplic->num_harts);
+        aplic->idelivery = g_new0(uint32_t, aplic->num_harts);
+        aplic->iforce = g_new0(uint32_t, aplic->num_harts);
+        aplic->ithreshold = g_new0(uint32_t, aplic->num_harts);
 
-    memory_region_init_io(&aplic->mmio, OBJECT(dev), &riscv_aplic_ops, aplic,
-                          TYPE_RISCV_APLIC, aplic->aperture_size);
-    sysbus_init_mmio(SYS_BUS_DEVICE(dev), &aplic->mmio);
+        memory_region_init_io(&aplic->mmio, OBJECT(dev), &riscv_aplic_ops,
+                              aplic, TYPE_RISCV_APLIC, aplic->aperture_size);
+        sysbus_init_mmio(SYS_BUS_DEVICE(dev), &aplic->mmio);
+    }
 
     /*
      * Only root APLICs have hardware IRQ lines. All non-root APLICs
      * have IRQ lines delegated by their parent APLIC.
      */
     if (!aplic->parent) {
-        qdev_init_gpio_in(dev, riscv_aplic_request, aplic->num_irqs);
+        if (is_kvm_aia(aplic->msimode)) {
+            qdev_init_gpio_in(dev, riscv_kvm_aplic_request, aplic->num_irqs);
+        } else {
+            qdev_init_gpio_in(dev, riscv_aplic_request, aplic->num_irqs);
+        }
     }
 
     /* Create output IRQ lines for non-MSI mode */
@@ -XXX,XX +XXX,XX @@ DeviceState *riscv_aplic_create(hwaddr addr, hwaddr size,
     qdev_prop_set_bit(dev, "mmode", mmode);
 
     sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
-    sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, addr);
+
+    if (!is_kvm_aia(msimode)) {
+        sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, addr);
+    }
 
     if (parent) {
         riscv_aplic_add_child(parent, dev);
diff --git a/hw/intc/riscv_imsic.c b/hw/intc/riscv_imsic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/riscv_imsic.c
+++ b/hw/intc/riscv_imsic.c
@@ -XXX,XX +XXX,XX @@
 #include "target/riscv/cpu.h"
 #include "target/riscv/cpu_bits.h"
 #include "sysemu/sysemu.h"
+#include "sysemu/kvm.h"
 #include "migration/vmstate.h"
 
 #define IMSIC_MMIO_PAGE_LE             0x00
@@ -XXX,XX +XXX,XX @@ static void riscv_imsic_write(void *opaque, hwaddr addr, uint64_t value,
         goto err;
     }
 
+#if defined(CONFIG_KVM)
+    if (kvm_irqchip_in_kernel()) {
+        struct kvm_msi msi;
+
+        msi.address_lo = extract64(imsic->mmio.addr + addr, 0, 32);
+        msi.address_hi = extract64(imsic->mmio.addr + addr, 32, 32);
+        msi.data = le32_to_cpu(value);
+
+        kvm_vm_ioctl(kvm_state, KVM_SIGNAL_MSI, &msi);
+
+        return;
+    }
+#endif
+
     /* Writes only supported for MSI little-endian registers */
     page = addr >> IMSIC_MMIO_PAGE_SHIFT;
     if ((addr & (IMSIC_MMIO_PAGE_SZ - 1)) == IMSIC_MMIO_PAGE_LE) {
@@ -XXX,XX +XXX,XX @@ static void riscv_imsic_realize(DeviceState *dev, Error **errp)
     CPUState *cpu = cpu_by_arch_id(imsic->hartid);
     CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
 
-    imsic->num_eistate = imsic->num_pages * imsic->num_irqs;
-    imsic->eidelivery = g_new0(uint32_t, imsic->num_pages);
-    imsic->eithreshold = g_new0(uint32_t, imsic->num_pages);
-    imsic->eistate = g_new0(uint32_t, imsic->num_eistate);
+    if (!kvm_irqchip_in_kernel()) {
+        imsic->num_eistate = imsic->num_pages * imsic->num_irqs;
+        imsic->eidelivery = g_new0(uint32_t, imsic->num_pages);
+        imsic->eithreshold = g_new0(uint32_t, imsic->num_pages);
+        imsic->eistate = g_new0(uint32_t, imsic->num_eistate);
+    }
 
     memory_region_init_io(&imsic->mmio, OBJECT(dev), &riscv_imsic_ops,
                           imsic, TYPE_RISCV_IMSIC,
--
2.41.0

From: Anup Patel <anup.patel@wdc.com>

The AIA specification introduces new [m|s|vs]topi CSRs for
reporting the pending local IRQ number and associated IRQ priority.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-14-anup@brainfault.org
[ Changed by AF:
 - Fixup indentation
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/csr.c | 156 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 156 insertions(+)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static int smode32(CPURISCVState *env, int csrno)
     return smode(env, csrno);
 }
 
+static int aia_smode(CPURISCVState *env, int csrno)
+{
+    if (!riscv_feature(env, RISCV_FEATURE_AIA)) {
+        return RISCV_EXCP_ILLEGAL_INST;
+    }
+
+    return smode(env, csrno);
+}
+
 static int aia_smode32(CPURISCVState *env, int csrno)
 {
     if (!riscv_feature(env, RISCV_FEATURE_AIA)) {
@@ -XXX,XX +XXX,XX @@ static RISCVException read_timeh(CPURISCVState *env, int csrno,
 #define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
 #define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
 
+#define VSTOPI_NUM_SRCS 5
+
 static const uint64_t delegable_ints = S_MODE_INTERRUPTS |
                                        VS_MODE_INTERRUPTS;
 static const uint64_t vs_delegable_ints = VS_MODE_INTERRUPTS;
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_mieh(CPURISCVState *env, int csrno,
     return ret;
 }
 
+static int read_mtopi(CPURISCVState *env, int csrno, target_ulong *val)
+{
+    int irq;
+    uint8_t iprio;
+
+    irq = riscv_cpu_mirq_pending(env);
+    if (irq <= 0 || irq > 63) {
+        *val = 0;
+    } else {
+        iprio = env->miprio[irq];
+        if (!iprio) {
+            if (riscv_cpu_default_priority(irq) > IPRIO_DEFAULT_M) {
+                iprio = IPRIO_MMAXIPRIO;
+            }
+        }
+        *val = (irq & TOPI_IID_MASK) << TOPI_IID_SHIFT;
+        *val |= iprio;
+    }
+
+    return RISCV_EXCP_NONE;
+}
+
 static RISCVException read_mtvec(CPURISCVState *env, int csrno,
                                 target_ulong *val)
 {
@@ -XXX,XX +XXX,XX @@ static RISCVException write_satp(CPURISCVState *env, int csrno,
     return RISCV_EXCP_NONE;
 }
 
+static int read_vstopi(CPURISCVState *env, int csrno, target_ulong *val)
+{
+    int irq, ret;
+    target_ulong topei;
+    uint64_t vseip, vsgein;
+    uint32_t iid, iprio, hviid, hviprio, gein;
+    uint32_t s, scount = 0, siid[VSTOPI_NUM_SRCS], siprio[VSTOPI_NUM_SRCS];
+
+    gein = get_field(env->hstatus, HSTATUS_VGEIN);
+    hviid = get_field(env->hvictl, HVICTL_IID);
+    hviprio = get_field(env->hvictl, HVICTL_IPRIO);
+
+    if (gein) {
+        vsgein = (env->hgeip & (1ULL << gein)) ? MIP_VSEIP : 0;
+        vseip = env->mie & (env->mip | vsgein) & MIP_VSEIP;
+        if (gein <= env->geilen && vseip) {
+            siid[scount] = IRQ_S_EXT;
+            siprio[scount] = IPRIO_MMAXIPRIO + 1;
+            if (env->aia_ireg_rmw_fn[PRV_S]) {
+                /*
+                 * Call machine specific IMSIC register emulation for
+                 * reading TOPEI.
+                 */
+                ret = env->aia_ireg_rmw_fn[PRV_S](
+                        env->aia_ireg_rmw_fn_arg[PRV_S],
+                        AIA_MAKE_IREG(ISELECT_IMSIC_TOPEI, PRV_S, true, gein,
+                                      riscv_cpu_mxl_bits(env)),
+                        &topei, 0, 0);
+                if (!ret && topei) {
+                    siprio[scount] = topei & IMSIC_TOPEI_IPRIO_MASK;
+                }
+            }
+            scount++;
+        }
+    } else {
+        if (hviid == IRQ_S_EXT && hviprio) {
+            siid[scount] = IRQ_S_EXT;
+            siprio[scount] = hviprio;
+            scount++;
+        }
+    }
+
+    if (env->hvictl & HVICTL_VTI) {
+        if (hviid != IRQ_S_EXT) {
+            siid[scount] = hviid;
+            siprio[scount] = hviprio;
+            scount++;
+        }
+    } else {
+        irq = riscv_cpu_vsirq_pending(env);
+        if (irq != IRQ_S_EXT && 0 < irq && irq <= 63) {
+            siid[scount] = irq;
+            siprio[scount] = env->hviprio[irq];
+            scount++;
+        }
+    }
+
+    iid = 0;
+    iprio = UINT_MAX;
+    for (s = 0; s < scount; s++) {
+        if (siprio[s] < iprio) {
+            iid = siid[s];
+            iprio = siprio[s];
+        }
+    }
+
+    if (iid) {
+        if (env->hvictl & HVICTL_IPRIOM) {
+            if (iprio > IPRIO_MMAXIPRIO) {
+                iprio = IPRIO_MMAXIPRIO;
+            }
+            if (!iprio) {
+                if (riscv_cpu_default_priority(iid) > IPRIO_DEFAULT_S) {
153
+ iprio = IPRIO_MMAXIPRIO;
154
+ }
155
+ }
156
+ } else {
157
+ iprio = 1;
158
+ }
159
+ } else {
160
+ iprio = 0;
161
+ }
162
+
163
+ *val = (iid & TOPI_IID_MASK) << TOPI_IID_SHIFT;
164
+ *val |= iprio;
165
+ return RISCV_EXCP_NONE;
166
+}
167
+
168
+static int read_stopi(CPURISCVState *env, int csrno, target_ulong *val)
169
+{
170
+ int irq;
171
+ uint8_t iprio;
172
+
173
+ if (riscv_cpu_virt_enabled(env)) {
174
+ return read_vstopi(env, CSR_VSTOPI, val);
175
+ }
176
+
177
+ irq = riscv_cpu_sirq_pending(env);
178
+ if (irq <= 0 || irq > 63) {
179
+ *val = 0;
180
+ } else {
181
+ iprio = env->siprio[irq];
182
+ if (!iprio) {
183
+ if (riscv_cpu_default_priority(irq) > IPRIO_DEFAULT_S) {
184
+ iprio = IPRIO_MMAXIPRIO;
185
+ }
186
+ }
187
+ *val = (irq & TOPI_IID_MASK) << TOPI_IID_SHIFT;
188
+ *val |= iprio;
189
+ }
190
+
191
+ return RISCV_EXCP_NONE;
192
+}
193
+
194
/* Hypervisor Extensions */
195
static RISCVException read_hstatus(CPURISCVState *env, int csrno,
196
target_ulong *val)
197
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
198
[CSR_MTVAL] = { "mtval", any, read_mtval, write_mtval },
199
[CSR_MIP] = { "mip", any, NULL, NULL, rmw_mip },
200
201
+ /* Machine-Level Interrupts (AIA) */
202
+ [CSR_MTOPI] = { "mtopi", aia_any, read_mtopi },
203
+
204
/* Virtual Interrupts for Supervisor Level (AIA) */
205
[CSR_MVIEN] = { "mvien", aia_any, read_zero, write_ignore },
206
[CSR_MVIP] = { "mvip", aia_any, read_zero, write_ignore },
207
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
208
/* Supervisor Protection and Translation */
209
[CSR_SATP] = { "satp", smode, read_satp, write_satp },
210
211
+ /* Supervisor-Level Interrupts (AIA) */
212
+ [CSR_STOPI] = { "stopi", aia_smode, read_stopi },
213
+
214
/* Supervisor-Level High-Half CSRs (AIA) */
215
[CSR_SIEH] = { "sieh", aia_smode32, NULL, NULL, rmw_sieh },
216
[CSR_SIPH] = { "siph", aia_smode32, NULL, NULL, rmw_siph },
217
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
218
[CSR_HVIPRIO1] = { "hviprio1", aia_hmode, read_hviprio1, write_hviprio1 },
219
[CSR_HVIPRIO2] = { "hviprio2", aia_hmode, read_hviprio2, write_hviprio2 },
220
221
+ /* VS-Level Interrupts (H-extension with AIA) */
222
+ [CSR_VSTOPI] = { "vstopi", aia_hmode, read_vstopi },
223
+
224
/* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */
225
[CSR_HIDELEGH] = { "hidelegh", aia_hmode32, NULL, NULL, rmw_hidelegh },
226
[CSR_HVIENH] = { "hvienh", aia_hmode32, read_zero, write_ignore },
227
--
174
--
228
2.34.1
175
2.41.0
229
230
diff view generated by jsdifflib
From: Anup Patel <anup.patel@wdc.com>

We extend virt machine to emulate AIA APLIC devices only when
"aia=aplic" parameter is passed along with machine name in QEMU
command-line. When "aia=none" or not specified then we fall back
to the original PLIC device emulation.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220204174700.534953-20-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 include/hw/riscv/virt.h | 26 +++-
 hw/riscv/virt.c | 291 ++++++++++++++++++++++++++++++++--------
 hw/riscv/Kconfig | 1 +
 3 files changed, 259 insertions(+), 59 deletions(-)

diff --git a/include/hw/riscv/virt.h b/include/hw/riscv/virt.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/riscv/virt.h
+++ b/include/hw/riscv/virt.h
@@ -XXX,XX +XXX,XX @@ typedef struct RISCVVirtState RISCVVirtState;
 DECLARE_INSTANCE_CHECKER(RISCVVirtState, RISCV_VIRT_MACHINE,
                          TYPE_RISCV_VIRT_MACHINE)

+typedef enum RISCVVirtAIAType {
+    VIRT_AIA_TYPE_NONE = 0,
+    VIRT_AIA_TYPE_APLIC,
+} RISCVVirtAIAType;
+
 struct RISCVVirtState {
     /*< private >*/
     MachineState parent;

     /*< public >*/
     RISCVHartArrayState soc[VIRT_SOCKETS_MAX];
-    DeviceState *plic[VIRT_SOCKETS_MAX];
+    DeviceState *irqchip[VIRT_SOCKETS_MAX];
     PFlashCFI01 *flash[2];
     FWCfgState *fw_cfg;

     int fdt_size;
     bool have_aclint;
+    RISCVVirtAIAType aia_type;
 };

 enum {
@@ -XXX,XX +XXX,XX @@ enum {
     VIRT_CLINT,
     VIRT_ACLINT_SSWI,
     VIRT_PLIC,
+    VIRT_APLIC_M,
+    VIRT_APLIC_S,
     VIRT_UART0,
     VIRT_VIRTIO,
     VIRT_FW_CFG,
@@ -XXX,XX +XXX,XX @@ enum {
     VIRTIO_NDEV = 0x35 /* Arbitrary maximum number of interrupts */
 };

-#define VIRT_PLIC_NUM_SOURCES 127
-#define VIRT_PLIC_NUM_PRIORITIES 7
+#define VIRT_IRQCHIP_NUM_SOURCES 127
+#define VIRT_IRQCHIP_NUM_PRIO_BITS 3
+
 #define VIRT_PLIC_PRIORITY_BASE 0x04
 #define VIRT_PLIC_PENDING_BASE 0x1000
 #define VIRT_PLIC_ENABLE_BASE 0x2000
@@ -XXX,XX +XXX,XX @@ enum {

 #define FDT_PCI_ADDR_CELLS 3
 #define FDT_PCI_INT_CELLS 1
-#define FDT_PLIC_ADDR_CELLS 0
 #define FDT_PLIC_INT_CELLS 1
-#define FDT_INT_MAP_WIDTH (FDT_PCI_ADDR_CELLS + FDT_PCI_INT_CELLS + 1 + \
-                           FDT_PLIC_ADDR_CELLS + FDT_PLIC_INT_CELLS)
+#define FDT_APLIC_INT_CELLS 2
+#define FDT_MAX_INT_CELLS 2
+#define FDT_MAX_INT_MAP_WIDTH (FDT_PCI_ADDR_CELLS + FDT_PCI_INT_CELLS + \
+                               1 + FDT_MAX_INT_CELLS)
+#define FDT_PLIC_INT_MAP_WIDTH (FDT_PCI_ADDR_CELLS + FDT_PCI_INT_CELLS + \
+                                1 + FDT_PLIC_INT_CELLS)
+#define FDT_APLIC_INT_MAP_WIDTH (FDT_PCI_ADDR_CELLS + FDT_PCI_INT_CELLS + \
+                                 1 + FDT_APLIC_INT_CELLS)

 #endif
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/riscv/boot.h"
 #include "hw/riscv/numa.h"
 #include "hw/intc/riscv_aclint.h"
+#include "hw/intc/riscv_aplic.h"
 #include "hw/intc/sifive_plic.h"
 #include "hw/misc/sifive_test.h"
 #include "chardev/char.h"
@@ -XXX,XX +XXX,XX @@ static const MemMapEntry virt_memmap[] = {
     [VIRT_ACLINT_SSWI] = { 0x2F00000, 0x4000 },
     [VIRT_PCIE_PIO] = { 0x3000000, 0x10000 },
     [VIRT_PLIC] = { 0xc000000, VIRT_PLIC_SIZE(VIRT_CPUS_MAX * 2) },
+    [VIRT_APLIC_M] = { 0xc000000, APLIC_SIZE(VIRT_CPUS_MAX) },
+    [VIRT_APLIC_S] = { 0xd000000, APLIC_SIZE(VIRT_CPUS_MAX) },
     [VIRT_UART0] = { 0x10000000, 0x100 },
     [VIRT_VIRTIO] = { 0x10001000, 0x1000 },
     [VIRT_FW_CFG] = { 0x10100000, 0x18 },
@@ -XXX,XX +XXX,XX @@ static void virt_flash_map(RISCVVirtState *s,
                    sysmem);
 }

-static void create_pcie_irq_map(void *fdt, char *nodename,
-                                uint32_t plic_phandle)
+static void create_pcie_irq_map(RISCVVirtState *s, void *fdt, char *nodename,
+                                uint32_t irqchip_phandle)
 {
     int pin, dev;
-    uint32_t
-        full_irq_map[GPEX_NUM_IRQS * GPEX_NUM_IRQS * FDT_INT_MAP_WIDTH] = {};
+    uint32_t irq_map_stride = 0;
+    uint32_t full_irq_map[GPEX_NUM_IRQS * GPEX_NUM_IRQS *
+                          FDT_MAX_INT_MAP_WIDTH] = {};
     uint32_t *irq_map = full_irq_map;

     /* This code creates a standard swizzle of interrupts such that
@@ -XXX,XX +XXX,XX @@ static void create_pcie_irq_map(void *fdt, char *nodename,
             int irq_nr = PCIE_IRQ + ((pin + PCI_SLOT(devfn)) % GPEX_NUM_IRQS);
             int i = 0;

+            /* Fill PCI address cells */
             irq_map[i] = cpu_to_be32(devfn << 8);
-
             i += FDT_PCI_ADDR_CELLS;
-            irq_map[i] = cpu_to_be32(pin + 1);
+
+            /* Fill PCI Interrupt cells */
+            irq_map[i] = cpu_to_be32(pin + 1);
             i += FDT_PCI_INT_CELLS;
-            irq_map[i++] = cpu_to_be32(plic_phandle);

-            i += FDT_PLIC_ADDR_CELLS;
-            irq_map[i] = cpu_to_be32(irq_nr);
+            /* Fill interrupt controller phandle and cells */
+            irq_map[i++] = cpu_to_be32(irqchip_phandle);
+            irq_map[i++] = cpu_to_be32(irq_nr);
+            if (s->aia_type != VIRT_AIA_TYPE_NONE) {
+                irq_map[i++] = cpu_to_be32(0x4);
+            }

-            irq_map += FDT_INT_MAP_WIDTH;
+            if (!irq_map_stride) {
+                irq_map_stride = i;
+            }
+            irq_map += irq_map_stride;
         }
     }

-    qemu_fdt_setprop(fdt, nodename, "interrupt-map",
-                     full_irq_map, sizeof(full_irq_map));
+    qemu_fdt_setprop(fdt, nodename, "interrupt-map", full_irq_map,
+                     GPEX_NUM_IRQS * GPEX_NUM_IRQS *
+                     irq_map_stride * sizeof(uint32_t));

     qemu_fdt_setprop_cells(fdt, nodename, "interrupt-map-mask",
                            0x1800, 0, 0, 0x7);
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_plic(RISCVVirtState *s,
     plic_addr = memmap[VIRT_PLIC].base + (memmap[VIRT_PLIC].size * socket);
     plic_name = g_strdup_printf("/soc/plic@%lx", plic_addr);
     qemu_fdt_add_subnode(mc->fdt, plic_name);
-    qemu_fdt_setprop_cell(mc->fdt, plic_name,
-                          "#address-cells", FDT_PLIC_ADDR_CELLS);
     qemu_fdt_setprop_cell(mc->fdt, plic_name,
                           "#interrupt-cells", FDT_PLIC_INT_CELLS);
     qemu_fdt_setprop_string_array(mc->fdt, plic_name, "compatible",
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_plic(RISCVVirtState *s,
     g_free(plic_cells);
 }

+static void create_fdt_socket_aia(RISCVVirtState *s,
+                                  const MemMapEntry *memmap, int socket,
+                                  uint32_t *phandle, uint32_t *intc_phandles,
+                                  uint32_t *aplic_phandles)
+{
+    int cpu;
+    char *aplic_name;
+    uint32_t *aplic_cells;
+    unsigned long aplic_addr;
+    MachineState *mc = MACHINE(s);
+    uint32_t aplic_m_phandle, aplic_s_phandle;
+
+    aplic_m_phandle = (*phandle)++;
+    aplic_s_phandle = (*phandle)++;
+    aplic_cells = g_new0(uint32_t, s->soc[socket].num_harts * 2);
+
+    /* M-level APLIC node */
+    for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
+        aplic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
+        aplic_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_M_EXT);
+    }
+    aplic_addr = memmap[VIRT_APLIC_M].base +
+                 (memmap[VIRT_APLIC_M].size * socket);
+    aplic_name = g_strdup_printf("/soc/aplic@%lx", aplic_addr);
+    qemu_fdt_add_subnode(mc->fdt, aplic_name);
+    qemu_fdt_setprop_string(mc->fdt, aplic_name, "compatible", "riscv,aplic");
+    qemu_fdt_setprop_cell(mc->fdt, aplic_name,
+                          "#interrupt-cells", FDT_APLIC_INT_CELLS);
+    qemu_fdt_setprop(mc->fdt, aplic_name, "interrupt-controller", NULL, 0);
+    qemu_fdt_setprop(mc->fdt, aplic_name, "interrupts-extended",
+                     aplic_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 2);
+    qemu_fdt_setprop_cells(mc->fdt, aplic_name, "reg",
+                           0x0, aplic_addr, 0x0, memmap[VIRT_APLIC_M].size);
+    qemu_fdt_setprop_cell(mc->fdt, aplic_name, "riscv,num-sources",
+                          VIRT_IRQCHIP_NUM_SOURCES);
+    qemu_fdt_setprop_cell(mc->fdt, aplic_name, "riscv,children",
+                          aplic_s_phandle);
+    qemu_fdt_setprop_cells(mc->fdt, aplic_name, "riscv,delegate",
+                           aplic_s_phandle, 0x1, VIRT_IRQCHIP_NUM_SOURCES);
+    riscv_socket_fdt_write_id(mc, mc->fdt, aplic_name, socket);
+    qemu_fdt_setprop_cell(mc->fdt, aplic_name, "phandle", aplic_m_phandle);
+    g_free(aplic_name);
+
+    /* S-level APLIC node */
+    for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
+        aplic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
+        aplic_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_S_EXT);
+    }
+    aplic_addr = memmap[VIRT_APLIC_S].base +
+                 (memmap[VIRT_APLIC_S].size * socket);
+    aplic_name = g_strdup_printf("/soc/aplic@%lx", aplic_addr);
+    qemu_fdt_add_subnode(mc->fdt, aplic_name);
+    qemu_fdt_setprop_string(mc->fdt, aplic_name, "compatible", "riscv,aplic");
+    qemu_fdt_setprop_cell(mc->fdt, aplic_name,
+                          "#interrupt-cells", FDT_APLIC_INT_CELLS);
+    qemu_fdt_setprop(mc->fdt, aplic_name, "interrupt-controller", NULL, 0);
+    qemu_fdt_setprop(mc->fdt, aplic_name, "interrupts-extended",
+                     aplic_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 2);
+    qemu_fdt_setprop_cells(mc->fdt, aplic_name, "reg",
+                           0x0, aplic_addr, 0x0, memmap[VIRT_APLIC_S].size);
+    qemu_fdt_setprop_cell(mc->fdt, aplic_name, "riscv,num-sources",
+                          VIRT_IRQCHIP_NUM_SOURCES);
+    riscv_socket_fdt_write_id(mc, mc->fdt, aplic_name, socket);
+    qemu_fdt_setprop_cell(mc->fdt, aplic_name, "phandle", aplic_s_phandle);
+    g_free(aplic_name);
+
+    g_free(aplic_cells);
+    aplic_phandles[socket] = aplic_s_phandle;
+}
+
 static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
                                bool is_32_bit, uint32_t *phandle,
                                uint32_t *irq_mmio_phandle,
@@ -XXX,XX +XXX,XX @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
         }
     }

-    create_fdt_socket_plic(s, memmap, socket, phandle,
-                           intc_phandles, xplic_phandles);
+    if (s->aia_type == VIRT_AIA_TYPE_NONE) {
+        create_fdt_socket_plic(s, memmap, socket, phandle,
+                               intc_phandles, xplic_phandles);
+    } else {
+        create_fdt_socket_aia(s, memmap, socket, phandle,
+                              intc_phandles, xplic_phandles);
+    }

     g_free(intc_phandles);
     g_free(clust_name);
@@ -XXX,XX +XXX,XX @@ static void create_fdt_virtio(RISCVVirtState *s, const MemMapEntry *memmap,
                               0x0, memmap[VIRT_VIRTIO].size);
         qemu_fdt_setprop_cell(mc->fdt, name, "interrupt-parent",
                               irq_virtio_phandle);
-        qemu_fdt_setprop_cell(mc->fdt, name, "interrupts", VIRTIO_IRQ + i);
+        if (s->aia_type == VIRT_AIA_TYPE_NONE) {
+            qemu_fdt_setprop_cell(mc->fdt, name, "interrupts",
+                                  VIRTIO_IRQ + i);
+        } else {
+            qemu_fdt_setprop_cells(mc->fdt, name, "interrupts",
+                                   VIRTIO_IRQ + i, 0x4);
+        }
         g_free(name);
     }
 }
@@ -XXX,XX +XXX,XX @@ static void create_fdt_pcie(RISCVVirtState *s, const MemMapEntry *memmap,
         2, virt_high_pcie_memmap.base,
         2, virt_high_pcie_memmap.base, 2, virt_high_pcie_memmap.size);

-    create_pcie_irq_map(mc->fdt, name, irq_pcie_phandle);
+    create_pcie_irq_map(s, mc->fdt, name, irq_pcie_phandle);
     g_free(name);
 }

@@ -XXX,XX +XXX,XX @@ static void create_fdt_uart(RISCVVirtState *s, const MemMapEntry *memmap,
         0x0, memmap[VIRT_UART0].size);
     qemu_fdt_setprop_cell(mc->fdt, name, "clock-frequency", 3686400);
     qemu_fdt_setprop_cell(mc->fdt, name, "interrupt-parent", irq_mmio_phandle);
-    qemu_fdt_setprop_cell(mc->fdt, name, "interrupts", UART0_IRQ);
+    if (s->aia_type == VIRT_AIA_TYPE_NONE) {
+        qemu_fdt_setprop_cell(mc->fdt, name, "interrupts", UART0_IRQ);
+    } else {
+        qemu_fdt_setprop_cells(mc->fdt, name, "interrupts", UART0_IRQ, 0x4);
+    }

     qemu_fdt_add_subnode(mc->fdt, "/chosen");
     qemu_fdt_setprop_string(mc->fdt, "/chosen", "stdout-path", name);
@@ -XXX,XX +XXX,XX @@ static void create_fdt_rtc(RISCVVirtState *s, const MemMapEntry *memmap,
         0x0, memmap[VIRT_RTC].base, 0x0, memmap[VIRT_RTC].size);
     qemu_fdt_setprop_cell(mc->fdt, name, "interrupt-parent",
                           irq_mmio_phandle);
-    qemu_fdt_setprop_cell(mc->fdt, name, "interrupts", RTC_IRQ);
+    if (s->aia_type == VIRT_AIA_TYPE_NONE) {
+        qemu_fdt_setprop_cell(mc->fdt, name, "interrupts", RTC_IRQ);
+    } else {
+        qemu_fdt_setprop_cells(mc->fdt, name, "interrupts", RTC_IRQ, 0x4);
+    }
     g_free(name);
 }

@@ -XXX,XX +XXX,XX @@ static inline DeviceState *gpex_pcie_init(MemoryRegion *sys_mem,
                                           hwaddr high_mmio_base,
                                           hwaddr high_mmio_size,
                                           hwaddr pio_base,
-                                          DeviceState *plic)
+                                          DeviceState *irqchip)
 {
     DeviceState *dev;
     MemoryRegion *ecam_alias, *ecam_reg;
@@ -XXX,XX +XXX,XX @@ static inline DeviceState *gpex_pcie_init(MemoryRegion *sys_mem,
     sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, pio_base);

     for (i = 0; i < GPEX_NUM_IRQS; i++) {
-        irq = qdev_get_gpio_in(plic, PCIE_IRQ + i);
+        irq = qdev_get_gpio_in(irqchip, PCIE_IRQ + i);

         sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, irq);
         gpex_set_irq_num(GPEX_HOST(dev), i, PCIE_IRQ + i);
@@ -XXX,XX +XXX,XX @@ static FWCfgState *create_fw_cfg(const MachineState *mc)
     return fw_cfg;
 }

+static DeviceState *virt_create_plic(const MemMapEntry *memmap, int socket,
+                                     int base_hartid, int hart_count)
+{
+    DeviceState *ret;
+    char *plic_hart_config;
+
+    /* Per-socket PLIC hart topology configuration string */
+    plic_hart_config = riscv_plic_hart_config_string(hart_count);
+
+    /* Per-socket PLIC */
+    ret = sifive_plic_create(
+            memmap[VIRT_PLIC].base + socket * memmap[VIRT_PLIC].size,
+            plic_hart_config, hart_count, base_hartid,
+            VIRT_IRQCHIP_NUM_SOURCES,
+            ((1U << VIRT_IRQCHIP_NUM_PRIO_BITS) - 1),
+            VIRT_PLIC_PRIORITY_BASE,
+            VIRT_PLIC_PENDING_BASE,
+            VIRT_PLIC_ENABLE_BASE,
+            VIRT_PLIC_ENABLE_STRIDE,
+            VIRT_PLIC_CONTEXT_BASE,
+            VIRT_PLIC_CONTEXT_STRIDE,
+            memmap[VIRT_PLIC].size);
+
+    g_free(plic_hart_config);
+
+    return ret;
+}
+
+static DeviceState *virt_create_aia(RISCVVirtAIAType aia_type,
+                                    const MemMapEntry *memmap, int socket,
+                                    int base_hartid, int hart_count)
+{
+    DeviceState *aplic_m;
+
+    /* Per-socket M-level APLIC */
+    aplic_m = riscv_aplic_create(
+        memmap[VIRT_APLIC_M].base + socket * memmap[VIRT_APLIC_M].size,
+        memmap[VIRT_APLIC_M].size,
+        base_hartid, hart_count,
+        VIRT_IRQCHIP_NUM_SOURCES,
+        VIRT_IRQCHIP_NUM_PRIO_BITS,
+        false, true, NULL);
+
+    if (aplic_m) {
+        /* Per-socket S-level APLIC */
+        riscv_aplic_create(
+            memmap[VIRT_APLIC_S].base + socket * memmap[VIRT_APLIC_S].size,
+            memmap[VIRT_APLIC_S].size,
+            base_hartid, hart_count,
+            VIRT_IRQCHIP_NUM_SOURCES,
+            VIRT_IRQCHIP_NUM_PRIO_BITS,
+            false, false, aplic_m);
+    }
+
+    return aplic_m;
+}
+
 static void virt_machine_init(MachineState *machine)
 {
     const MemMapEntry *memmap = virt_memmap;
     RISCVVirtState *s = RISCV_VIRT_MACHINE(machine);
     MemoryRegion *system_memory = get_system_memory();
     MemoryRegion *mask_rom = g_new(MemoryRegion, 1);
-    char *plic_hart_config, *soc_name;
+    char *soc_name;
     target_ulong start_addr = memmap[VIRT_DRAM].base;
     target_ulong firmware_end_addr, kernel_start_addr;
     uint32_t fdt_load_addr;
     uint64_t kernel_entry;
-    DeviceState *mmio_plic, *virtio_plic, *pcie_plic;
+    DeviceState *mmio_irqchip, *virtio_irqchip, *pcie_irqchip;
     int i, base_hartid, hart_count;

     /* Check socket count limit */
@@ -XXX,XX +XXX,XX @@ static void virt_machine_init(MachineState *machine)
     }

     /* Initialize sockets */
-    mmio_plic = virtio_plic = pcie_plic = NULL;
+    mmio_irqchip = virtio_irqchip = pcie_irqchip = NULL;
     for (i = 0; i < riscv_socket_count(machine); i++) {
         if (!riscv_socket_check_hartids(machine, i)) {
             error_report("discontinuous hartids in socket%d", i);
@@ -XXX,XX +XXX,XX @@ static void virt_machine_init(MachineState *machine)
         }

-        /* Per-socket PLIC hart topology configuration string */
-        plic_hart_config = riscv_plic_hart_config_string(hart_count);
-
-        /* Per-socket PLIC */
-        s->plic[i] = sifive_plic_create(
-            memmap[VIRT_PLIC].base + i * memmap[VIRT_PLIC].size,
-            plic_hart_config, hart_count, base_hartid,
-            VIRT_PLIC_NUM_SOURCES,
-            VIRT_PLIC_NUM_PRIORITIES,
-            VIRT_PLIC_PRIORITY_BASE,
-            VIRT_PLIC_PENDING_BASE,
-            VIRT_PLIC_ENABLE_BASE,
-            VIRT_PLIC_ENABLE_STRIDE,
-            VIRT_PLIC_CONTEXT_BASE,
-            VIRT_PLIC_CONTEXT_STRIDE,
-            memmap[VIRT_PLIC].size);
-        g_free(plic_hart_config);
+        /* Per-socket interrupt controller */
+        if (s->aia_type == VIRT_AIA_TYPE_NONE) {
+            s->irqchip[i] = virt_create_plic(memmap, i,
+                                             base_hartid, hart_count);
+        } else {
+            s->irqchip[i] = virt_create_aia(s->aia_type, memmap, i,
+                                            base_hartid, hart_count);
+        }

-        /* Try to use different PLIC instance based device type */
+        /* Try to use different IRQCHIP instance based device type */
         if (i == 0) {
-            mmio_plic = s->plic[i];
-            virtio_plic = s->plic[i];
-            pcie_plic = s->plic[i];
+            mmio_irqchip = s->irqchip[i];
+            virtio_irqchip = s->irqchip[i];
+            pcie_irqchip = s->irqchip[i];
         }
         if (i == 1) {
-            virtio_plic = s->plic[i];
-            pcie_plic = s->plic[i];
+            virtio_irqchip = s->irqchip[i];
+            pcie_irqchip = s->irqchip[i];
         }
         if (i == 2) {
-            pcie_plic = s->plic[i];
+            pcie_irqchip = s->irqchip[i];
         }
     }

@@ -XXX,XX +XXX,XX @@ static void virt_machine_init(MachineState *machine)
     for (i = 0; i < VIRTIO_COUNT; i++) {
         sysbus_create_simple("virtio-mmio",
             memmap[VIRT_VIRTIO].base + i * memmap[VIRT_VIRTIO].size,
-            qdev_get_gpio_in(DEVICE(virtio_plic), VIRTIO_IRQ + i));
+            qdev_get_gpio_in(DEVICE(virtio_irqchip), VIRTIO_IRQ + i));
     }

     gpex_pcie_init(system_memory,
@@ -XXX,XX +XXX,XX @@ static void virt_machine_init(MachineState *machine)
                    virt_high_pcie_memmap.base,
                    virt_high_pcie_memmap.size,
                    memmap[VIRT_PCIE_PIO].base,
-                   DEVICE(pcie_plic));
+                   DEVICE(pcie_irqchip));

     serial_mm_init(system_memory, memmap[VIRT_UART0].base,
-        0, qdev_get_gpio_in(DEVICE(mmio_plic), UART0_IRQ), 399193,
+        0, qdev_get_gpio_in(DEVICE(mmio_irqchip), UART0_IRQ), 399193,
         serial_hd(0), DEVICE_LITTLE_ENDIAN);

     sysbus_create_simple("goldfish_rtc", memmap[VIRT_RTC].base,
-        qdev_get_gpio_in(DEVICE(mmio_plic), RTC_IRQ));
+        qdev_get_gpio_in(DEVICE(mmio_irqchip), RTC_IRQ));

     virt_flash_create(s);

@@ -XXX,XX +XXX,XX @@ static void virt_machine_instance_init(Object *obj)
 {
 }

+static char *virt_get_aia(Object *obj, Error **errp)
+{
+    RISCVVirtState *s = RISCV_VIRT_MACHINE(obj);
+    const char *val;
+
+    switch (s->aia_type) {
+    case VIRT_AIA_TYPE_APLIC:
+        val = "aplic";
+        break;
+    default:
+        val = "none";
+        break;
+    };
+
+    return g_strdup(val);
+}
+
+static void virt_set_aia(Object *obj, const char *val, Error **errp)
+{
+    RISCVVirtState *s = RISCV_VIRT_MACHINE(obj);
+
+    if (!strcmp(val, "none")) {
+        s->aia_type = VIRT_AIA_TYPE_NONE;
+    } else if (!strcmp(val, "aplic")) {
+        s->aia_type = VIRT_AIA_TYPE_APLIC;
+    } else {
+        error_setg(errp, "Invalid AIA interrupt controller type");
+        error_append_hint(errp, "Valid values are none, and aplic.\n");
+    }
+}
+
 static bool virt_get_aclint(Object *obj, Error **errp)
 {
     MachineState *ms = MACHINE(obj);
@@ -XXX,XX +XXX,XX @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
     object_class_property_set_description(oc, "aclint",
                                           "Set on/off to enable/disable "
                                           "emulating ACLINT devices");
+
+    object_class_property_add_str(oc, "aia", virt_get_aia,
+                                  virt_set_aia);
+    object_class_property_set_description(oc, "aia",
+                                          "Set type of AIA interrupt "
+                                          "conttoller. Valid values are "
+                                          "none, and aplic.");
 }

 static const TypeInfo virt_machine_typeinfo = {
diff --git a/hw/riscv/Kconfig b/hw/riscv/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/Kconfig
+++ b/hw/riscv/Kconfig
@@ -XXX,XX +XXX,XX @@ config RISCV_VIRT
     select PFLASH_CFI01
     select SERIAL
     select RISCV_ACLINT
+    select RISCV_APLIC
     select SIFIVE_PLIC
     select SIFIVE_TEST
     select VIRTIO_MMIO
-- 
2.34.1

From: Yong-Xuan Wang <yongxuan.wang@sifive.com>

Select KVM AIA when the host kernel has in-kernel AIA chip support.
Since KVM AIA only has one APLIC instance, we map the QEMU APLIC
devices to KVM APLIC.

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20230727102439.22554-6-yongxuan.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/riscv/virt.c | 94 +++++++++++++++++++++++++++++++++----------------
 1 file changed, 63 insertions(+), 31 deletions(-)

diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/riscv/virt.h"
 #include "hw/riscv/boot.h"
 #include "hw/riscv/numa.h"
+#include "kvm_riscv.h"
 #include "hw/intc/riscv_aclint.h"
 #include "hw/intc/riscv_aplic.h"
 #include "hw/intc/riscv_imsic.h"
@@ -XXX,XX +XXX,XX @@
 #error "Can't accommodate all IMSIC groups in address space"
 #endif

+/* KVM AIA only supports APLIC MSI. APLIC Wired is always emulated by QEMU. */
+static bool virt_use_kvm_aia(RISCVVirtState *s)
+{
+    return kvm_irqchip_in_kernel() && s->aia_type == VIRT_AIA_TYPE_APLIC_IMSIC;
+}
+
 static const MemMapEntry virt_memmap[] = {
     [VIRT_DEBUG] = { 0x0, 0x100 },
     [VIRT_MROM] = { 0x1000, 0xf000 },
@@ -XXX,XX +XXX,XX @@ static void create_fdt_one_aplic(RISCVVirtState *s, int socket,
                                  uint32_t *intc_phandles,
                                  uint32_t aplic_phandle,
                                  uint32_t aplic_child_phandle,
-                                 bool m_mode)
+                                 bool m_mode, int num_harts)
 {
     int cpu;
     char *aplic_name;
     uint32_t *aplic_cells;
     MachineState *ms = MACHINE(s);

-    aplic_cells = g_new0(uint32_t, s->soc[socket].num_harts * 2);
+    aplic_cells = g_new0(uint32_t, num_harts * 2);

-    for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
+    for (cpu = 0; cpu < num_harts; cpu++) {
         aplic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
         aplic_cells[cpu * 2 + 1] = cpu_to_be32(m_mode ? IRQ_M_EXT : IRQ_S_EXT);
     }
@@ -XXX,XX +XXX,XX @@ static void create_fdt_one_aplic(RISCVVirtState *s, int socket,

     if (s->aia_type == VIRT_AIA_TYPE_APLIC) {
         qemu_fdt_setprop(ms->fdt, aplic_name, "interrupts-extended",
-                         aplic_cells,
-                         s->soc[socket].num_harts * sizeof(uint32_t) * 2);
+                         aplic_cells, num_harts * sizeof(uint32_t) * 2);
     } else {
         qemu_fdt_setprop_cell(ms->fdt, aplic_name, "msi-parent", msi_phandle);
     }
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aplic(RISCVVirtState *s,
                                     uint32_t msi_s_phandle,
                                     uint32_t *phandle,
                                     uint32_t *intc_phandles,
-                                    uint32_t *aplic_phandles)
+                                    uint32_t *aplic_phandles,
+                                    int num_harts)
 {
     char *aplic_name;
     unsigned long aplic_addr;
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aplic(RISCVVirtState *s,
         create_fdt_one_aplic(s, socket, aplic_addr, memmap[VIRT_APLIC_M].size,
                              msi_m_phandle, intc_phandles,
                              aplic_m_phandle, aplic_s_phandle,
-                             true);
+                             true, num_harts);
     }

     /* S-level APLIC node */
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aplic(RISCVVirtState *s,
     create_fdt_one_aplic(s, socket, aplic_addr, memmap[VIRT_APLIC_S].size,
                          msi_s_phandle, intc_phandles,
                          aplic_s_phandle, 0,
-                         false);
+                         false, num_harts);

     aplic_name = g_strdup_printf("/soc/aplic@%lx", aplic_addr);

@@ -XXX,XX +XXX,XX @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
         *msi_pcie_phandle = msi_s_phandle;
     }

-    phandle_pos = ms->smp.cpus;
-    for (socket = (socket_count - 1); socket >= 0; socket--) {
-        phandle_pos -= s->soc[socket].num_harts;
-
-        if (s->aia_type == VIRT_AIA_TYPE_NONE) {
-            create_fdt_socket_plic(s, memmap, socket, phandle,
-                                   &intc_phandles[phandle_pos], xplic_phandles);
-        } else {
-            create_fdt_socket_aplic(s, memmap, socket,
-                                    msi_m_phandle, msi_s_phandle, phandle,
-                                    &intc_phandles[phandle_pos], xplic_phandles);
+    /* KVM AIA only has one APLIC instance */
+    if (virt_use_kvm_aia(s)) {
+        create_fdt_socket_aplic(s, memmap, 0,
+                                msi_m_phandle, msi_s_phandle, phandle,
+                                &intc_phandles[0], xplic_phandles,
+                                ms->smp.cpus);
+    } else {
+        phandle_pos = ms->smp.cpus;
+        for (socket = (socket_count - 1); socket >= 0; socket--) {
+            phandle_pos -= s->soc[socket].num_harts;
+
+            if (s->aia_type == VIRT_AIA_TYPE_NONE) {
+                create_fdt_socket_plic(s, memmap, socket, phandle,
+                                       &intc_phandles[phandle_pos],
+                                       xplic_phandles);
+            } else {
+                create_fdt_socket_aplic(s, memmap, socket,
+                                        msi_m_phandle, msi_s_phandle, phandle,
+                                        &intc_phandles[phandle_pos],
+                                        xplic_phandles,
+                                        s->soc[socket].num_harts);
+            }
         }
     }

     g_free(intc_phandles);

-    for (socket = 0; socket < socket_count; socket++) {
-        if (socket == 0) {
-            *irq_mmio_phandle = xplic_phandles[socket];
-            *irq_virtio_phandle = xplic_phandles[socket];
-            *irq_pcie_phandle = xplic_phandles[socket];
-        }
-        if (socket == 1) {
-            *irq_virtio_phandle = xplic_phandles[socket];
-            *irq_pcie_phandle = xplic_phandles[socket];
-        }
-        if (socket == 2) {
-            *irq_pcie_phandle = xplic_phandles[socket];
+    if (virt_use_kvm_aia(s)) {
+        *irq_mmio_phandle = xplic_phandles[0];
+        *irq_virtio_phandle = xplic_phandles[0];
+        *irq_pcie_phandle = xplic_phandles[0];
+    } else {
+        for (socket = 0; socket < socket_count; socket++) {
+            if (socket == 0) {
+                *irq_mmio_phandle = xplic_phandles[socket];
+                *irq_virtio_phandle = xplic_phandles[socket];
+                *irq_pcie_phandle = xplic_phandles[socket];
+            }
+            if (socket == 1) {
+                *irq_virtio_phandle = xplic_phandles[socket];
+                *irq_pcie_phandle = xplic_phandles[socket];
+            }
+            if (socket == 2) {
+                *irq_pcie_phandle = xplic_phandles[socket];
+            }
         }
     }

@@ -XXX,XX +XXX,XX @@ static void virt_machine_init(MachineState *machine)
     }

+    if (virt_use_kvm_aia(s)) {
+        kvm_riscv_aia_create(machine, IMSIC_MMIO_GROUP_MIN_SHIFT,
+                             VIRT_IRQCHIP_NUM_SOURCES, VIRT_IRQCHIP_NUM_MSIS,
+                             memmap[VIRT_APLIC_S].base,
+                             memmap[VIRT_IMSIC_S].base,
+                             s->aia_guests);
+    }
+
     if (riscv_is_32bit(&s->soc[0])) {
 #if HOST_LONG_BITS == 64
         /* limit RAM size in a 32-bit system */
-- 
2.41.0
From: Anup Patel <anup.patel@wdc.com>

To facilitate software development of RISC-V systems with a large number
of HARTs, we increase the maximum number of allowed CPUs to 512 (2^9).

We also add detailed source-level comments about the limit defines which
impact the physical address space utilization.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-24-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 include/hw/riscv/virt.h |  2 +-
 hw/riscv/virt.c         | 10 ++++++++++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/include/hw/riscv/virt.h b/include/hw/riscv/virt.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/riscv/virt.h
+++ b/include/hw/riscv/virt.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/block/flash.h"
 #include "qom/object.h"
 
-#define VIRT_CPUS_MAX_BITS 3
+#define VIRT_CPUS_MAX_BITS 9
 #define VIRT_CPUS_MAX (1 << VIRT_CPUS_MAX_BITS)
 #define VIRT_SOCKETS_MAX_BITS 2
 #define VIRT_SOCKETS_MAX (1 << VIRT_SOCKETS_MAX_BITS)
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/pci-host/gpex.h"
 #include "hw/display/ramfb.h"
 
+/*
+ * The virt machine physical address space used by some of the devices
+ * namely ACLINT, PLIC, APLIC, and IMSIC depend on number of Sockets,
+ * number of CPUs, and number of IMSIC guest files.
+ *
+ * Various limits defined by VIRT_SOCKETS_MAX_BITS, VIRT_CPUS_MAX_BITS,
+ * and VIRT_IRQCHIP_MAX_GUESTS_BITS are tuned for maximum utilization
+ * of virt machine physical address space.
+ */
+
 #define VIRT_IMSIC_GROUP_MAX_SIZE (1U << IMSIC_MMIO_GROUP_MIN_SHIFT)
 #if VIRT_IMSIC_GROUP_MAX_SIZE < \
     IMSIC_GROUP_SIZE(VIRT_CPUS_MAX_BITS, VIRT_IRQCHIP_MAX_GUESTS_BITS)
--
2.34.1

From: Conor Dooley <conor.dooley@microchip.com>

On a dtb dumped from the virt machine, dt-validate complains:
soc: pmu: {'riscv,event-to-mhpmcounters': [[1, 1, 524281], [2, 2, 524284], [65561, 65561, 524280], [65563, 65563, 524280], [65569, 65569, 524280]], 'compatible': ['riscv,pmu']} should not be valid under {'type': 'object'}
from schema $id: http://devicetree.org/schemas/simple-bus.yaml#
That's pretty cryptic, but running the dtb back through dtc produces
something a lot more reasonable:
Warning (simple_bus_reg): /soc/pmu: missing or empty reg/ranges property

Moving the riscv,pmu node out of the soc bus solves the problem.

Signed-off-by: Conor Dooley <conor.dooley@microchip.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20230727-groom-decline-2c57ce42841c@spud>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/riscv/virt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -XXX,XX +XXX,XX @@ static void create_fdt_pmu(RISCVVirtState *s)
     MachineState *ms = MACHINE(s);
     RISCVCPU hart = s->soc[0].harts[0];
 
-    pmu_name = g_strdup_printf("/soc/pmu");
+    pmu_name = g_strdup_printf("/pmu");
     qemu_fdt_add_subnode(ms->fdt, pmu_name);
     qemu_fdt_setprop_string(ms->fdt, pmu_name, "compatible", "riscv,pmu");
     riscv_pmu_generate_fdt_node(ms->fdt, hart.cfg.pmu_num, pmu_name);
--
2.41.0
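The limits in the first patch above are kept as bit widths so the derived counts stay powers of two; a quick standalone check of that arithmetic (macro names and values copied from the patch):

```c
#include <assert.h>

/* Values as set by the patch: 2^9 CPUs, 2^2 sockets */
#define VIRT_CPUS_MAX_BITS 9
#define VIRT_CPUS_MAX (1 << VIRT_CPUS_MAX_BITS)
#define VIRT_SOCKETS_MAX_BITS 2
#define VIRT_SOCKETS_MAX (1 << VIRT_SOCKETS_MAX_BITS)
```

The commit message's "512 (2^9)" follows directly: raising VIRT_CPUS_MAX_BITS from 3 to 9 takes the CPU cap from 8 to 512.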
From: Anup Patel <anup.patel@wdc.com>

A hypervisor can optionally take guest external interrupts using the
SGEIP bit of the hip and hie CSRs.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-3-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_bits.h |  3 +++
 target/riscv/cpu.c      |  3 ++-
 target/riscv/csr.c      | 18 +++++++++++-------
 3 files changed, 16 insertions(+), 8 deletions(-)

diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_bits.h
+++ b/target/riscv/cpu_bits.h
@@ -XXX,XX +XXX,XX @@ typedef enum RISCVException {
 #define IRQ_S_EXT 9
 #define IRQ_VS_EXT 10
 #define IRQ_M_EXT 11
+#define IRQ_S_GEXT 12
+#define IRQ_LOCAL_MAX 16
 
 /* mip masks */
 #define MIP_USIP (1 << IRQ_U_SOFT)
@@ -XXX,XX +XXX,XX @@ typedef enum RISCVException {
 #define MIP_SEIP (1 << IRQ_S_EXT)
 #define MIP_VSEIP (1 << IRQ_VS_EXT)
 #define MIP_MEIP (1 << IRQ_M_EXT)
+#define MIP_SGEIP (1 << IRQ_S_GEXT)
 
 /* sip masks */
 #define SIP_SSIP MIP_SSIP
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset(DeviceState *dev)
         }
     }
     env->mcause = 0;
+    env->miclaim = MIP_SGEIP;
     env->pc = env->resetvec;
     env->two_stage_lookup = false;
     /* mmte is supposed to have pm.current hardwired to 1 */
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_init(Object *obj)
     cpu_set_cpustate_pointers(cpu);
 
 #ifndef CONFIG_USER_ONLY
-    qdev_init_gpio_in(DEVICE(cpu), riscv_cpu_set_irq, 12);
+    qdev_init_gpio_in(DEVICE(cpu), riscv_cpu_set_irq, IRQ_LOCAL_MAX);
 #endif /* CONFIG_USER_ONLY */
 }
 
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static RISCVException read_timeh(CPURISCVState *env, int csrno,
 #define M_MODE_INTERRUPTS (MIP_MSIP | MIP_MTIP | MIP_MEIP)
 #define S_MODE_INTERRUPTS (MIP_SSIP | MIP_STIP | MIP_SEIP)
 #define VS_MODE_INTERRUPTS (MIP_VSSIP | MIP_VSTIP | MIP_VSEIP)
+#define HS_MODE_INTERRUPTS (MIP_SGEIP | VS_MODE_INTERRUPTS)
 
 static const target_ulong delegable_ints = S_MODE_INTERRUPTS |
                                            VS_MODE_INTERRUPTS;
 static const target_ulong vs_delegable_ints = VS_MODE_INTERRUPTS;
 static const target_ulong all_ints = M_MODE_INTERRUPTS | S_MODE_INTERRUPTS |
-                                     VS_MODE_INTERRUPTS;
+                                     HS_MODE_INTERRUPTS;
 #define DELEGABLE_EXCPS ((1ULL << (RISCV_EXCP_INST_ADDR_MIS)) | \
                          (1ULL << (RISCV_EXCP_INST_ACCESS_FAULT)) | \
                          (1ULL << (RISCV_EXCP_ILLEGAL_INST)) | \
@@ -XXX,XX +XXX,XX @@ static RISCVException write_mideleg(CPURISCVState *env, int csrno,
 {
     env->mideleg = (env->mideleg & ~delegable_ints) | (val & delegable_ints);
     if (riscv_has_ext(env, RVH)) {
-        env->mideleg |= VS_MODE_INTERRUPTS;
+        env->mideleg |= HS_MODE_INTERRUPTS;
     }
     return RISCV_EXCP_NONE;
 }
@@ -XXX,XX +XXX,XX @@ static RISCVException write_mie(CPURISCVState *env, int csrno,
                                 target_ulong val)
 {
     env->mie = (env->mie & ~all_ints) | (val & all_ints);
+    if (!riscv_has_ext(env, RVH)) {
+        env->mie &= ~MIP_SGEIP;
+    }
     return RISCV_EXCP_NONE;
 }
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_sip(CPURISCVState *env, int csrno,
 }
 
     if (ret_value) {
-        *ret_value &= env->mideleg;
+        *ret_value &= env->mideleg & S_MODE_INTERRUPTS;
     }
     return ret;
 }
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_hvip(CPURISCVState *env, int csrno,
                           write_mask & hvip_writable_mask);
 
     if (ret_value) {
-        *ret_value &= hvip_writable_mask;
+        *ret_value &= VS_MODE_INTERRUPTS;
     }
     return ret;
 }
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_hip(CPURISCVState *env, int csrno,
                          write_mask & hip_writable_mask);
 
     if (ret_value) {
-        *ret_value &= hip_writable_mask;
+        *ret_value &= HS_MODE_INTERRUPTS;
     }
     return ret;
 }
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_hip(CPURISCVState *env, int csrno,
 static RISCVException read_hie(CPURISCVState *env, int csrno,
                                target_ulong *val)
 {
-    *val = env->mie & VS_MODE_INTERRUPTS;
+    *val = env->mie & HS_MODE_INTERRUPTS;
     return RISCV_EXCP_NONE;
 }
 
 static RISCVException write_hie(CPURISCVState *env, int csrno,
                                 target_ulong val)
 {
-    target_ulong newval = (env->mie & ~VS_MODE_INTERRUPTS) | (val & VS_MODE_INTERRUPTS);
+    target_ulong newval = (env->mie & ~HS_MODE_INTERRUPTS) | (val & HS_MODE_INTERRUPTS);
     return write_mie(env, CSR_MIE, newval);
 }
--
2.34.1

From: Weiwei Li <liweiwei@iscas.ac.cn>

The Svadu specification updated the name of the *envcfg bit from
HADE to ADUE.

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20230816141916.66898-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_bits.h   |  8 ++++----
 target/riscv/cpu.c        |  4 ++--
 target/riscv/cpu_helper.c |  6 +++---
 target/riscv/csr.c        | 12 ++++++------
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_bits.h
+++ b/target/riscv/cpu_bits.h
@@ -XXX,XX +XXX,XX @@ typedef enum RISCVException {
 #define MENVCFG_CBIE (3UL << 4)
 #define MENVCFG_CBCFE BIT(6)
 #define MENVCFG_CBZE BIT(7)
-#define MENVCFG_HADE (1ULL << 61)
+#define MENVCFG_ADUE (1ULL << 61)
 #define MENVCFG_PBMTE (1ULL << 62)
 #define MENVCFG_STCE (1ULL << 63)
 
 /* For RV32 */
-#define MENVCFGH_HADE BIT(29)
+#define MENVCFGH_ADUE BIT(29)
 #define MENVCFGH_PBMTE BIT(30)
 #define MENVCFGH_STCE BIT(31)
 
@@ -XXX,XX +XXX,XX @@ typedef enum RISCVException {
 #define HENVCFG_CBIE MENVCFG_CBIE
 #define HENVCFG_CBCFE MENVCFG_CBCFE
 #define HENVCFG_CBZE MENVCFG_CBZE
-#define HENVCFG_HADE MENVCFG_HADE
+#define HENVCFG_ADUE MENVCFG_ADUE
 #define HENVCFG_PBMTE MENVCFG_PBMTE
 #define HENVCFG_STCE MENVCFG_STCE
 
 /* For RV32 */
-#define HENVCFGH_HADE MENVCFGH_HADE
+#define HENVCFGH_ADUE MENVCFGH_ADUE
 #define HENVCFGH_PBMTE MENVCFGH_PBMTE
 #define HENVCFGH_STCE MENVCFGH_STCE
 
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset_hold(Object *obj)
     env->two_stage_lookup = false;
 
     env->menvcfg = (cpu->cfg.ext_svpbmt ? MENVCFG_PBMTE : 0) |
-                   (cpu->cfg.ext_svadu ? MENVCFG_HADE : 0);
+                   (cpu->cfg.ext_svadu ? MENVCFG_ADUE : 0);
     env->henvcfg = (cpu->cfg.ext_svpbmt ? HENVCFG_PBMTE : 0) |
-                   (cpu->cfg.ext_svadu ? HENVCFG_HADE : 0);
+                   (cpu->cfg.ext_svadu ? HENVCFG_ADUE : 0);
 
     /* Initialized default priorities of local interrupts. */
     for (i = 0; i < ARRAY_SIZE(env->miprio); i++) {
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -XXX,XX +XXX,XX @@ static int get_physical_address(CPURISCVState *env, hwaddr *physical,
     }
 
     bool pbmte = env->menvcfg & MENVCFG_PBMTE;
-    bool hade = env->menvcfg & MENVCFG_HADE;
+    bool adue = env->menvcfg & MENVCFG_ADUE;
 
     if (first_stage && two_stage && env->virt_enabled) {
         pbmte = pbmte && (env->henvcfg & HENVCFG_PBMTE);
-        hade = hade && (env->henvcfg & HENVCFG_HADE);
+        adue = adue && (env->henvcfg & HENVCFG_ADUE);
     }
 
     int ptshift = (levels - 1) * ptidxbits;
@@ -XXX,XX +XXX,XX @@ restart:
 
     /* Page table updates need to be atomic with MTTCG enabled */
     if (updated_pte != pte && !is_debug) {
-        if (!hade) {
+        if (!adue) {
             return TRANSLATE_FAIL;
         }
 }
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static RISCVException write_menvcfg(CPURISCVState *env, int csrno,
     if (riscv_cpu_mxl(env) == MXL_RV64) {
         mask |= (cfg->ext_svpbmt ? MENVCFG_PBMTE : 0) |
                 (cfg->ext_sstc ? MENVCFG_STCE : 0) |
-                (cfg->ext_svadu ? MENVCFG_HADE : 0);
+                (cfg->ext_svadu ? MENVCFG_ADUE : 0);
     }
     env->menvcfg = (env->menvcfg & ~mask) | (val & mask);
 
@@ -XXX,XX +XXX,XX @@ static RISCVException write_menvcfgh(CPURISCVState *env, int csrno,
     const RISCVCPUConfig *cfg = riscv_cpu_cfg(env);
     uint64_t mask = (cfg->ext_svpbmt ? MENVCFG_PBMTE : 0) |
                     (cfg->ext_sstc ? MENVCFG_STCE : 0) |
-                    (cfg->ext_svadu ? MENVCFG_HADE : 0);
+                    (cfg->ext_svadu ? MENVCFG_ADUE : 0);
     uint64_t valh = (uint64_t)val << 32;
 
     env->menvcfg = (env->menvcfg & ~mask) | (valh & mask);
@@ -XXX,XX +XXX,XX @@ static RISCVException read_henvcfg(CPURISCVState *env, int csrno,
      * henvcfg.stce is read_only 0 when menvcfg.stce = 0
      * henvcfg.hade is read_only 0 when menvcfg.hade = 0
      */
-    *val = env->henvcfg & (~(HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_HADE) |
+    *val = env->henvcfg & (~(HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_ADUE) |
                            env->menvcfg);
     return RISCV_EXCP_NONE;
 }
@@ -XXX,XX +XXX,XX @@ static RISCVException write_henvcfg(CPURISCVState *env, int csrno,
 }
 
     if (riscv_cpu_mxl(env) == MXL_RV64) {
-        mask |= env->menvcfg & (HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_HADE);
+        mask |= env->menvcfg & (HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_ADUE);
     }
 
     env->henvcfg = (env->henvcfg & ~mask) | (val & mask);
@@ -XXX,XX +XXX,XX @@ static RISCVException read_henvcfgh(CPURISCVState *env, int csrno,
     return ret;
 }
 
-    *val = (env->henvcfg & (~(HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_HADE) |
+    *val = (env->henvcfg & (~(HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_ADUE) |
            env->menvcfg)) >> 32;
     return RISCV_EXCP_NONE;
 }
@@ -XXX,XX +XXX,XX @@ static RISCVException write_henvcfgh(CPURISCVState *env, int csrno,
                                      target_ulong val)
 {
     uint64_t mask = env->menvcfg & (HENVCFG_PBMTE | HENVCFG_STCE |
-                                    HENVCFG_HADE);
+                                    HENVCFG_ADUE);
     uint64_t valh = (uint64_t)val << 32;
     RISCVException ret;
--
2.41.0
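The csr.c hunks in the first patch above compose per-mode interrupt sets from mip bit positions, so HS_MODE_INTERRUPTS is just the VS-mode set plus SGEIP. A standalone sketch of that composition (mask names copied from the patch; the VS bit positions are the architectural mip bit numbers, included here for self-containment):

```c
#include <assert.h>

/* Architectural mip bit positions */
#define IRQ_VS_SOFT   2
#define IRQ_VS_TIMER  6
#define IRQ_VS_EXT    10
#define IRQ_S_GEXT    12

#define MIP_VSSIP (1 << IRQ_VS_SOFT)
#define MIP_VSTIP (1 << IRQ_VS_TIMER)
#define MIP_VSEIP (1 << IRQ_VS_EXT)
#define MIP_SGEIP (1 << IRQ_S_GEXT)

#define VS_MODE_INTERRUPTS (MIP_VSSIP | MIP_VSTIP | MIP_VSEIP)
#define HS_MODE_INTERRUPTS (MIP_SGEIP | VS_MODE_INTERRUPTS)

/* After the patch, read_hie() exposes the HS-mode subset of mie,
 * so SGEIP becomes visible through hie */
static unsigned read_hie_sketch(unsigned mie)
{
    return mie & HS_MODE_INTERRUPTS;
}
```

This is why the patch swaps VS_MODE_INTERRUPTS for HS_MODE_INTERRUPTS in the hie/hip accessors: the masks differ exactly in bit 12.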
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>

The addition of uxl support in gdbstub added a few checks on the maximum
register length, but omitted MXL_RV128, an experimental feature.
This patch makes rv128 react as rv64, as previously.

Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220124202456.420258-1-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.c     | 3 +--
 target/riscv/gdbstub.c | 3 +++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
     switch (env->misa_mxl_max) {
 #ifdef TARGET_RISCV64
     case MXL_RV64:
-        cc->gdb_core_xml_file = "riscv-64bit-cpu.xml";
-        break;
     case MXL_RV128:
+        cc->gdb_core_xml_file = "riscv-64bit-cpu.xml";
         break;
 #endif
     case MXL_RV32:
diff --git a/target/riscv/gdbstub.c b/target/riscv/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/gdbstub.c
+++ b/target/riscv/gdbstub.c
@@ -XXX,XX +XXX,XX @@ int riscv_cpu_gdb_read_register(CPUState *cs, GByteArray *mem_buf, int n)
     case MXL_RV32:
         return gdb_get_reg32(mem_buf, tmp);
     case MXL_RV64:
+    case MXL_RV128:
         return gdb_get_reg64(mem_buf, tmp);
     default:
         g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ int riscv_cpu_gdb_write_register(CPUState *cs, uint8_t *mem_buf, int n)
         length = 4;
         break;
     case MXL_RV64:
+    case MXL_RV128:
         if (env->xl < MXL_RV64) {
             tmp = (int32_t)ldq_p(mem_buf);
         } else {
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_register_gdb_regs_for_features(CPUState *cs)
                                  1, "riscv-32bit-virtual.xml", 0);
         break;
     case MXL_RV64:
+    case MXL_RV128:
         gdb_register_coprocessor(cs, riscv_gdb_get_virtual,
                                  riscv_gdb_set_virtual,
                                  1, "riscv-64bit-virtual.xml", 0);
--
2.34.1

From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

In the same emulated RISC-V host, the 'host' KVM CPU takes 4 times
longer to boot than the 'rv64' KVM CPU.

The reason is an unintended behavior of riscv_cpu_satp_mode_finalize()
when satp_mode.supported = 0, i.e. when cpu_init() does not set
satp_mode_max_supported(). satp_mode_max_from_map(map) does:

31 - __builtin_clz(map)

This means that, if satp_mode.supported = 0, satp_mode_supported_max
will be '31 - 32'. But this is C, so satp_mode_supported_max will gladly
be set to UINT_MAX (4294967295). After that, if the user didn't set a
satp_mode, set_satp_mode_default_map(cpu) will make

cfg.satp_mode.map = cfg.satp_mode.supported

So satp_mode.map = 0. And then satp_mode_map_max will be set to
satp_mode_max_from_map(cpu->cfg.satp_mode.map), i.e. also UINT_MAX. The
guard "satp_mode_map_max > satp_mode_supported_max" doesn't protect us
here since both are UINT_MAX.

And finally we have 2 loops:

for (int i = satp_mode_map_max - 1; i >= 0; --i) {

Which are, in fact, 2 loops from UINT_MAX - 1 to -1. This is where the
extra delay when booting the 'host' CPU is coming from.

Commit 43d1de32f8 already set a precedent for satp_mode.supported = 0
in a different manner. We're doing the same here. If supported == 0,
interpret it as 'the CPU wants the OS to handle satp mode alone' and skip
satp_mode_finalize().

We'll also put a guard in satp_mode_max_from_map() to assert out if map
is 0 since the function is not ready to deal with it.

Cc: Alexandre Ghiti <alexghiti@rivosinc.com>
Fixes: 6f23aaeb9b ("riscv: Allow user to set the satp mode")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20230817152903.694926-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static uint8_t satp_mode_from_str(const char *satp_mode_str)
 
 uint8_t satp_mode_max_from_map(uint32_t map)
 {
+    /*
+     * 'map = 0' will make us return (31 - 32), which C will
+     * happily overflow to UINT_MAX. There's no good result to
+     * return if 'map = 0' (e.g. returning 0 will be ambiguous
+     * with the result for 'map = 1').
+     *
+     * Assert out if map = 0. Callers will have to deal with
+     * it outside of this function.
+     */
+    g_assert(map > 0);
+
     /* map here has at least one bit set, so no problem with clz */
     return 31 - __builtin_clz(map);
 }
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
 static void riscv_cpu_satp_mode_finalize(RISCVCPU *cpu, Error **errp)
 {
     bool rv32 = riscv_cpu_mxl(&cpu->env) == MXL_RV32;
-    uint8_t satp_mode_map_max;
-    uint8_t satp_mode_supported_max =
-        satp_mode_max_from_map(cpu->cfg.satp_mode.supported);
+    uint8_t satp_mode_map_max, satp_mode_supported_max;
+
+    /* The CPU wants the OS to decide which satp mode to use */
+    if (cpu->cfg.satp_mode.supported == 0) {
+        return;
+    }
+
+    satp_mode_supported_max =
+        satp_mode_max_from_map(cpu->cfg.satp_mode.supported);
 
     if (cpu->cfg.satp_mode.map == 0) {
         if (cpu->cfg.satp_mode.init == 0) {
--
2.41.0
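The overflow described in the satp_mode commit message is easy to reproduce outside QEMU; a minimal sketch of satp_mode_max_from_map() with the added guard (standalone, with plain assert standing in for g_assert):

```c
#include <assert.h>
#include <stdint.h>

/*
 * 31 - clz(map) is the index of the highest set bit. With map == 0,
 * __builtin_clz() is undefined and '31 - 32' would wrap, so the
 * patch guards against it and makes callers handle the 0 case.
 */
static uint8_t satp_mode_max_from_map(uint32_t map)
{
    assert(map > 0);
    /* map here has at least one bit set, so no problem with clz */
    return 31 - __builtin_clz(map);
}
```

With the guard in place the finalize path must check `supported == 0` before calling in, which is exactly what the patch adds.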
From: Anup Patel <anup.patel@wdc.com>

We add an "x-aia" command-line option for RISC-V HARTs, which allows
users to force-enable CPU AIA CSRs without changing the interrupt
controller available in the RISC-V machine.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-18-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.h | 1 +
 target/riscv/cpu.c | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
     bool mmu;
     bool pmp;
     bool epmp;
+    bool aia;
     uint64_t resetvec;
 };
 
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
         }
     }
 
+    if (cpu->cfg.aia) {
+        riscv_set_feature(env, RISCV_FEATURE_AIA);
+    }
+
     set_resetvec(env, cpu->cfg.resetvec);
 
     /* Validate that MISA_MXL is set properly. */
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_properties[] = {
     DEFINE_PROP_BOOL("x-j", RISCVCPU, cfg.ext_j, false),
     /* ePMP 0.9.3 */
     DEFINE_PROP_BOOL("x-epmp", RISCVCPU, cfg.epmp, false),
+    DEFINE_PROP_BOOL("x-aia", RISCVCPU, cfg.aia, false),
 
     DEFINE_PROP_UINT64("resetvec", RISCVCPU, cfg.resetvec, DEFAULT_RSTVEC),
     DEFINE_PROP_END_OF_LIST(),
--
2.34.1

From: Vineet Gupta <vineetg@rivosinc.com>

zicond is now codegen supported in both llvm and gcc.

This change allows seamless enabling/testing of zicond in downstream
projects, e.g. currently riscv-gnu-toolchain parses ELF attributes
to create a cmdline for QEMU but falls short of enabling it because of
the "x-" prefix.

Signed-off-by: Vineet Gupta <vineetg@rivosinc.com>
Message-ID: <20230808181715.436395-1-vineetg@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
     DEFINE_PROP_BOOL("zcf", RISCVCPU, cfg.ext_zcf, false),
     DEFINE_PROP_BOOL("zcmp", RISCVCPU, cfg.ext_zcmp, false),
     DEFINE_PROP_BOOL("zcmt", RISCVCPU, cfg.ext_zcmt, false),
+    DEFINE_PROP_BOOL("zicond", RISCVCPU, cfg.ext_zicond, false),
 
     /* Vendor-specific custom extensions */
     DEFINE_PROP_BOOL("xtheadba", RISCVCPU, cfg.ext_xtheadba, false),
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
     DEFINE_PROP_BOOL("xventanacondops", RISCVCPU, cfg.ext_XVentanaCondOps, false),
 
     /* These are experimental so mark with 'x-' */
-    DEFINE_PROP_BOOL("x-zicond", RISCVCPU, cfg.ext_zicond, false),
 
     /* ePMP 0.9.3 */
     DEFINE_PROP_BOOL("x-epmp", RISCVCPU, cfg.epmp, false),
--
2.41.0
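Both patches above hinge on CPU properties being looked up by their exact name string. A toy version of a DEFINE_PROP_BOOL-style table (entirely illustrative; QEMU's real Property machinery is far richer) shows why a tool that generates `-cpu ...,zicond=true` from ELF attributes fails while the property is still spelled "x-zicond":

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Toy property descriptor: name -> location of the flag */
typedef struct {
    const char *name;
    bool *value;
} Prop;

static bool ext_zicond;
static bool cfg_aia;

static const Prop props[] = {
    { "zicond", &ext_zicond }, /* after the rename: stable name */
    { "x-aia",  &cfg_aia },    /* experimental: "x-" prefix kept */
};

/* returns 0 on success, -1 if the property name is unknown */
static int set_prop(const char *name, bool v)
{
    for (size_t i = 0; i < sizeof(props) / sizeof(props[0]); i++) {
        if (!strcmp(props[i].name, name)) {
            *props[i].value = v;
            return 0;
        }
    }
    return -1;
}
```

Under the old spelling, `set_prop("zicond", true)` is simply an unknown-property error, which is the "falls short of enabling it" failure the zicond commit message describes.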
From: Anup Patel <anup.patel@wdc.com>

We should use the AIA INTC compatible string in the CPU INTC
DT nodes when the CPUs support the AIA feature. This will allow
the Linux INTC driver to use AIA local interrupt CSRs.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-17-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/riscv/virt.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_cpus(RISCVVirtState *s, int socket,
     qemu_fdt_add_subnode(mc->fdt, intc_name);
     qemu_fdt_setprop_cell(mc->fdt, intc_name, "phandle",
                           intc_phandles[cpu]);
-    qemu_fdt_setprop_string(mc->fdt, intc_name, "compatible",
-                            "riscv,cpu-intc");
+    if (riscv_feature(&s->soc[socket].harts[cpu].env,
+                      RISCV_FEATURE_AIA)) {
+        static const char * const compat[2] = {
+            "riscv,cpu-intc-aia", "riscv,cpu-intc"
+        };
+        qemu_fdt_setprop_string_array(mc->fdt, intc_name, "compatible",
+                                      (char **)&compat, ARRAY_SIZE(compat));
+    } else {
+        qemu_fdt_setprop_string(mc->fdt, intc_name, "compatible",
+                                "riscv,cpu-intc");
+    }
     qemu_fdt_setprop(mc->fdt, intc_name, "interrupt-controller", NULL, 0);
     qemu_fdt_setprop_cell(mc->fdt, intc_name, "#interrupt-cells", 1);
 
--
2.34.1

From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

A build with --enable-debug and without KVM will fail as follows:

/usr/bin/ld: libqemu-riscv64-softmmu.fa.p/hw_riscv_virt.c.o: in function `virt_machine_init':
./qemu/build/../hw/riscv/virt.c:1465: undefined reference to `kvm_riscv_aia_create'

This happens because the code block with "if virt_use_kvm_aia(s)" isn't
being ignored by the debug build, resulting in an undefined reference to
a KVM only function.

Adding a 'kvm_enabled()' conditional together with virt_use_kvm_aia() will
make the compiler crop the kvm_riscv_aia_create() call entirely from a
non-KVM build. Note that adding the 'kvm_enabled()' conditional inside
virt_use_kvm_aia() won't fix the build because this function would need
to be inlined multiple times to make the compiler zero out the entire
block.

While we're at it, use kvm_enabled() in all instances where
virt_use_kvm_aia() is checked to allow the compiler to elide these other
kvm-only instances as well.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Fixes: dbdb99948e ("target/riscv: select KVM AIA in riscv virt machine")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20230830133503.711138-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/riscv/virt.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -XXX,XX +XXX,XX @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
 }
 
     /* KVM AIA only has one APLIC instance */
-    if (virt_use_kvm_aia(s)) {
+    if (kvm_enabled() && virt_use_kvm_aia(s)) {
         create_fdt_socket_aplic(s, memmap, 0,
                                 msi_m_phandle, msi_s_phandle, phandle,
                                 &intc_phandles[0], xplic_phandles,
@@ -XXX,XX +XXX,XX @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
 
     g_free(intc_phandles);
 
-    if (virt_use_kvm_aia(s)) {
+    if (kvm_enabled() && virt_use_kvm_aia(s)) {
         *irq_mmio_phandle = xplic_phandles[0];
         *irq_virtio_phandle = xplic_phandles[0];
         *irq_pcie_phandle = xplic_phandles[0];
@@ -XXX,XX +XXX,XX @@ static void virt_machine_init(MachineState *machine)
         }
     }
 
-    if (virt_use_kvm_aia(s)) {
+    if (kvm_enabled() && virt_use_kvm_aia(s)) {
         kvm_riscv_aia_create(machine, IMSIC_MMIO_GROUP_MIN_SHIFT,
                              VIRT_IRQCHIP_NUM_SOURCES, VIRT_IRQCHIP_NUM_MSIS,
                              memmap[VIRT_APLIC_S].base,
--
2.41.0
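The `kvm_enabled() && virt_use_kvm_aia(s)` fix relies on `kvm_enabled()` being a compile-time constant 0 in a CONFIG_KVM=n build, so the guarded block, including its reference to the KVM-only symbol, is dropped even at -O0. A standalone sketch of the pattern (names illustrative; the macro stands in for QEMU's real kvm_enabled()):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for QEMU's kvm_enabled(): a compile-time 0 when the
 * build has no KVM support, letting dead-code elimination remove
 * the guarded block together with its KVM-only calls. */
#define kvm_enabled() 0

static int kvm_aia_checked;

static bool virt_use_kvm_aia(void)
{
    kvm_aia_checked = 1; /* would consult machine state in QEMU */
    return true;
}

/* returns 1 if the KVM AIA path would be taken */
static int pick_irqchip(void)
{
    if (kvm_enabled() && virt_use_kvm_aia()) {
        return 1; /* kvm_riscv_aia_create(...) would go here */
    }
    return 0;
}
```

Because `&&` short-circuits on the constant 0, `virt_use_kvm_aia()` is never even evaluated, which is also why putting the check inside that helper would not have fixed the link error.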
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

Commit 6df0b37e2ab breaks a --enable-debug build in a non-KVM
environment with the following error:

/usr/bin/ld: libqemu-riscv64-softmmu.fa.p/hw_intc_riscv_aplic.c.o: in function `riscv_kvm_aplic_request':
./qemu/build/../hw/intc/riscv_aplic.c:486: undefined reference to `kvm_set_irq'
collect2: error: ld returned 1 exit status

This happens because the debug build will poke into the
'if (is_kvm_aia(aplic->msimode))' block and fail to find a reference to
the KVM only function riscv_kvm_aplic_request().

There are multiple solutions to fix this. We'll go with the same
solution from the previous patch, i.e. add a kvm_enabled() conditional
to filter out the block. But there's a catch: riscv_kvm_aplic_request()
is a local function that would end up being unused if the compiler crops
the block, and this won't work. Quoting Richard Henderson's explanation
in [1]:

"(...) the compiler won't eliminate entire unused functions with -O0"

We'll solve it by moving riscv_kvm_aplic_request() to kvm.c and adding its
declaration in kvm_riscv.h, where all other KVM specific public
functions are already declared. Other archs handle KVM specific code in
this manner and we expect to do the same from now on.

[1] https://lore.kernel.org/qemu-riscv/d2f1ad02-eb03-138f-9d08-db676deeed05@linaro.org/

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20230830133503.711138-3-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/kvm_riscv.h | 1 +
 hw/intc/riscv_aplic.c    | 8 ++------
 target/riscv/kvm.c       | 5 +++++
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/target/riscv/kvm_riscv.h b/target/riscv/kvm_riscv.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/kvm_riscv.h
+++ b/target/riscv/kvm_riscv.h
@@ -XXX,XX +XXX,XX @@ void kvm_riscv_aia_create(MachineState *machine, uint64_t group_shift,
                           uint64_t aia_irq_num, uint64_t aia_msi_num,
                           uint64_t aplic_base, uint64_t imsic_base,
                           uint64_t guest_num);
+void riscv_kvm_aplic_request(void *opaque, int irq, int level);
 
 #endif
diff --git a/hw/intc/riscv_aplic.c b/hw/intc/riscv_aplic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/riscv_aplic.c
+++ b/hw/intc/riscv_aplic.c
@@ -XXX,XX +XXX,XX @@
 #include "target/riscv/cpu.h"
 #include "sysemu/sysemu.h"
 #include "sysemu/kvm.h"
+#include "kvm_riscv.h"
 #include "migration/vmstate.h"
 
 #define APLIC_MAX_IDC (1UL << 14)
@@ -XXX,XX +XXX,XX @@ static uint32_t riscv_aplic_idc_claimi(RISCVAPLICState *aplic, uint32_t idc)
     return topi;
 }
 
-static void riscv_kvm_aplic_request(void *opaque, int irq, int level)
-{
-    kvm_set_irq(kvm_state, irq, !!level);
-}
-
 static void riscv_aplic_request(void *opaque, int irq, int level)
 {
     bool update = false;
@@ -XXX,XX +XXX,XX @@ static void riscv_aplic_realize(DeviceState *dev, Error **errp)
      * have IRQ lines delegated by their parent APLIC.
      */
     if (!aplic->parent) {
-        if (is_kvm_aia(aplic->msimode)) {
+        if (kvm_enabled() && is_kvm_aia(aplic->msimode)) {
             qdev_init_gpio_in(dev, riscv_kvm_aplic_request, aplic->num_irqs);
         } else {
             qdev_init_gpio_in(dev, riscv_aplic_request, aplic->num_irqs);
diff --git a/target/riscv/kvm.c b/target/riscv/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/kvm.c
+++ b/target/riscv/kvm.c
@@ -XXX,XX +XXX,XX @@
 #include "sysemu/runstate.h"
 #include "hw/riscv/numa.h"
 
+void riscv_kvm_aplic_request(void *opaque, int irq, int level)
+{
+    kvm_set_irq(kvm_state, irq, !!level);
+}
+
 static uint64_t kvm_riscv_reg_id(CPURISCVState *env, uint64_t type,
                                  uint64_t idx)
 {
--
2.41.0

From: Anup Patel <anup.patel@wdc.com>

The AIA specification adds interrupt filtering support for M-mode
and HS-mode. Using AIA interrupt filtering, M-mode and HS-mode can
take local interrupt 13 or above and selectively inject the same local
interrupt to lower privilege modes.

At the moment, we don't have any local interrupts above 12 so we
add a dummy implementation (i.e. read zero and ignore write) of the AIA
interrupt filtering CSRs.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-13-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/csr.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static RISCVException any32(CPURISCVState *env, int csrno)
 
+static int aia_any(CPURISCVState *env, int csrno)
+{
+    if (!riscv_feature(env, RISCV_FEATURE_AIA)) {
+        return RISCV_EXCP_ILLEGAL_INST;
+    }
+
+    return any(env, csrno);
+}
+
 static int aia_any32(CPURISCVState *env, int csrno)
 {
     if (!riscv_feature(env, RISCV_FEATURE_AIA)) {
@@ -XXX,XX +XXX,XX @@ static RISCVException read_zero(CPURISCVState *env, int csrno,
     return RISCV_EXCP_NONE;
 }
 
+static RISCVException write_ignore(CPURISCVState *env, int csrno,
+                                   target_ulong val)
+{
+    return RISCV_EXCP_NONE;
+}
+
 static RISCVException read_mhartid(CPURISCVState *env, int csrno,
                                    target_ulong *val)
 {
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_MTVAL] = { "mtval", any, read_mtval, write_mtval },
57
[CSR_MIP] = { "mip", any, NULL, NULL, rmw_mip },
58
59
+ /* Virtual Interrupts for Supervisor Level (AIA) */
60
+ [CSR_MVIEN] = { "mvien", aia_any, read_zero, write_ignore },
61
+ [CSR_MVIP] = { "mvip", aia_any, read_zero, write_ignore },
62
+
63
/* Machine-Level High-Half CSRs (AIA) */
64
[CSR_MIDELEGH] = { "midelegh", aia_any32, NULL, NULL, rmw_midelegh },
65
[CSR_MIEH] = { "mieh", aia_any32, NULL, NULL, rmw_mieh },
66
+ [CSR_MVIENH] = { "mvienh", aia_any32, read_zero, write_ignore },
67
+ [CSR_MVIPH] = { "mviph", aia_any32, read_zero, write_ignore },
68
[CSR_MIPH] = { "miph", aia_any32, NULL, NULL, rmw_miph },
69
70
/* Supervisor Trap Setup */
71
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
72
[CSR_MTINST] = { "mtinst", hmode, read_mtinst, write_mtinst },
73
74
/* Virtual Interrupts and Interrupt Priorities (H-extension with AIA) */
75
+ [CSR_HVIEN] = { "hvien", aia_hmode, read_zero, write_ignore },
76
[CSR_HVICTL] = { "hvictl", aia_hmode, read_hvictl, write_hvictl },
77
[CSR_HVIPRIO1] = { "hviprio1", aia_hmode, read_hviprio1, write_hviprio1 },
78
[CSR_HVIPRIO2] = { "hviprio2", aia_hmode, read_hviprio2, write_hviprio2 },
79
80
/* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */
81
[CSR_HIDELEGH] = { "hidelegh", aia_hmode32, NULL, NULL, rmw_hidelegh },
82
+ [CSR_HVIENH] = { "hvienh", aia_hmode32, read_zero, write_ignore },
83
[CSR_HVIPH] = { "hviph", aia_hmode32, NULL, NULL, rmw_hviph },
84
[CSR_HVIPRIO1H] = { "hviprio1h", aia_hmode32, read_hviprio1h, write_hviprio1h },
85
[CSR_HVIPRIO2H] = { "hviprio2h", aia_hmode32, read_hviprio2h, write_hviprio2h },
86
--
102
--
87
2.34.1
103
2.41.0
88
104
89
105
diff view generated by jsdifflib
From: Robbin Ehn <rehn@rivosinc.com>

This patch adds the extensions that are new in Linux 6.5 to the
hwprobe syscall.

It also fixes the RVC check to OR in the correct value: the previous
expression ORed in pair->value, which happened to contain 0, so the
check only worked by accident.

Signed-off-by: Robbin Ehn <rehn@rivosinc.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <bc82203b72d7efb30f1b4a8f9eb3d94699799dc8.camel@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 linux-user/syscall.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -XXX,XX +XXX,XX @@ static int do_getdents64(abi_long dirfd, abi_long arg2, abi_long count)
 #define RISCV_HWPROBE_KEY_IMA_EXT_0 4
 #define RISCV_HWPROBE_IMA_FD (1 << 0)
 #define RISCV_HWPROBE_IMA_C (1 << 1)
+#define RISCV_HWPROBE_IMA_V (1 << 2)
+#define RISCV_HWPROBE_EXT_ZBA (1 << 3)
+#define RISCV_HWPROBE_EXT_ZBB (1 << 4)
+#define RISCV_HWPROBE_EXT_ZBS (1 << 5)

 #define RISCV_HWPROBE_KEY_CPUPERF_0 5
 #define RISCV_HWPROBE_MISALIGNED_UNKNOWN (0 << 0)
@@ -XXX,XX +XXX,XX @@ static void risc_hwprobe_fill_pairs(CPURISCVState *env,
                 riscv_has_ext(env, RVD) ?
                 RISCV_HWPROBE_IMA_FD : 0;
         value |= riscv_has_ext(env, RVC) ?
-                 RISCV_HWPROBE_IMA_C : pair->value;
+                 RISCV_HWPROBE_IMA_C : 0;
+        value |= riscv_has_ext(env, RVV) ?
+                 RISCV_HWPROBE_IMA_V : 0;
+        value |= cfg->ext_zba ?
+                 RISCV_HWPROBE_EXT_ZBA : 0;
+        value |= cfg->ext_zbb ?
+                 RISCV_HWPROBE_EXT_ZBB : 0;
+        value |= cfg->ext_zbs ?
+                 RISCV_HWPROBE_EXT_ZBS : 0;
         __put_user(value, &pair->value);
         break;
     case RISCV_HWPROBE_KEY_CPUPERF_0:
--
2.41.0

From: Ard Biesheuvel <ardb@kernel.org>

Use the accelerated SubBytes/ShiftRows/AddRoundKey AES helper to
implement the first half of the key schedule derivation. This does not
actually involve shifting rows, so clone the same value into all four
columns of the AES vector to counter that operation.

Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Philippe Mathieu-Daudé <philmd@linaro.org>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20230831154118.138727-1-ardb@kernel.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/crypto_helper.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/target/riscv/crypto_helper.c b/target/riscv/crypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/crypto_helper.c
+++ b/target/riscv/crypto_helper.c
@@ -XXX,XX +XXX,XX @@ target_ulong HELPER(aes64ks1i)(target_ulong rs1, target_ulong rnum)

     uint8_t enc_rnum = rnum;
     uint32_t temp = (RS1 >> 32) & 0xFFFFFFFF;
-    uint8_t rcon_ = 0;
-    target_ulong result;
+    AESState t, rc = {};

     if (enc_rnum != 0xA) {
         temp = ror32(temp, 8); /* Rotate right by 8 */
-        rcon_ = round_consts[enc_rnum];
+        rc.w[0] = rc.w[1] = round_consts[enc_rnum];
     }

-    temp = ((uint32_t)AES_sbox[(temp >> 24) & 0xFF] << 24) |
-           ((uint32_t)AES_sbox[(temp >> 16) & 0xFF] << 16) |
-           ((uint32_t)AES_sbox[(temp >> 8) & 0xFF] << 8) |
-           ((uint32_t)AES_sbox[(temp >> 0) & 0xFF] << 0);
+    t.w[0] = t.w[1] = t.w[2] = t.w[3] = temp;
+    aesenc_SB_SR_AK(&t, &t, &rc, false);

-    temp ^= rcon_;
-
-    result = ((uint64_t)temp << 32) | temp;
-
-    return result;
+    return t.d[0];
 }

 target_ulong HELPER(aes64im)(target_ulong rs1)
--
2.41.0

From: Akihiko Odaki <akihiko.odaki@daynix.com>

riscv_trigger_init() had been called on reset events that can happen
several times for a CPU and it allocated timers for itrigger. If old
timers were present, they were simply overwritten by the new timers,
resulting in a memory leak.

Divide riscv_trigger_init() into two functions, namely
riscv_trigger_realize() and riscv_trigger_reset() and call them in
appropriate timing. The timer allocation will happen only once for a
CPU in riscv_trigger_realize().

Fixes: 5a4ae64cac ("target/riscv: Add itrigger support when icount is enabled")
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20230818034059.9146-1-akihiko.odaki@daynix.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/debug.h |  3 ++-
 target/riscv/cpu.c   |  8 +++++++-
 target/riscv/debug.c | 15 ++++++++++++---
 3 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/target/riscv/debug.h b/target/riscv/debug.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/debug.h
+++ b/target/riscv/debug.h
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_debug_excp_handler(CPUState *cs);
 bool riscv_cpu_debug_check_breakpoint(CPUState *cs);
 bool riscv_cpu_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp);

-void riscv_trigger_init(CPURISCVState *env);
+void riscv_trigger_realize(CPURISCVState *env);
+void riscv_trigger_reset_hold(CPURISCVState *env);

 bool riscv_itrigger_enabled(CPURISCVState *env);
 void riscv_itrigger_update_priv(CPURISCVState *env);
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset_hold(Object *obj)

 #ifndef CONFIG_USER_ONLY
     if (cpu->cfg.debug) {
-        riscv_trigger_init(env);
+        riscv_trigger_reset_hold(env);
     }

     if (kvm_enabled()) {
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)

     riscv_cpu_register_gdb_regs_for_features(cs);

+#ifndef CONFIG_USER_ONLY
+    if (cpu->cfg.debug) {
+        riscv_trigger_realize(&cpu->env);
+    }
+#endif
+
     qemu_init_vcpu(cs);
     cpu_reset(cs);

diff --git a/target/riscv/debug.c b/target/riscv/debug.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/debug.c
+++ b/target/riscv/debug.c
@@ -XXX,XX +XXX,XX @@ bool riscv_cpu_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp)
     return false;
 }

-void riscv_trigger_init(CPURISCVState *env)
+void riscv_trigger_realize(CPURISCVState *env)
+{
+    int i;
+
+    for (i = 0; i < RV_MAX_TRIGGERS; i++) {
+        env->itrigger_timer[i] = timer_new_ns(QEMU_CLOCK_VIRTUAL,
+                                              riscv_itrigger_timer_cb, env);
+    }
+}
+
+void riscv_trigger_reset_hold(CPURISCVState *env)
 {
     target_ulong tdata1 = build_tdata1(env, TRIGGER_TYPE_AD_MATCH, 0, 0);
     int i;
@@ -XXX,XX +XXX,XX @@ void riscv_trigger_init(CPURISCVState *env)
         env->tdata3[i] = 0;
         env->cpu_breakpoint[i] = NULL;
         env->cpu_watchpoint[i] = NULL;
-        env->itrigger_timer[i] = timer_new_ns(QEMU_CLOCK_VIRTUAL,
-                                              riscv_itrigger_timer_cb, env);
+        timer_del(env->itrigger_timer[i]);
     }
 }
--
2.41.0

From: Leon Schuermann <leons@opentitan.org>

When the rule-lock bypass (RLB) bit is set in the mseccfg CSR, the PMP
configuration lock bits must not apply. While this behavior is
implemented for the pmpcfgX CSRs, this bit is not respected for
changes to the pmpaddrX CSRs. This patch ensures that pmpaddrX CSR
writes work even on locked regions when the global rule-lock bypass is
enabled.

Signed-off-by: Leon Schuermann <leons@opentitan.org>
Reviewed-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20230829215046.1430463-1-leon@is.currently.online>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/pmp.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -XXX,XX +XXX,XX @@ static inline uint8_t pmp_get_a_field(uint8_t cfg)
  */
 static inline int pmp_is_locked(CPURISCVState *env, uint32_t pmp_index)
 {
+    /* mseccfg.RLB is set */
+    if (MSECCFG_RLB_ISSET(env)) {
+        return 0;
+    }

     if (env->pmp_state.pmp[pmp_index].cfg_reg & PMP_LOCK) {
         return 1;
--
2.41.0

From: Tommy Wu <tommy.wu@sifive.com>

According to the new spec, when vsiselect has a reserved value, attempts
from M-mode or HS-mode to access vsireg, or from VS-mode to access
sireg, should preferably raise an illegal instruction exception.

Signed-off-by: Tommy Wu <tommy.wu@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-ID: <20230816061647.600672-1-tommy.wu@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/csr.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static int rmw_iprio(target_ulong xlen,
 static int rmw_xireg(CPURISCVState *env, int csrno, target_ulong *val,
                      target_ulong new_val, target_ulong wr_mask)
 {
-    bool virt;
+    bool virt, isel_reserved;
     uint8_t *iprio;
     int ret = -EINVAL;
     target_ulong priv, isel, vgein;
@@ -XXX,XX +XXX,XX @@ static int rmw_xireg(CPURISCVState *env, int csrno, target_ulong *val,

     /* Decode register details from CSR number */
     virt = false;
+    isel_reserved = false;
     switch (csrno) {
     case CSR_MIREG:
         iprio = env->miprio;
@@ -XXX,XX +XXX,XX @@ static int rmw_xireg(CPURISCVState *env, int csrno, target_ulong *val,
                            riscv_cpu_mxl_bits(env)),
                            val, new_val, wr_mask);
+    } else {
+        isel_reserved = true;
     }

 done:
     if (ret) {
-        return (env->virt_enabled && virt) ?
+        return (env->virt_enabled && virt && !isel_reserved) ?
                RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
     }
     return RISCV_EXCP_NONE;
--
2.41.0

From: Anup Patel <anup.patel@wdc.com>

We should be returning illegal instruction trap when RV64 HS-mode tries
to access RV32 HS-mode CSR.

Fixes: d6f20dacea51 ("target/riscv: Fix 32-bit HS mode access permissions")
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-2-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/csr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static RISCVException hmode(CPURISCVState *env, int csrno)
 static RISCVException hmode32(CPURISCVState *env, int csrno)
 {
     if (riscv_cpu_mxl(env) != MXL_RV32) {
-        if (riscv_cpu_virt_enabled(env)) {
+        if (!riscv_cpu_virt_enabled(env)) {
             return RISCV_EXCP_ILLEGAL_INST;
         } else {
             return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
--
2.34.1

From: Nikita Shubin <n.shubin@yadro.com>

As per ISA:

"For CSRRWI, if rd=x0, then the instruction shall not read the CSR and
shall not cause any of the side effects that might occur on a CSR read."

trans_csrrwi() and trans_csrrw() call do_csrw() if rd=x0, do_csrw() calls
riscv_csrrw_do64(), via helper_csrw() passing NULL as *ret_value.

Signed-off-by: Nikita Shubin <n.shubin@yadro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20230808090914.17634-1-nikita.shubin@maquefel.me>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/csr.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static RISCVException riscv_csrrw_do64(CPURISCVState *env, int csrno,
                                        target_ulong write_mask)
 {
     RISCVException ret;
-    target_ulong old_value;
+    target_ulong old_value = 0;

     /* execute combined read/write operation if it exists */
     if (csr_ops[csrno].op) {
         return csr_ops[csrno].op(env, csrno, ret_value, new_value, write_mask);
     }

-    /* if no accessor exists then return failure */
-    if (!csr_ops[csrno].read) {
-        return RISCV_EXCP_ILLEGAL_INST;
-    }
-    /* read old value */
-    ret = csr_ops[csrno].read(env, csrno, &old_value);
-    if (ret != RISCV_EXCP_NONE) {
-        return ret;
+    /*
+     * ret_value == NULL means that rd=x0 and we're coming from helper_csrw()
+     * and we can't throw side effects caused by CSR reads.
+     */
+    if (ret_value) {
+        /* if no accessor exists then return failure */
+        if (!csr_ops[csrno].read) {
+            return RISCV_EXCP_ILLEGAL_INST;
+        }
+        /* read old value */
+        ret = csr_ops[csrno].read(env, csrno, &old_value);
+        if (ret != RISCV_EXCP_NONE) {
+            return ret;
+        }
     }

     /* write value if writable and write mask set, otherwise drop writes */
--
2.41.0