From: Alistair Francis <alistair.francis@wdc.com>

The following changes since commit c5ea91da443b458352c1b629b490ee6631775cb4:

  Merge tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu into staging (2023-09-08 10:06:25 -0400)

are available in the Git repository at:

  https://github.com/alistair23/qemu.git tags/pull-riscv-to-apply-20230911

for you to fetch changes up to e7a03409f29e2da59297d55afbaec98c96e43e3a:

  target/riscv: don't read CSR in riscv_csrrw_do64 (2023-09-11 11:45:55 +1000)

----------------------------------------------------------------
First RISC-V PR for 8.2

 * Remove 'host' CPU from TCG
 * riscv_htif Fixup printing on big endian hosts
 * Add zmmul isa string
 * Add smepmp isa string
 * Fix page_check_range use in fault-only-first
 * Use existing lookup tables for MixColumns
 * Add RISC-V vector cryptographic instruction set support
 * Implement WARL behaviour for mcountinhibit/mcounteren
 * Add Zihintntl extension ISA string to DTS
 * Fix zfa fleq.d and fltq.d
 * Fix upper/lower mtime write calculation
 * Make rtc variable names consistent
 * Use abi type for linux-user target_ucontext
 * Add RISC-V KVM AIA Support
 * Fix riscv,pmu DT node path in the virt machine
 * Update CSR bits name for svadu extension
 * Mark zicond non-experimental
 * Fix satp_mode_finalize() when satp_mode.supported = 0
 * Fix non-KVM --enable-debug build
 * Add new extensions to hwprobe
 * Use accelerated helper for AES64KS1I
 * Allocate itrigger timers only once
 * Respect mseccfg.RLB for pmpaddrX changes
 * Align the AIA model to v1.0 ratified spec
 * Don't read the CSR in riscv_csrrw_do64

----------------------------------------------------------------
Akihiko Odaki (1):
      target/riscv: Allocate itrigger timers only once

Ard Biesheuvel (2):
      target/riscv: Use existing lookup tables for MixColumns
      target/riscv: Use accelerated helper for AES64KS1I

Conor Dooley (1):
      hw/riscv: virt: Fix riscv,pmu DT node path

Daniel Henrique Barboza (6):
      target/riscv/cpu.c: do not run 'host' CPU with TCG
      target/riscv/cpu.c: add zmmul isa string
      target/riscv/cpu.c: add smepmp isa string
      target/riscv: fix satp_mode_finalize() when satp_mode.supported = 0
      hw/riscv/virt.c: fix non-KVM --enable-debug build
      hw/intc/riscv_aplic.c fix non-KVM --enable-debug build

Dickon Hood (2):
      target/riscv: Refactor translation of vector-widening instruction
      target/riscv: Add Zvbb ISA extension support

Jason Chien (3):
      target/riscv: Add Zihintntl extension ISA string to DTS
      hw/intc: Fix upper/lower mtime write calculation
      hw/intc: Make rtc variable names consistent

Kiran Ostrolenk (4):
      target/riscv: Refactor some of the generic vector functionality
      target/riscv: Refactor vector-vector translation macro
      target/riscv: Refactor some of the generic vector functionality
      target/riscv: Add Zvknh ISA extension support

LIU Zhiwei (3):
      target/riscv: Fix page_check_range use in fault-only-first
      target/riscv: Fix zfa fleq.d and fltq.d
      linux-user/riscv: Use abi type for target_ucontext

Lawrence Hunter (2):
      target/riscv: Add Zvbc ISA extension support
      target/riscv: Add Zvksh ISA extension support

Leon Schuermann (1):
      target/riscv/pmp.c: respect mseccfg.RLB for pmpaddrX changes

Max Chou (3):
      crypto: Create sm4_subword
      crypto: Add SM4 constant parameter CK
      target/riscv: Add Zvksed ISA extension support

Nazar Kazakov (4):
      target/riscv: Remove redundant "cpu_vl == 0" checks
      target/riscv: Move vector translation checks
      target/riscv: Add Zvkned ISA extension support
      target/riscv: Add Zvkg ISA extension support

Nikita Shubin (1):
      target/riscv: don't read CSR in riscv_csrrw_do64

Rob Bradford (1):
      target/riscv: Implement WARL behaviour for mcountinhibit/mcounteren

Robbin Ehn (1):
      linux-user/riscv: Add new extensions to hwprobe

Thomas Huth (2):
      hw/char/riscv_htif: Fix printing of console characters on big endian hosts
      hw/char/riscv_htif: Fix the console syscall on big endian hosts

Tommy Wu (1):
      target/riscv: Align the AIA model to v1.0 ratified spec

Vineet Gupta (1):
      riscv: zicond: make non-experimental

Weiwei Li (1):
      target/riscv: Update CSR bits name for svadu extension

Yong-Xuan Wang (5):
      target/riscv: support the AIA device emulation with KVM enabled
      target/riscv: check the in-kernel irqchip support
      target/riscv: Create an KVM AIA irqchip
      target/riscv: update APLIC and IMSIC to support KVM AIA
      target/riscv: select KVM AIA in riscv virt machine

 include/crypto/aes.h | 7 +
 include/crypto/sm4.h | 9 +
 target/riscv/cpu_bits.h | 8 +-
 target/riscv/cpu_cfg.h | 9 +
 target/riscv/debug.h | 3 +-
 target/riscv/helper.h | 98 +++
 target/riscv/kvm_riscv.h | 5 +
 target/riscv/vector_internals.h | 228 +++++++
 target/riscv/insn32.decode | 58 ++
 crypto/aes.c | 4 +-
 crypto/sm4.c | 10 +
 hw/char/riscv_htif.c | 12 +-
 hw/intc/riscv_aclint.c | 11 +-
 hw/intc/riscv_aplic.c | 52 +-
 hw/intc/riscv_imsic.c | 25 +-
 hw/riscv/virt.c | 374 ++++++------
 linux-user/riscv/signal.c | 4 +-
 linux-user/syscall.c | 14 +-
 target/arm/tcg/crypto_helper.c | 10 +-
 target/riscv/cpu.c | 83 ++-
 target/riscv/cpu_helper.c | 6 +-
 target/riscv/crypto_helper.c | 51 +-
 target/riscv/csr.c | 54 +-
 target/riscv/debug.c | 15 +-
 target/riscv/kvm.c | 201 ++++++-
 target/riscv/pmp.c | 4 +
 target/riscv/translate.c | 1 +
 target/riscv/vcrypto_helper.c | 970 ++++++++++++++++++++++++++++++
 target/riscv/vector_helper.c | 245 +-------
 target/riscv/vector_internals.c | 81 +++
 target/riscv/insn_trans/trans_rvv.c.inc | 171 +++---
 target/riscv/insn_trans/trans_rvvk.c.inc | 606 +++++++++++++++++++
 target/riscv/insn_trans/trans_rvzfa.c.inc | 4 +-
 target/riscv/meson.build | 4 +-
 34 files changed, 2785 insertions(+), 652 deletions(-)
 create mode 100644 target/riscv/vector_internals.h
 create mode 100644 target/riscv/vcrypto_helper.c
 create mode 100644 target/riscv/vector_internals.c
 create mode 100644 target/riscv/insn_trans/trans_rvvk.c.inc
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

The 'host' CPU is available in a CONFIG_KVM build and it's currently
available for all accels, but is a KVM only CPU. This means that in a
RISC-V KVM capable host we can do things like this:

$ ./build/qemu-system-riscv64 -M virt,accel=tcg -cpu host --nographic
qemu-system-riscv64: H extension requires priv spec 1.12.0

This CPU does not have a priv spec because we don't filter its extensions
via priv spec. We shouldn't be reaching riscv_cpu_realize_tcg() at all
with the 'host' CPU.

We don't have a way to filter the 'host' CPU out of the available CPU
options (-cpu help) if the build includes both KVM and TCG. What we can
do is to error out during riscv_cpu_realize_tcg() if the user chooses
the 'host' CPU with accel=tcg:

$ ./build/qemu-system-riscv64 -M virt,accel=tcg -cpu host --nographic
qemu-system-riscv64: 'host' CPU is not compatible with TCG acceleration

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230721133411.474105-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_realize_tcg(DeviceState *dev, Error **errp)
     CPURISCVState *env = &cpu->env;
     Error *local_err = NULL;
 
+    if (object_dynamic_cast(OBJECT(dev), TYPE_RISCV_CPU_HOST)) {
+        error_setg(errp, "'host' CPU is not compatible with TCG acceleration");
+        return;
+    }
+
     riscv_cpu_validate_misa_mxl(cpu, &local_err);
     if (local_err != NULL) {
         error_propagate(errp, local_err);
--
2.41.0
From: Thomas Huth <thuth@redhat.com>

The character that should be printed is stored in the 64 bit "payload"
variable. The code currently tries to print it by taking the address
of the variable and passing this pointer to qemu_chr_fe_write(). However,
this only works on little endian hosts where the least significant bits
are stored on the lowest address. To do this in a portable way, we have
to store the value in a uint8_t variable instead.

Fixes: 5033606780 ("RISC-V HTIF Console")
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230721094720.902454-2-thuth@redhat.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/char/riscv_htif.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/char/riscv_htif.c b/hw/char/riscv_htif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/riscv_htif.c
+++ b/hw/char/riscv_htif.c
@@ -XXX,XX +XXX,XX @@ static void htif_handle_tohost_write(HTIFState *s, uint64_t val_written)
             s->tohost = 0; /* clear to indicate we read */
             return;
         } else if (cmd == HTIF_CONSOLE_CMD_PUTC) {
-            qemu_chr_fe_write(&s->chr, (uint8_t *)&payload, 1);
+            uint8_t ch = (uint8_t)payload;
+            qemu_chr_fe_write(&s->chr, &ch, 1);
             resp = 0x100 | (uint8_t)payload;
         } else {
             qemu_log("HTIF device %d: unknown command\n", device);
--
2.41.0
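To illustrate the portability issue fixed above outside of QEMU, here is a minimal, self-contained sketch (plain C, not QEMU code): on a big endian host the first byte in memory of a uint64_t is its most significant byte, so handing `(uint8_t *)&value` to a byte-oriented writer sends the wrong byte, while explicitly taking the low 8 bits works regardless of host byte order.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint64_t payload = 0x41; /* ASCII 'A' in the low byte */

    /* Non-portable: reads whatever byte happens to be stored first.
     * Little endian hosts see 0x41 here, big endian hosts see 0x00. */
    uint8_t first_byte;
    memcpy(&first_byte, &payload, 1);

    /* Portable: take the low 8 bits of the value, independent of layout. */
    uint8_t ch = (uint8_t)payload;

    printf("first byte in memory: 0x%02x, low byte of value: 0x%02x\n",
           first_byte, ch);
    return 0;
}
```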
From: Thomas Huth <thuth@redhat.com>

Values that have been read via cpu_physical_memory_read() from the
guest's memory have to be swapped in case the host endianness differs
from the guest.

Fixes: a6e13e31d5 ("riscv_htif: Support console output via proxy syscall")
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-Id: <20230721094720.902454-3-thuth@redhat.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/char/riscv_htif.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/hw/char/riscv_htif.c b/hw/char/riscv_htif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/riscv_htif.c
+++ b/hw/char/riscv_htif.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/timer.h"
 #include "qemu/error-report.h"
 #include "exec/address-spaces.h"
+#include "exec/tswap.h"
 #include "sysemu/dma.h"
 
 #define RISCV_DEBUG_HTIF 0
@@ -XXX,XX +XXX,XX @@ static void htif_handle_tohost_write(HTIFState *s, uint64_t val_written)
         } else {
             uint64_t syscall[8];
             cpu_physical_memory_read(payload, syscall, sizeof(syscall));
-            if (syscall[0] == PK_SYS_WRITE &&
-                syscall[1] == HTIF_DEV_CONSOLE &&
-                syscall[3] == HTIF_CONSOLE_CMD_PUTC) {
+            if (tswap64(syscall[0]) == PK_SYS_WRITE &&
+                tswap64(syscall[1]) == HTIF_DEV_CONSOLE &&
+                tswap64(syscall[3]) == HTIF_CONSOLE_CMD_PUTC) {
                 uint8_t ch;
-                cpu_physical_memory_read(syscall[2], &ch, 1);
+                cpu_physical_memory_read(tswap64(syscall[2]), &ch, 1);
                 qemu_chr_fe_write(&s->chr, &ch, 1);
                 resp = 0x100 | (uint8_t)payload;
             } else {
--
2.41.0
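The idea behind the fix above can be shown without any QEMU APIs: a multi-byte value written by a guest of one endianness and read back verbatim on a host of the other arrives byte-reversed, so it must be swapped before it is compared or used. A minimal sketch under that assumption (plain C, not QEMU code; the local `bswap64()` below stands in for the swap that QEMU's tswap64() applies when host and guest byte order differ):

```c
#include <stdint.h>
#include <stdio.h>

/* Portable 64-bit byte swap, standing in for a bswap helper. */
static uint64_t bswap64(uint64_t v)
{
    return ((v & 0x00000000000000ffULL) << 56) |
           ((v & 0x000000000000ff00ULL) << 40) |
           ((v & 0x0000000000ff0000ULL) << 24) |
           ((v & 0x00000000ff000000ULL) <<  8) |
           ((v & 0x000000ff00000000ULL) >>  8) |
           ((v & 0x0000ff0000000000ULL) >> 24) |
           ((v & 0x00ff000000000000ULL) >> 40) |
           ((v & 0xff00000000000000ULL) >> 56);
}

int main(void)
{
    /* Pretend this was copied verbatim from guest memory on a host whose
     * endianness differs from the guest: the bytes arrive reversed. */
    uint64_t raw = bswap64(64); /* the guest stored the value 64 */

    printf("raw == 64?     %s\n", raw == 64 ? "yes" : "no");          /* no  */
    printf("swapped == 64? %s\n", bswap64(raw) == 64 ? "yes" : "no"); /* yes */
    return 0;
}
```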
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

zmmul was promoted from experimental to ratified in commit 6d00ffad4e95.
Add a riscv,isa string for it.

Fixes: 6d00ffad4e95 ("target/riscv: move zmmul out of the experimental properties")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230720132424.371132-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
     ISA_EXT_DATA_ENTRY(zicsr, PRIV_VERSION_1_10_0, ext_icsr),
     ISA_EXT_DATA_ENTRY(zifencei, PRIV_VERSION_1_10_0, ext_ifencei),
     ISA_EXT_DATA_ENTRY(zihintpause, PRIV_VERSION_1_10_0, ext_zihintpause),
+    ISA_EXT_DATA_ENTRY(zmmul, PRIV_VERSION_1_12_0, ext_zmmul),
     ISA_EXT_DATA_ENTRY(zawrs, PRIV_VERSION_1_12_0, ext_zawrs),
     ISA_EXT_DATA_ENTRY(zfa, PRIV_VERSION_1_12_0, ext_zfa),
     ISA_EXT_DATA_ENTRY(zfbfmin, PRIV_VERSION_1_12_0, ext_zfbfmin),
--
2.41.0
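As background for this patch and the next one: the riscv,isa string is assembled from a table that ties each multi-letter extension name to the CPU flag that enables it (the ISA_EXT_DATA_ENTRY() lines touched above), and every enabled entry is appended to the base string. A rough, standalone sketch of that idea, with made-up names and only two entries for illustration (not the QEMU implementation):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for the per-CPU configuration flags. */
struct cpu_cfg {
    bool ext_zmmul;
    bool ext_smepmp;
};

struct isa_ext {
    const char *name;
    bool (*enabled)(const struct cpu_cfg *);
};

static bool has_zmmul(const struct cpu_cfg *c)  { return c->ext_zmmul; }
static bool has_smepmp(const struct cpu_cfg *c) { return c->ext_smepmp; }

static const struct isa_ext isa_table[] = {
    { "zmmul",  has_zmmul },
    { "smepmp", has_smepmp },
};

int main(void)
{
    struct cpu_cfg cfg = { .ext_zmmul = true, .ext_smepmp = true };
    char isa[128] = "rv64imafdc";

    /* Append "_<name>" for every extension this CPU actually enables. */
    for (size_t i = 0; i < sizeof(isa_table) / sizeof(isa_table[0]); i++) {
        if (isa_table[i].enabled(&cfg)) {
            strcat(isa, "_");
            strcat(isa, isa_table[i].name);
        }
    }
    printf("%s\n", isa); /* prints: rv64imafdc_zmmul_smepmp */
    return 0;
}
```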
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

The cpu->cfg.epmp extension is still experimental, but it already has a
'smepmp' riscv,isa string. Add it.

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230720132424.371132-3-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
     ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
     ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
     ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
+    ISA_EXT_DATA_ENTRY(smepmp, PRIV_VERSION_1_12_0, epmp),
     ISA_EXT_DATA_ENTRY(smstateen, PRIV_VERSION_1_12_0, ext_smstateen),
     ISA_EXT_DATA_ENTRY(ssaia, PRIV_VERSION_1_12_0, ext_ssaia),
     ISA_EXT_DATA_ENTRY(sscofpmf, PRIV_VERSION_1_12_0, ext_sscofpmf),
--
2.41.0
From: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>

Commit bef6f008b98 ("accel/tcg: Return bool from page_check_range") converts
the integer return value to a bool type. However, it wrongly converted the
use of the API in the riscv fault-only-first helper, where the old test
"page_check_range <= 0" should have been converted to "!page_check_range".

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20230729031618.821-1-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/vector_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -XXX,XX +XXX,XX @@ vext_ldff(void *vd, void *v0, target_ulong base,
                                          cpu_mmu_index(env, false));
         if (host) {
 #ifdef CONFIG_USER_ONLY
-            if (page_check_range(addr, offset, PAGE_READ)) {
+            if (!page_check_range(addr, offset, PAGE_READ)) {
                 vl = i;
                 goto ProbeSuccess;
             }
--
2.41.0
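The pitfall behind the fix above is a generic one: when a function changes from returning an int (where a non-positive result meant failure) to returning a bool (where true means success), each call site's failure test must become `!f(...)`; keeping the old shape silently inverts the logic. A tiny self-contained illustration with a hypothetical check_range() pair (not the QEMU API):

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical old-style API: a non-positive result means "not accessible". */
static int check_range_old(int addr) { return addr >= 0 ? 1 : -1; }

/* The same check after conversion to bool: true means "accessible". */
static bool check_range_new(int addr) { return addr >= 0; }

int main(void)
{
    int addr = -5; /* an inaccessible address in this toy model */

    /* Old failure test. */
    if (check_range_old(addr) <= 0) {
        puts("old API: fault detected");
    }

    /* Correct new failure test: negate the bool... */
    if (!check_range_new(addr)) {
        puts("new API: fault detected");
    }

    /* ...whereas keeping the old shape, if (check_range_new(addr)) { ... },
     * would treat success as the fault path and invert the behaviour. */
    return 0;
}
```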
From: Ard Biesheuvel <ardb@kernel.org>

The AES MixColumns and InvMixColumns operations are relatively
expensive 4x4 matrix multiplications in GF(2^8), which is why C
implementations usually rely on precomputed lookup tables rather than
performing the calculations on demand.

Given that we already carry those tables in QEMU, we can just grab the
right value in the implementation of the RISC-V AES32 instructions. Note
that the tables in question are permuted according to the respective
Sbox, so we can omit the Sbox lookup as well in this case.

Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Philippe Mathieu-Daudé <philmd@linaro.org>
Cc: Zewen Ye <lustrew@foxmail.com>
Cc: Weiwei Li <liweiwei@iscas.ac.cn>
Cc: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20230731084043.1791984-1-ardb@kernel.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 include/crypto/aes.h | 7 +++++++
 crypto/aes.c | 4 ++--
 target/riscv/crypto_helper.c | 34 ++++------------------------------
 3 files changed, 13 insertions(+), 32 deletions(-)

diff --git a/include/crypto/aes.h b/include/crypto/aes.h
index XXXXXXX..XXXXXXX 100644
--- a/include/crypto/aes.h
+++ b/include/crypto/aes.h
@@ -XXX,XX +XXX,XX @@ void AES_decrypt(const unsigned char *in, unsigned char *out,
 extern const uint8_t AES_sbox[256];
 extern const uint8_t AES_isbox[256];
 
+/*
+AES_Te0[x] = S [x].[02, 01, 01, 03];
+AES_Td0[x] = Si[x].[0e, 09, 0d, 0b];
+*/
+
+extern const uint32_t AES_Te0[256], AES_Td0[256];
+
 #endif
diff --git a/crypto/aes.c b/crypto/aes.c
index XXXXXXX..XXXXXXX 100644
--- a/crypto/aes.c
+++ b/crypto/aes.c
@@ -XXX,XX +XXX,XX @@ AES_Td3[x] = Si[x].[09, 0d, 0b, 0e];
 AES_Td4[x] = Si[x].[01, 01, 01, 01];
 */
 
-static const uint32_t AES_Te0[256] = {
+const uint32_t AES_Te0[256] = {
     0xc66363a5U, 0xf87c7c84U, 0xee777799U, 0xf67b7b8dU,
     0xfff2f20dU, 0xd66b6bbdU, 0xde6f6fb1U, 0x91c5c554U,
     0x60303050U, 0x02010103U, 0xce6767a9U, 0x562b2b7dU,
@@ -XXX,XX +XXX,XX @@ static const uint32_t AES_Te4[256] = {
     0xb0b0b0b0U, 0x54545454U, 0xbbbbbbbbU, 0x16161616U,
 };
 
-static const uint32_t AES_Td0[256] = {
+const uint32_t AES_Td0[256] = {
     0x51f4a750U, 0x7e416553U, 0x1a17a4c3U, 0x3a275e96U,
     0x3bab6bcbU, 0x1f9d45f1U, 0xacfa58abU, 0x4be30393U,
     0x2030fa55U, 0xad766df6U, 0x88cc7691U, 0xf5024c25U,
diff --git a/target/riscv/crypto_helper.c b/target/riscv/crypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/crypto_helper.c
+++ b/target/riscv/crypto_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "crypto/aes-round.h"
 #include "crypto/sm4.h"
 
-#define AES_XTIME(a) \
-    ((a << 1) ^ ((a & 0x80) ? 0x1b : 0))
-
-#define AES_GFMUL(a, b) (( \
-    (((b) & 0x1) ? (a) : 0) ^ \
-    (((b) & 0x2) ? AES_XTIME(a) : 0) ^ \
-    (((b) & 0x4) ? AES_XTIME(AES_XTIME(a)) : 0) ^ \
-    (((b) & 0x8) ? AES_XTIME(AES_XTIME(AES_XTIME(a))) : 0)) & 0xFF)
-
-static inline uint32_t aes_mixcolumn_byte(uint8_t x, bool fwd)
-{
-    uint32_t u;
-
-    if (fwd) {
-        u = (AES_GFMUL(x, 3) << 24) | (x << 16) | (x << 8) |
-            (AES_GFMUL(x, 2) << 0);
-    } else {
-        u = (AES_GFMUL(x, 0xb) << 24) | (AES_GFMUL(x, 0xd) << 16) |
-            (AES_GFMUL(x, 0x9) << 8) | (AES_GFMUL(x, 0xe) << 0);
-    }
-    return u;
-}
-
 #define sext32_xlen(x) (target_ulong)(int32_t)(x)
 
 static inline target_ulong aes32_operation(target_ulong shamt,
@@ -XXX,XX +XXX,XX @@ static inline target_ulong aes32_operation(target_ulong shamt,
                                            bool enc, bool mix)
 {
     uint8_t si = rs2 >> shamt;
-    uint8_t so;
     uint32_t mixed;
     target_ulong res;
 
     if (enc) {
-        so = AES_sbox[si];
         if (mix) {
-            mixed = aes_mixcolumn_byte(so, true);
+            mixed = be32_to_cpu(AES_Te0[si]);
         } else {
-            mixed = so;
+            mixed = AES_sbox[si];
         }
     } else {
-        so = AES_isbox[si];
         if (mix) {
-            mixed = aes_mixcolumn_byte(so, false);
+            mixed = be32_to_cpu(AES_Td0[si]);
         } else {
-            mixed = so;
+            mixed = AES_isbox[si];
         }
     }
     mixed = rol32(mixed, shamt);
--
2.41.0
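For readers unfamiliar with the tables involved in the patch above, here is a hedged, self-contained sketch of the trade-off it exploits: one column's worth of MixColumns for a single byte can either be computed on demand with GF(2^8) doublings (the xtime operation used by the removed AES_XTIME/AES_GFMUL macros) or fetched from a table precomputed once. The toy code below (not the QEMU implementation) builds such a table from the same arithmetic and shows that the lookup matches the direct computation; the real AES_Te0/AES_Td0 tables additionally fold in the S-box permutation, which is why the separate S-box lookup can be dropped as well.

```c
#include <stdint.h>
#include <stdio.h>

/* GF(2^8) doubling (the classic AES "xtime" operation). */
static uint8_t xtime(uint8_t a)
{
    return (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1b : 0x00));
}

/* Forward MixColumns column for one input byte x: bytes x*03, x, x, x*02. */
static uint32_t mixcolumn_byte(uint8_t x)
{
    uint8_t x2 = xtime(x);
    uint8_t x3 = (uint8_t)(x2 ^ x);
    return ((uint32_t)x3 << 24) | ((uint32_t)x << 16) |
           ((uint32_t)x << 8)  | (uint32_t)x2;
}

int main(void)
{
    /* Precompute a 256-entry table once... */
    uint32_t table[256];
    for (int i = 0; i < 256; i++) {
        table[i] = mixcolumn_byte((uint8_t)i);
    }

    /* ...then a single lookup replaces the per-byte GF(2^8) arithmetic. */
    uint8_t x = 0x57;
    printf("computed: %08x, table: %08x\n", mixcolumn_byte(x), table[x]);
    return 0;
}
```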
From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>

Take some functions/macros out of `vector_helper` and put them in a new
module called `vector_internals`. This ensures they can be used by both
vector and vector-crypto helpers (latter implemented in proceeding
commits).

Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20230711165917.2629866-2-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/vector_internals.h | 182 +++++++++++++++++++++++++++++
 target/riscv/vector_helper.c | 201 +-------------------------------
 target/riscv/vector_internals.c | 81 +++++++++++++
 target/riscv/meson.build | 1 +
 4 files changed, 265 insertions(+), 200 deletions(-)
 create mode 100644 target/riscv/vector_internals.h
 create mode 100644 target/riscv/vector_internals.c

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
23
diff --git a/target/riscv/vector_internals.h b/target/riscv/vector_internals.h
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/riscv/cpu.h
29
+++ b/target/riscv/cpu.h
30
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState CPURISCVState;
31
32
#if !defined(CONFIG_USER_ONLY)
33
#include "pmp.h"
34
+#include "debug.h"
35
#endif
36
37
#define RV_VLEN_MAX 1024
38
@@ -XXX,XX +XXX,XX @@ struct CPUArchState {
39
pmp_table_t pmp_state;
40
target_ulong mseccfg;
41
42
+ /* trigger module */
43
+ target_ulong trigger_cur;
44
+ type2_trigger_t type2_trig[TRIGGER_TYPE2_NUM];
45
+
46
/* machine specific rdtime callback */
47
uint64_t (*rdtime_fn)(uint32_t);
48
uint32_t rdtime_fn_arg;
49
diff --git a/target/riscv/debug.h b/target/riscv/debug.h
50
new file mode 100644
24
new file mode 100644
51
index XXXXXXX..XXXXXXX
25
index XXXXXXX..XXXXXXX
52
--- /dev/null
26
--- /dev/null
53
+++ b/target/riscv/debug.h
27
+++ b/target/riscv/vector_internals.h
54
@@ -XXX,XX +XXX,XX @@
28
@@ -XXX,XX +XXX,XX @@
55
+/*
29
+/*
56
+ * QEMU RISC-V Native Debug Support
30
+ * RISC-V Vector Extension Internals
57
+ *
31
+ *
58
+ * Copyright (c) 2022 Wind River Systems, Inc.
32
+ * Copyright (c) 2020 T-Head Semiconductor Co., Ltd. All rights reserved.
59
+ *
60
+ * Author:
61
+ * Bin Meng <bin.meng@windriver.com>
62
+ *
33
+ *
63
+ * This program is free software; you can redistribute it and/or modify it
34
+ * This program is free software; you can redistribute it and/or modify it
64
+ * under the terms and conditions of the GNU General Public License,
35
+ * under the terms and conditions of the GNU General Public License,
65
+ * version 2 or later, as published by the Free Software Foundation.
36
+ * version 2 or later, as published by the Free Software Foundation.
66
+ *
37
+ *
...
...
71
+ *
42
+ *
72
+ * You should have received a copy of the GNU General Public License along with
43
+ * You should have received a copy of the GNU General Public License along with
73
+ * this program. If not, see <http://www.gnu.org/licenses/>.
44
+ * this program. If not, see <http://www.gnu.org/licenses/>.
74
+ */
45
+ */
75
+
46
+
76
+#ifndef RISCV_DEBUG_H
47
+#ifndef TARGET_RISCV_VECTOR_INTERNALS_H
77
+#define RISCV_DEBUG_H
48
+#define TARGET_RISCV_VECTOR_INTERNALS_H
78
+
49
+
79
+/* trigger indexes implemented */
50
+#include "qemu/osdep.h"
80
+enum {
51
+#include "qemu/bitops.h"
81
+ TRIGGER_TYPE2_IDX_0 = 0,
52
+#include "cpu.h"
82
+ TRIGGER_TYPE2_IDX_1,
53
+#include "tcg/tcg-gvec-desc.h"
83
+ TRIGGER_TYPE2_NUM,
54
+#include "internals.h"
84
+ TRIGGER_NUM = TRIGGER_TYPE2_NUM
55
+
85
+};
56
+static inline uint32_t vext_nf(uint32_t desc)
86
+
57
+{
87
+/* register index of tdata CSRs */
58
+ return FIELD_EX32(simd_data(desc), VDATA, NF);
88
+enum {
59
+}
89
+ TDATA1 = 0,
60
+
90
+ TDATA2,
61
+/*
91
+ TDATA3,
62
+ * Note that vector data is stored in host-endian 64-bit chunks,
92
+ TDATA_NUM
63
+ * so addressing units smaller than that needs a host-endian fixup.
93
+};
64
+ */
94
+
65
+#if HOST_BIG_ENDIAN
95
+typedef enum {
66
+#define H1(x) ((x) ^ 7)
96
+ TRIGGER_TYPE_NO_EXIST = 0, /* trigger does not exist */
67
+#define H1_2(x) ((x) ^ 6)
97
+ TRIGGER_TYPE_AD_MATCH = 2, /* address/data match trigger */
68
+#define H1_4(x) ((x) ^ 4)
98
+ TRIGGER_TYPE_INST_CNT = 3, /* instruction count trigger */
69
+#define H2(x) ((x) ^ 3)
99
+ TRIGGER_TYPE_INT = 4, /* interrupt trigger */
70
+#define H4(x) ((x) ^ 1)
100
+ TRIGGER_TYPE_EXCP = 5, /* exception trigger */
71
+#define H8(x) ((x))
101
+ TRIGGER_TYPE_AD_MATCH6 = 6, /* new address/data match trigger */
72
+#else
102
+ TRIGGER_TYPE_EXT_SRC = 7, /* external source trigger */
73
+#define H1(x) (x)
103
+ TRIGGER_TYPE_UNAVAIL = 15 /* trigger exists, but unavailable */
74
+#define H1_2(x) (x)
104
+} trigger_type_t;
75
+#define H1_4(x) (x)
105
+
76
+#define H2(x) (x)
106
+typedef struct {
77
+#define H4(x) (x)
107
+ target_ulong mcontrol;
78
+#define H8(x) (x)
108
+ target_ulong maddress;
79
+#endif
109
+ struct CPUBreakpoint *bp;
80
+
110
+ struct CPUWatchpoint *wp;
81
+/*
111
+} type2_trigger_t;
82
+ * Encode LMUL to lmul as following:
112
+
83
+ * LMUL vlmul lmul
113
+/* tdata field masks */
84
+ * 1 000 0
114
+
85
+ * 2 001 1
115
+#define RV32_TYPE(t) ((uint32_t)(t) << 28)
86
+ * 4 010 2
116
+#define RV32_TYPE_MASK (0xf << 28)
87
+ * 8 011 3
117
+#define RV32_DMODE BIT(27)
88
+ * - 100 -
118
+#define RV64_TYPE(t) ((uint64_t)(t) << 60)
89
+ * 1/8 101 -3
119
+#define RV64_TYPE_MASK (0xfULL << 60)
90
+ * 1/4 110 -2
120
+#define RV64_DMODE BIT_ULL(59)
91
+ * 1/2 111 -1
121
+
92
+ */
122
+/* mcontrol field masks */
93
+static inline int32_t vext_lmul(uint32_t desc)
123
+
94
+{
124
+#define TYPE2_LOAD BIT(0)
95
+ return sextract32(FIELD_EX32(simd_data(desc), VDATA, LMUL), 0, 3);
125
+#define TYPE2_STORE BIT(1)
96
+}
126
+#define TYPE2_EXEC BIT(2)
97
+
127
+#define TYPE2_U BIT(3)
98
+static inline uint32_t vext_vm(uint32_t desc)
128
+#define TYPE2_S BIT(4)
99
+{
129
+#define TYPE2_M BIT(6)
100
+ return FIELD_EX32(simd_data(desc), VDATA, VM);
130
+#define TYPE2_MATCH (0xf << 7)
101
+}
131
+#define TYPE2_CHAIN BIT(11)
102
+
132
+#define TYPE2_ACTION (0xf << 12)
103
+static inline uint32_t vext_vma(uint32_t desc)
133
+#define TYPE2_SIZELO (0x3 << 16)
104
+{
134
+#define TYPE2_TIMING BIT(18)
105
+ return FIELD_EX32(simd_data(desc), VDATA, VMA);
135
+#define TYPE2_SELECT BIT(19)
106
+}
136
+#define TYPE2_HIT BIT(20)
107
+
137
+#define TYPE2_SIZEHI (0x3 << 21) /* RV64 only */
108
+static inline uint32_t vext_vta(uint32_t desc)
138
+
109
+{
139
+/* access size */
110
+ return FIELD_EX32(simd_data(desc), VDATA, VTA);
140
+enum {
111
+}
141
+ SIZE_ANY = 0,
112
+
142
+ SIZE_1B,
113
+static inline uint32_t vext_vta_all_1s(uint32_t desc)
143
+ SIZE_2B,
114
+{
144
+ SIZE_4B,
115
+ return FIELD_EX32(simd_data(desc), VDATA, VTA_ALL_1S);
145
+ SIZE_6B,
116
+}
146
+ SIZE_8B,
117
+
147
+ SIZE_10B,
118
+/*
148
+ SIZE_12B,
119
+ * Earlier designs (pre-0.9) had a varying number of bits
149
+ SIZE_14B,
120
+ * per mask value (MLEN). In the 0.9 design, MLEN=1.
150
+ SIZE_16B,
121
+ * (Section 4.5)
151
+ SIZE_NUM = 16
122
+ */
152
+};
123
+static inline int vext_elem_mask(void *v0, int index)
153
+
124
+{
154
+bool tdata_available(CPURISCVState *env, int tdata_index);
125
+ int idx = index / 64;
155
+
126
+ int pos = index % 64;
156
+target_ulong tselect_csr_read(CPURISCVState *env);
127
+ return (((uint64_t *)v0)[idx] >> pos) & 1;
157
+void tselect_csr_write(CPURISCVState *env, target_ulong val);
128
+}
158
+
129
+
159
+target_ulong tdata_csr_read(CPURISCVState *env, int tdata_index);
130
+/*
160
+void tdata_csr_write(CPURISCVState *env, int tdata_index, target_ulong val);
131
+ * Get number of total elements, including prestart, body and tail elements.
161
+
132
+ * Note that when LMUL < 1, the tail includes the elements past VLMAX that
162
+#endif /* RISCV_DEBUG_H */
133
+ * are held in the same vector register.
163
diff --git a/target/riscv/debug.c b/target/riscv/debug.c
134
+ */
135
+static inline uint32_t vext_get_total_elems(CPURISCVState *env, uint32_t desc,
136
+ uint32_t esz)
137
+{
138
+ uint32_t vlenb = simd_maxsz(desc);
139
+ uint32_t sew = 1 << FIELD_EX64(env->vtype, VTYPE, VSEW);
140
+ int8_t emul = ctzl(esz) - ctzl(sew) + vext_lmul(desc) < 0 ? 0 :
141
+ ctzl(esz) - ctzl(sew) + vext_lmul(desc);
142
+ return (vlenb << emul) / esz;
143
+}
144
+
145
+/* set agnostic elements to 1s */
146
+void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
147
+ uint32_t tot);
148
+
149
+/* expand macro args before macro */
150
+#define RVVCALL(macro, ...) macro(__VA_ARGS__)
151
+
152
+/* (TD, T1, T2, TX1, TX2) */
153
+#define OP_UUU_B uint8_t, uint8_t, uint8_t, uint8_t, uint8_t
154
+#define OP_UUU_H uint16_t, uint16_t, uint16_t, uint16_t, uint16_t
155
+#define OP_UUU_W uint32_t, uint32_t, uint32_t, uint32_t, uint32_t
156
+#define OP_UUU_D uint64_t, uint64_t, uint64_t, uint64_t, uint64_t
157
+
158
+/* operation of two vector elements */
159
+typedef void opivv2_fn(void *vd, void *vs1, void *vs2, int i);
160
+
161
+#define OPIVV2(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \
162
+static void do_##NAME(void *vd, void *vs1, void *vs2, int i) \
163
+{ \
164
+ TX1 s1 = *((T1 *)vs1 + HS1(i)); \
165
+ TX2 s2 = *((T2 *)vs2 + HS2(i)); \
166
+ *((TD *)vd + HD(i)) = OP(s2, s1); \
167
+}
168
+
169
+void do_vext_vv(void *vd, void *v0, void *vs1, void *vs2,
170
+ CPURISCVState *env, uint32_t desc,
171
+ opivv2_fn *fn, uint32_t esz);
172
+
173
+/* generate the helpers for OPIVV */
174
+#define GEN_VEXT_VV(NAME, ESZ) \
175
+void HELPER(NAME)(void *vd, void *v0, void *vs1, \
176
+ void *vs2, CPURISCVState *env, \
177
+ uint32_t desc) \
178
+{ \
179
+ do_vext_vv(vd, v0, vs1, vs2, env, desc, \
180
+ do_##NAME, ESZ); \
181
+}
182
+
183
+typedef void opivx2_fn(void *vd, target_long s1, void *vs2, int i);
184
+
185
+/*
186
+ * (T1)s1 gives the real operator type.
187
+ * (TX1)(T1)s1 expands the operator type of widen or narrow operations.
188
+ */
189
+#define OPIVX2(NAME, TD, T1, T2, TX1, TX2, HD, HS2, OP) \
190
+static void do_##NAME(void *vd, target_long s1, void *vs2, int i) \
191
+{ \
192
+ TX2 s2 = *((T2 *)vs2 + HS2(i)); \
193
+ *((TD *)vd + HD(i)) = OP(s2, (TX1)(T1)s1); \
194
+}
195
+
196
+void do_vext_vx(void *vd, void *v0, target_long s1, void *vs2,
197
+ CPURISCVState *env, uint32_t desc,
198
+ opivx2_fn fn, uint32_t esz);
199
+
200
+/* generate the helpers for OPIVX */
201
+#define GEN_VEXT_VX(NAME, ESZ) \
202
+void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \
203
+ void *vs2, CPURISCVState *env, \
204
+ uint32_t desc) \
205
+{ \
206
+ do_vext_vx(vd, v0, s1, vs2, env, desc, \
207
+ do_##NAME, ESZ); \
208
+}
209
+
210
+#endif /* TARGET_RISCV_VECTOR_INTERNALS_H */
211
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
212
index XXXXXXX..XXXXXXX 100644
213
--- a/target/riscv/vector_helper.c
214
+++ b/target/riscv/vector_helper.c
215
@@ -XXX,XX +XXX,XX @@
216
#include "fpu/softfloat.h"
217
#include "tcg/tcg-gvec-desc.h"
218
#include "internals.h"
219
+#include "vector_internals.h"
220
#include <math.h>
221
222
target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
223
@@ -XXX,XX +XXX,XX @@ target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
224
return vl;
225
}
226
227
-/*
228
- * Note that vector data is stored in host-endian 64-bit chunks,
229
- * so addressing units smaller than that needs a host-endian fixup.
230
- */
231
-#if HOST_BIG_ENDIAN
232
-#define H1(x) ((x) ^ 7)
233
-#define H1_2(x) ((x) ^ 6)
234
-#define H1_4(x) ((x) ^ 4)
235
-#define H2(x) ((x) ^ 3)
236
-#define H4(x) ((x) ^ 1)
237
-#define H8(x) ((x))
238
-#else
239
-#define H1(x) (x)
240
-#define H1_2(x) (x)
241
-#define H1_4(x) (x)
242
-#define H2(x) (x)
243
-#define H4(x) (x)
244
-#define H8(x) (x)
245
-#endif
246
-
247
-static inline uint32_t vext_nf(uint32_t desc)
248
-{
249
- return FIELD_EX32(simd_data(desc), VDATA, NF);
250
-}
251
-
252
-static inline uint32_t vext_vm(uint32_t desc)
253
-{
254
- return FIELD_EX32(simd_data(desc), VDATA, VM);
255
-}
256
-
257
-/*
258
- * Encode LMUL to lmul as following:
259
- * LMUL vlmul lmul
260
- * 1 000 0
261
- * 2 001 1
262
- * 4 010 2
263
- * 8 011 3
264
- * - 100 -
265
- * 1/8 101 -3
266
- * 1/4 110 -2
267
- * 1/2 111 -1
268
- */
269
-static inline int32_t vext_lmul(uint32_t desc)
270
-{
271
- return sextract32(FIELD_EX32(simd_data(desc), VDATA, LMUL), 0, 3);
272
-}
273
-
274
-static inline uint32_t vext_vta(uint32_t desc)
275
-{
276
- return FIELD_EX32(simd_data(desc), VDATA, VTA);
277
-}
278
-
279
-static inline uint32_t vext_vma(uint32_t desc)
280
-{
281
- return FIELD_EX32(simd_data(desc), VDATA, VMA);
282
-}
283
-
284
-static inline uint32_t vext_vta_all_1s(uint32_t desc)
285
-{
286
- return FIELD_EX32(simd_data(desc), VDATA, VTA_ALL_1S);
287
-}
288
-
289
/*
290
* Get the maximum number of elements can be operated.
291
*
292
@@ -XXX,XX +XXX,XX @@ static inline uint32_t vext_max_elems(uint32_t desc, uint32_t log2_esz)
293
return scale < 0 ? vlenb >> -scale : vlenb << scale;
294
}
295
296
-/*
297
- * Get number of total elements, including prestart, body and tail elements.
298
- * Note that when LMUL < 1, the tail includes the elements past VLMAX that
299
- * are held in the same vector register.
300
- */
301
-static inline uint32_t vext_get_total_elems(CPURISCVState *env, uint32_t desc,
302
- uint32_t esz)
303
-{
304
- uint32_t vlenb = simd_maxsz(desc);
305
- uint32_t sew = 1 << FIELD_EX64(env->vtype, VTYPE, VSEW);
306
- int8_t emul = ctzl(esz) - ctzl(sew) + vext_lmul(desc) < 0 ? 0 :
307
- ctzl(esz) - ctzl(sew) + vext_lmul(desc);
308
- return (vlenb << emul) / esz;
309
-}
310
-
311
static inline target_ulong adjust_addr(CPURISCVState *env, target_ulong addr)
312
{
313
return (addr & ~env->cur_pmmask) | env->cur_pmbase;
314
@@ -XXX,XX +XXX,XX @@ static void probe_pages(CPURISCVState *env, target_ulong addr,
315
}
316
}
317
318
-/* set agnostic elements to 1s */
319
-static void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
320
- uint32_t tot)
321
-{
322
- if (is_agnostic == 0) {
323
- /* policy undisturbed */
324
- return;
325
- }
326
- if (tot - cnt == 0) {
327
- return;
328
- }
329
- memset(base + cnt, -1, tot - cnt);
330
-}
331
-
332
static inline void vext_set_elem_mask(void *v0, int index,
333
uint8_t value)
334
{
335
@@ -XXX,XX +XXX,XX @@ static inline void vext_set_elem_mask(void *v0, int index,
336
((uint64_t *)v0)[idx] = deposit64(old, pos, 1, value);
337
}
338
339
-/*
340
- * Earlier designs (pre-0.9) had a varying number of bits
341
- * per mask value (MLEN). In the 0.9 design, MLEN=1.
342
- * (Section 4.5)
343
- */
344
-static inline int vext_elem_mask(void *v0, int index)
345
-{
346
- int idx = index / 64;
347
- int pos = index % 64;
348
- return (((uint64_t *)v0)[idx] >> pos) & 1;
349
-}
350
-
351
/* elements operations for load and store */
352
typedef void vext_ldst_elem_fn(CPURISCVState *env, abi_ptr addr,
353
uint32_t idx, void *vd, uintptr_t retaddr);
354
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b)
355
* Vector Integer Arithmetic Instructions
356
*/
357
358
-/* expand macro args before macro */
359
-#define RVVCALL(macro, ...) macro(__VA_ARGS__)
360
-
361
/* (TD, T1, T2, TX1, TX2) */
362
#define OP_SSS_B int8_t, int8_t, int8_t, int8_t, int8_t
363
#define OP_SSS_H int16_t, int16_t, int16_t, int16_t, int16_t
364
#define OP_SSS_W int32_t, int32_t, int32_t, int32_t, int32_t
365
#define OP_SSS_D int64_t, int64_t, int64_t, int64_t, int64_t
366
-#define OP_UUU_B uint8_t, uint8_t, uint8_t, uint8_t, uint8_t
367
-#define OP_UUU_H uint16_t, uint16_t, uint16_t, uint16_t, uint16_t
368
-#define OP_UUU_W uint32_t, uint32_t, uint32_t, uint32_t, uint32_t
369
-#define OP_UUU_D uint64_t, uint64_t, uint64_t, uint64_t, uint64_t
370
#define OP_SUS_B int8_t, uint8_t, int8_t, uint8_t, int8_t
371
#define OP_SUS_H int16_t, uint16_t, int16_t, uint16_t, int16_t
372
#define OP_SUS_W int32_t, uint32_t, int32_t, uint32_t, int32_t
373
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b)
374
#define NOP_UUU_H uint16_t, uint16_t, uint32_t, uint16_t, uint32_t
375
#define NOP_UUU_W uint32_t, uint32_t, uint64_t, uint32_t, uint64_t
376
377
-/* operation of two vector elements */
378
-typedef void opivv2_fn(void *vd, void *vs1, void *vs2, int i);
379
-
380
-#define OPIVV2(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \
381
-static void do_##NAME(void *vd, void *vs1, void *vs2, int i) \
382
-{ \
383
- TX1 s1 = *((T1 *)vs1 + HS1(i)); \
384
- TX2 s2 = *((T2 *)vs2 + HS2(i)); \
385
- *((TD *)vd + HD(i)) = OP(s2, s1); \
386
-}
387
#define DO_SUB(N, M) (N - M)
388
#define DO_RSUB(N, M) (M - N)
389
390
@@ -XXX,XX +XXX,XX @@ RVVCALL(OPIVV2, vsub_vv_h, OP_SSS_H, H2, H2, H2, DO_SUB)
391
RVVCALL(OPIVV2, vsub_vv_w, OP_SSS_W, H4, H4, H4, DO_SUB)
392
RVVCALL(OPIVV2, vsub_vv_d, OP_SSS_D, H8, H8, H8, DO_SUB)
393
394
-static void do_vext_vv(void *vd, void *v0, void *vs1, void *vs2,
395
- CPURISCVState *env, uint32_t desc,
396
- opivv2_fn *fn, uint32_t esz)
397
-{
398
- uint32_t vm = vext_vm(desc);
399
- uint32_t vl = env->vl;
400
- uint32_t total_elems = vext_get_total_elems(env, desc, esz);
401
- uint32_t vta = vext_vta(desc);
402
- uint32_t vma = vext_vma(desc);
403
- uint32_t i;
404
-
405
- for (i = env->vstart; i < vl; i++) {
406
- if (!vm && !vext_elem_mask(v0, i)) {
407
- /* set masked-off elements to 1s */
408
- vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
409
- continue;
410
- }
411
- fn(vd, vs1, vs2, i);
412
- }
413
- env->vstart = 0;
414
- /* set tail elements to 1s */
415
- vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
416
-}
417
-
418
-/* generate the helpers for OPIVV */
419
-#define GEN_VEXT_VV(NAME, ESZ) \
420
-void HELPER(NAME)(void *vd, void *v0, void *vs1, \
421
- void *vs2, CPURISCVState *env, \
422
- uint32_t desc) \
423
-{ \
424
- do_vext_vv(vd, v0, vs1, vs2, env, desc, \
425
- do_##NAME, ESZ); \
426
-}
427
-
428
GEN_VEXT_VV(vadd_vv_b, 1)
429
GEN_VEXT_VV(vadd_vv_h, 2)
430
GEN_VEXT_VV(vadd_vv_w, 4)
431
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_VV(vsub_vv_h, 2)
432
GEN_VEXT_VV(vsub_vv_w, 4)
433
GEN_VEXT_VV(vsub_vv_d, 8)
434
435
-typedef void opivx2_fn(void *vd, target_long s1, void *vs2, int i);
436
-
437
-/*
438
- * (T1)s1 gives the real operator type.
439
- * (TX1)(T1)s1 expands the operator type of widen or narrow operations.
440
- */
441
-#define OPIVX2(NAME, TD, T1, T2, TX1, TX2, HD, HS2, OP) \
442
-static void do_##NAME(void *vd, target_long s1, void *vs2, int i) \
443
-{ \
444
- TX2 s2 = *((T2 *)vs2 + HS2(i)); \
445
- *((TD *)vd + HD(i)) = OP(s2, (TX1)(T1)s1); \
446
-}
447
448
RVVCALL(OPIVX2, vadd_vx_b, OP_SSS_B, H1, H1, DO_ADD)
449
RVVCALL(OPIVX2, vadd_vx_h, OP_SSS_H, H2, H2, DO_ADD)
450
@@ -XXX,XX +XXX,XX @@ RVVCALL(OPIVX2, vrsub_vx_h, OP_SSS_H, H2, H2, DO_RSUB)
451
RVVCALL(OPIVX2, vrsub_vx_w, OP_SSS_W, H4, H4, DO_RSUB)
452
RVVCALL(OPIVX2, vrsub_vx_d, OP_SSS_D, H8, H8, DO_RSUB)
453
454
-static void do_vext_vx(void *vd, void *v0, target_long s1, void *vs2,
455
- CPURISCVState *env, uint32_t desc,
456
- opivx2_fn fn, uint32_t esz)
457
-{
458
- uint32_t vm = vext_vm(desc);
459
- uint32_t vl = env->vl;
460
- uint32_t total_elems = vext_get_total_elems(env, desc, esz);
461
- uint32_t vta = vext_vta(desc);
462
- uint32_t vma = vext_vma(desc);
463
- uint32_t i;
464
-
465
- for (i = env->vstart; i < vl; i++) {
466
- if (!vm && !vext_elem_mask(v0, i)) {
467
- /* set masked-off elements to 1s */
468
- vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
469
- continue;
470
- }
471
- fn(vd, s1, vs2, i);
472
- }
473
- env->vstart = 0;
474
- /* set tail elements to 1s */
475
- vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
476
-}
477
-
478
-/* generate the helpers for OPIVX */
479
-#define GEN_VEXT_VX(NAME, ESZ) \
480
-void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \
481
- void *vs2, CPURISCVState *env, \
482
- uint32_t desc) \
483
-{ \
484
- do_vext_vx(vd, v0, s1, vs2, env, desc, \
485
- do_##NAME, ESZ); \
486
-}
487
-
488
GEN_VEXT_VX(vadd_vx_b, 1)
489
GEN_VEXT_VX(vadd_vx_h, 2)
490
GEN_VEXT_VX(vadd_vx_w, 4)
491
diff --git a/target/riscv/vector_internals.c b/target/riscv/vector_internals.c
164
new file mode 100644
492
new file mode 100644
165
index XXXXXXX..XXXXXXX
493
index XXXXXXX..XXXXXXX
166
--- /dev/null
494
--- /dev/null
167
+++ b/target/riscv/debug.c
495
+++ b/target/riscv/vector_internals.c
168
@@ -XXX,XX +XXX,XX @@
496
@@ -XXX,XX +XXX,XX @@
169
+/*
497
+/*
170
+ * QEMU RISC-V Native Debug Support
498
+ * RISC-V Vector Extension Internals
171
+ *
499
+ *
172
+ * Copyright (c) 2022 Wind River Systems, Inc.
500
+ * Copyright (c) 2020 T-Head Semiconductor Co., Ltd. All rights reserved.
173
+ *
174
+ * Author:
175
+ * Bin Meng <bin.meng@windriver.com>
176
+ *
177
+ * This provides the native debug support via the Trigger Module, as defined
178
+ * in the RISC-V Debug Specification:
179
+ * https://github.com/riscv/riscv-debug-spec/raw/master/riscv-debug-stable.pdf
180
+ *
501
+ *
181
+ * This program is free software; you can redistribute it and/or modify it
502
+ * This program is free software; you can redistribute it and/or modify it
182
+ * under the terms and conditions of the GNU General Public License,
503
+ * under the terms and conditions of the GNU General Public License,
183
+ * version 2 or later, as published by the Free Software Foundation.
504
+ * version 2 or later, as published by the Free Software Foundation.
184
+ *
505
+ *
...
...
189
+ *
510
+ *
190
+ * You should have received a copy of the GNU General Public License along with
511
+ * You should have received a copy of the GNU General Public License along with
191
+ * this program. If not, see <http://www.gnu.org/licenses/>.
512
+ * this program. If not, see <http://www.gnu.org/licenses/>.
192
+ */
513
+ */
193
+
514
+
194
+#include "qemu/osdep.h"
515
+#include "vector_internals.h"
195
+#include "qemu/log.h"
516
+
196
+#include "qapi/error.h"
517
+/* set agnostic elements to 1s */
197
+#include "cpu.h"
518
+void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
198
+#include "trace.h"
519
+ uint32_t tot)
199
+#include "exec/exec-all.h"
520
+{
200
+
521
+ if (is_agnostic == 0) {
201
+/*
522
+ /* policy undisturbed */
202
+ * The following M-mode trigger CSRs are implemented:
203
+ *
204
+ * - tselect
205
+ * - tdata1
206
+ * - tdata2
207
+ * - tdata3
208
+ *
209
+ * We don't support a writable 'type' field in the tdata1 register, so there is
210
+ * no need to implement the "tinfo" CSR.
211
+ *
212
+ * The following triggers are implemented:
213
+ *
214
+ * Index | Type | tdata mapping | Description
215
+ * ------+------+------------------------+------------
216
+ * 0 | 2 | tdata1, tdata2 | Address / Data Match
217
+ * 1 | 2 | tdata1, tdata2 | Address / Data Match
218
+ */
219
+
220
+/* tdata availability of a trigger */
221
+typedef bool tdata_avail[TDATA_NUM];
222
+
223
+static tdata_avail tdata_mapping[TRIGGER_NUM] = {
224
+ [TRIGGER_TYPE2_IDX_0 ... TRIGGER_TYPE2_IDX_1] = { true, true, false },
225
+};
226
+
227
+/* only breakpoint size 1/2/4/8 supported */
228
+static int access_size[SIZE_NUM] = {
229
+ [SIZE_ANY] = 0,
230
+ [SIZE_1B] = 1,
231
+ [SIZE_2B] = 2,
232
+ [SIZE_4B] = 4,
233
+ [SIZE_6B] = -1,
234
+ [SIZE_8B] = 8,
235
+ [6 ... 15] = -1,
236
+};
237
+
238
+static inline target_ulong trigger_type(CPURISCVState *env,
239
+ trigger_type_t type)
240
+{
241
+ target_ulong tdata1;
242
+
243
+ switch (riscv_cpu_mxl(env)) {
244
+ case MXL_RV32:
245
+ tdata1 = RV32_TYPE(type);
246
+ break;
247
+ case MXL_RV64:
248
+ tdata1 = RV64_TYPE(type);
249
+ break;
250
+ default:
251
+ g_assert_not_reached();
252
+ }
253
+
254
+ return tdata1;
255
+}
256
+
257
+bool tdata_available(CPURISCVState *env, int tdata_index)
258
+{
259
+ if (unlikely(tdata_index >= TDATA_NUM)) {
260
+ return false;
261
+ }
262
+
263
+ if (unlikely(env->trigger_cur >= TRIGGER_NUM)) {
264
+ return false;
265
+ }
266
+
267
+ return tdata_mapping[env->trigger_cur][tdata_index];
268
+}
269
+
270
+target_ulong tselect_csr_read(CPURISCVState *env)
271
+{
272
+ return env->trigger_cur;
273
+}
274
+
275
+void tselect_csr_write(CPURISCVState *env, target_ulong val)
276
+{
277
+ /* all target_ulong bits of tselect are implemented */
278
+ env->trigger_cur = val;
279
+}
280
+
281
+static target_ulong tdata1_validate(CPURISCVState *env, target_ulong val,
282
+ trigger_type_t t)
283
+{
284
+ uint32_t type, dmode;
285
+ target_ulong tdata1;
286
+
287
+ switch (riscv_cpu_mxl(env)) {
288
+ case MXL_RV32:
289
+ type = extract32(val, 28, 4);
290
+ dmode = extract32(val, 27, 1);
291
+ tdata1 = RV32_TYPE(t);
292
+ break;
293
+ case MXL_RV64:
294
+ type = extract64(val, 60, 4);
295
+ dmode = extract64(val, 59, 1);
296
+ tdata1 = RV64_TYPE(t);
297
+ break;
298
+ default:
299
+ g_assert_not_reached();
300
+ }
301
+
302
+ if (type != t) {
303
+ qemu_log_mask(LOG_GUEST_ERROR,
304
+ "ignoring type write to tdata1 register\n");
305
+ }
306
+ if (dmode != 0) {
307
+ qemu_log_mask(LOG_UNIMP, "debug mode is not supported\n");
308
+ }
309
+
310
+ return tdata1;
311
+}
312
+
313
+static inline void warn_always_zero_bit(target_ulong val, target_ulong mask,
314
+ const char *msg)
315
+{
316
+ if (val & mask) {
317
+ qemu_log_mask(LOG_UNIMP, "%s bit is always zero\n", msg);
318
+ }
319
+}
320
+
321
+static uint32_t type2_breakpoint_size(CPURISCVState *env, target_ulong ctrl)
322
+{
323
+ uint32_t size, sizelo, sizehi = 0;
324
+
325
+ if (riscv_cpu_mxl(env) == MXL_RV64) {
326
+ sizehi = extract32(ctrl, 21, 2);
327
+ }
328
+ sizelo = extract32(ctrl, 16, 2);
329
+ size = (sizehi << 2) | sizelo;
330
+
331
+ return size;
332
+}
333
+
334
+static inline bool type2_breakpoint_enabled(target_ulong ctrl)
335
+{
336
+ bool mode = !!(ctrl & (TYPE2_U | TYPE2_S | TYPE2_M));
337
+ bool rwx = !!(ctrl & (TYPE2_LOAD | TYPE2_STORE | TYPE2_EXEC));
338
+
339
+ return mode && rwx;
340
+}
341
+
342
+static target_ulong type2_mcontrol_validate(CPURISCVState *env,
343
+ target_ulong ctrl)
344
+{
345
+ target_ulong val;
346
+ uint32_t size;
347
+
348
+ /* validate the generic part first */
349
+ val = tdata1_validate(env, ctrl, TRIGGER_TYPE_AD_MATCH);
350
+
351
+ /* validate unimplemented (always zero) bits */
352
+ warn_always_zero_bit(ctrl, TYPE2_MATCH, "match");
353
+ warn_always_zero_bit(ctrl, TYPE2_CHAIN, "chain");
354
+ warn_always_zero_bit(ctrl, TYPE2_ACTION, "action");
355
+ warn_always_zero_bit(ctrl, TYPE2_TIMING, "timing");
356
+ warn_always_zero_bit(ctrl, TYPE2_SELECT, "select");
357
+ warn_always_zero_bit(ctrl, TYPE2_HIT, "hit");
358
+
359
+ /* validate size encoding */
360
+ size = type2_breakpoint_size(env, ctrl);
361
+ if (access_size[size] == -1) {
362
+ qemu_log_mask(LOG_UNIMP, "access size %d is not supported, using SIZE_ANY\n",
363
+ size);
364
+ } else {
365
+ val |= (ctrl & TYPE2_SIZELO);
366
+ if (riscv_cpu_mxl(env) == MXL_RV64) {
367
+ val |= (ctrl & TYPE2_SIZEHI);
368
+ }
369
+ }
370
+
371
+ /* keep the mode and attribute bits */
372
+ val |= (ctrl & (TYPE2_U | TYPE2_S | TYPE2_M |
373
+ TYPE2_LOAD | TYPE2_STORE | TYPE2_EXEC));
374
+
375
+ return val;
376
+}
377
+
378
+static void type2_breakpoint_insert(CPURISCVState *env, target_ulong index)
379
+{
380
+ target_ulong ctrl = env->type2_trig[index].mcontrol;
381
+ target_ulong addr = env->type2_trig[index].maddress;
382
+ bool enabled = type2_breakpoint_enabled(ctrl);
383
+ CPUState *cs = env_cpu(env);
384
+ int flags = BP_CPU | BP_STOP_BEFORE_ACCESS;
385
+ uint32_t size;
386
+
387
+ if (!enabled) {
388
+ return;
523
+ return;
389
+ }
524
+ }
390
+
525
+ if (tot - cnt == 0) {
391
+ if (ctrl & TYPE2_EXEC) {
526
+ return;
392
+ cpu_breakpoint_insert(cs, addr, flags, &env->type2_trig[index].bp);
393
+ }
527
+ }
394
+
528
+ memset(base + cnt, -1, tot - cnt);
395
+ if (ctrl & TYPE2_LOAD) {
529
+}
396
+ flags |= BP_MEM_READ;
530
+
531
+void do_vext_vv(void *vd, void *v0, void *vs1, void *vs2,
532
+ CPURISCVState *env, uint32_t desc,
533
+ opivv2_fn *fn, uint32_t esz)
534
+{
535
+ uint32_t vm = vext_vm(desc);
536
+ uint32_t vl = env->vl;
537
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
538
+ uint32_t vta = vext_vta(desc);
539
+ uint32_t vma = vext_vma(desc);
540
+ uint32_t i;
541
+
542
+ for (i = env->vstart; i < vl; i++) {
543
+ if (!vm && !vext_elem_mask(v0, i)) {
544
+ /* set masked-off elements to 1s */
545
+ vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
546
+ continue;
547
+ }
548
+ fn(vd, vs1, vs2, i);
397
+ }
549
+ }
398
+ if (ctrl & TYPE2_STORE) {
550
+ env->vstart = 0;
399
+ flags |= BP_MEM_WRITE;
551
+ /* set tail elements to 1s */
552
+ vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
553
+}
554
+
555
+void do_vext_vx(void *vd, void *v0, target_long s1, void *vs2,
556
+ CPURISCVState *env, uint32_t desc,
557
+ opivx2_fn fn, uint32_t esz)
558
+{
559
+ uint32_t vm = vext_vm(desc);
560
+ uint32_t vl = env->vl;
561
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
562
+ uint32_t vta = vext_vta(desc);
563
+ uint32_t vma = vext_vma(desc);
564
+ uint32_t i;
565
+
566
+ for (i = env->vstart; i < vl; i++) {
567
+ if (!vm && !vext_elem_mask(v0, i)) {
568
+ /* set masked-off elements to 1s */
569
+ vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
570
+ continue;
571
+ }
572
+ fn(vd, s1, vs2, i);
400
+ }
573
+ }
401
+
574
+ env->vstart = 0;
402
+ if (flags & BP_MEM_ACCESS) {
575
+ /* set tail elements to 1s */
403
+ size = type2_breakpoint_size(env, ctrl);
576
+ vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
404
+ if (size != 0) {
405
+ cpu_watchpoint_insert(cs, addr, size, flags,
406
+ &env->type2_trig[index].wp);
407
+ } else {
408
+ cpu_watchpoint_insert(cs, addr, 8, flags,
409
+ &env->type2_trig[index].wp);
410
+ }
411
+ }
412
+}
413
+
414
+static void type2_breakpoint_remove(CPURISCVState *env, target_ulong index)
415
+{
416
+ CPUState *cs = env_cpu(env);
417
+
418
+ if (env->type2_trig[index].bp) {
419
+ cpu_breakpoint_remove_by_ref(cs, env->type2_trig[index].bp);
420
+ env->type2_trig[index].bp = NULL;
421
+ }
422
+
423
+ if (env->type2_trig[index].wp) {
424
+ cpu_watchpoint_remove_by_ref(cs, env->type2_trig[index].wp);
425
+ env->type2_trig[index].wp = NULL;
426
+ }
427
+}
428
+
429
+static target_ulong type2_reg_read(CPURISCVState *env,
430
+ target_ulong trigger_index, int tdata_index)
431
+{
432
+ uint32_t index = trigger_index - TRIGGER_TYPE2_IDX_0;
433
+ target_ulong tdata;
434
+
435
+ switch (tdata_index) {
436
+ case TDATA1:
437
+ tdata = env->type2_trig[index].mcontrol;
438
+ break;
439
+ case TDATA2:
440
+ tdata = env->type2_trig[index].maddress;
441
+ break;
442
+ default:
443
+ g_assert_not_reached();
444
+ }
445
+
446
+ return tdata;
447
+}
448
+
449
+static void type2_reg_write(CPURISCVState *env, target_ulong trigger_index,
450
+ int tdata_index, target_ulong val)
451
+{
452
+ uint32_t index = trigger_index - TRIGGER_TYPE2_IDX_0;
453
+ target_ulong new_val;
454
+
455
+ switch (tdata_index) {
456
+ case TDATA1:
457
+ new_val = type2_mcontrol_validate(env, val);
458
+ if (new_val != env->type2_trig[index].mcontrol) {
459
+ env->type2_trig[index].mcontrol = new_val;
460
+ type2_breakpoint_remove(env, index);
461
+ type2_breakpoint_insert(env, index);
462
+ }
463
+ break;
464
+ case TDATA2:
465
+ if (val != env->type2_trig[index].maddress) {
466
+ env->type2_trig[index].maddress = val;
467
+ type2_breakpoint_remove(env, index);
468
+ type2_breakpoint_insert(env, index);
469
+ }
470
+ break;
471
+ default:
472
+ g_assert_not_reached();
473
+ }
474
+
475
+ return;
476
+}
477
+
478
+typedef target_ulong (*tdata_read_func)(CPURISCVState *env,
479
+ target_ulong trigger_index,
480
+ int tdata_index);
481
+
482
+static tdata_read_func trigger_read_funcs[TRIGGER_NUM] = {
483
+ [TRIGGER_TYPE2_IDX_0 ... TRIGGER_TYPE2_IDX_1] = type2_reg_read,
484
+};
485
+
486
+typedef void (*tdata_write_func)(CPURISCVState *env,
487
+ target_ulong trigger_index,
488
+ int tdata_index,
489
+ target_ulong val);
490
+
491
+static tdata_write_func trigger_write_funcs[TRIGGER_NUM] = {
492
+ [TRIGGER_TYPE2_IDX_0 ... TRIGGER_TYPE2_IDX_1] = type2_reg_write,
493
+};
494
+
495
+target_ulong tdata_csr_read(CPURISCVState *env, int tdata_index)
496
+{
497
+ tdata_read_func read_func = trigger_read_funcs[env->trigger_cur];
498
+
499
+ return read_func(env, env->trigger_cur, tdata_index);
500
+}
501
+
502
+void tdata_csr_write(CPURISCVState *env, int tdata_index, target_ulong val)
503
+{
504
+ tdata_write_func write_func = trigger_write_funcs[env->trigger_cur];
505
+
506
+ return write_func(env, env->trigger_cur, tdata_index, val);
507
+}
577
+}
508
diff --git a/target/riscv/meson.build b/target/riscv/meson.build
578
diff --git a/target/riscv/meson.build b/target/riscv/meson.build
509
index XXXXXXX..XXXXXXX 100644
579
index XXXXXXX..XXXXXXX 100644
510
--- a/target/riscv/meson.build
580
--- a/target/riscv/meson.build
511
+++ b/target/riscv/meson.build
581
+++ b/target/riscv/meson.build
512
@@ -XXX,XX +XXX,XX @@ riscv_softmmu_ss = ss.source_set()
582
@@ -XXX,XX +XXX,XX @@ riscv_ss.add(files(
513
riscv_softmmu_ss.add(files(
583
'gdbstub.c',
514
'arch_dump.c',
584
'op_helper.c',
515
'pmp.c',
585
'vector_helper.c',
516
+ 'debug.c',
586
+ 'vector_internals.c',
517
'monitor.c',
587
'bitmanip_helper.c',
518
'machine.c'
588
'translate.c',
519
))
589
'm128_helper.c',
520
--
590
--
521
2.35.1
591
2.41.0
New patch
1
From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
1
2
3
Refactor the non SEW-specific stuff out of `GEN_OPIVV_TRANS` into
4
function `opivv_trans` (similar to `opivi_trans`). `opivv_trans` will be
5
used in subsequent vector-crypto commits.
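
For reference, a GEN_OPIVV_TRANS(NAME, CHECK) user now expands to roughly the following (sketch only; foo_vv and foo_check are placeholder names, not a real instruction):

    static bool trans_foo_vv(DisasContext *s, arg_rmrr *a)
    {
        if (foo_check(s, a)) {
            static gen_helper_gvec_4_ptr * const fns[4] = {
                gen_helper_foo_vv_b, gen_helper_foo_vv_h,
                gen_helper_foo_vv_w, gen_helper_foo_vv_d,
            };
            /* All of the non SEW-specific work is now done by opivv_trans(). */
            return opivv_trans(a->rd, a->rs1, a->rs2, a->vm, fns[s->sew], s);
        }
        return false;
    }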
6
7
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
10
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
11
Signed-off-by: Max Chou <max.chou@sifive.com>
12
Message-ID: <20230711165917.2629866-3-max.chou@sifive.com>
13
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
14
---
15
target/riscv/insn_trans/trans_rvv.c.inc | 62 +++++++++++++------------
16
1 file changed, 32 insertions(+), 30 deletions(-)
17
18
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/riscv/insn_trans/trans_rvv.c.inc
21
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
22
@@ -XXX,XX +XXX,XX @@ GEN_OPIWX_WIDEN_TRANS(vwadd_wx)
23
GEN_OPIWX_WIDEN_TRANS(vwsubu_wx)
24
GEN_OPIWX_WIDEN_TRANS(vwsub_wx)
25
26
+static bool opivv_trans(uint32_t vd, uint32_t vs1, uint32_t vs2, uint32_t vm,
27
+ gen_helper_gvec_4_ptr *fn, DisasContext *s)
28
+{
29
+ uint32_t data = 0;
30
+ TCGLabel *over = gen_new_label();
31
+ tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
32
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
33
+
34
+ data = FIELD_DP32(data, VDATA, VM, vm);
35
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
36
+ data = FIELD_DP32(data, VDATA, VTA, s->vta);
37
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);
38
+ data = FIELD_DP32(data, VDATA, VMA, s->vma);
39
+ tcg_gen_gvec_4_ptr(vreg_ofs(s, vd), vreg_ofs(s, 0), vreg_ofs(s, vs1),
40
+ vreg_ofs(s, vs2), cpu_env, s->cfg_ptr->vlen / 8,
41
+ s->cfg_ptr->vlen / 8, data, fn);
42
+ mark_vs_dirty(s);
43
+ gen_set_label(over);
44
+ return true;
45
+}
46
+
47
/* Vector Integer Add-with-Carry / Subtract-with-Borrow Instructions */
48
/* OPIVV without GVEC IR */
49
-#define GEN_OPIVV_TRANS(NAME, CHECK) \
50
-static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
51
-{ \
52
- if (CHECK(s, a)) { \
53
- uint32_t data = 0; \
54
- static gen_helper_gvec_4_ptr * const fns[4] = { \
55
- gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
56
- gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
57
- }; \
58
- TCGLabel *over = gen_new_label(); \
59
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
60
- tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
61
- \
62
- data = FIELD_DP32(data, VDATA, VM, a->vm); \
63
- data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
64
- data = FIELD_DP32(data, VDATA, VTA, s->vta); \
65
- data = \
66
- FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);\
67
- data = FIELD_DP32(data, VDATA, VMA, s->vma); \
68
- tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
69
- vreg_ofs(s, a->rs1), \
70
- vreg_ofs(s, a->rs2), cpu_env, \
71
- s->cfg_ptr->vlen / 8, \
72
- s->cfg_ptr->vlen / 8, data, \
73
- fns[s->sew]); \
74
- mark_vs_dirty(s); \
75
- gen_set_label(over); \
76
- return true; \
77
- } \
78
- return false; \
79
+#define GEN_OPIVV_TRANS(NAME, CHECK) \
80
+static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
81
+{ \
82
+ if (CHECK(s, a)) { \
83
+ static gen_helper_gvec_4_ptr * const fns[4] = { \
84
+ gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
85
+ gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
86
+ }; \
87
+ return opivv_trans(a->rd, a->rs1, a->rs2, a->vm, fns[s->sew], s);\
88
+ } \
89
+ return false; \
90
}
91
92
/*
93
--
94
2.41.0
New patch
1
From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
1
2
3
Remove the redundant "vl == 0" check: when vl == 0, the existing vstart >= vl check already skips the operation, because vstart is unsigned and therefore always satisfies vstart >= 0 == vl.
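
A minimal sketch of the two guards, assuming vstart and vl are unsigned 32-bit values as in CPURISCVState:

    #include <stdbool.h>
    #include <stdint.h>

    /* Old guard: skip the vector body if vl == 0 or vstart >= vl. */
    static bool skip_old(uint32_t vstart, uint32_t vl)
    {
        return vl == 0 || vstart >= vl;
    }

    /* New guard: when vl == 0, the unsigned comparison vstart >= vl is
     * already true, so the separate vl == 0 test adds nothing. */
    static bool skip_new(uint32_t vstart, uint32_t vl)
    {
        return vstart >= vl;
    }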
4
5
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
6
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
7
Signed-off-by: Max Chou <max.chou@sifive.com>
8
Acked-by: Alistair Francis <alistair.francis@wdc.com>
9
Message-ID: <20230711165917.2629866-4-max.chou@sifive.com>
10
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
11
---
12
target/riscv/insn_trans/trans_rvv.c.inc | 31 +------------------------
13
1 file changed, 1 insertion(+), 30 deletions(-)
14
15
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/riscv/insn_trans/trans_rvv.c.inc
18
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
19
@@ -XXX,XX +XXX,XX @@ static bool ldst_us_trans(uint32_t vd, uint32_t rs1, uint32_t data,
20
TCGv_i32 desc;
21
22
TCGLabel *over = gen_new_label();
23
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
24
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
25
26
dest = tcg_temp_new_ptr();
27
@@ -XXX,XX +XXX,XX @@ static bool ldst_stride_trans(uint32_t vd, uint32_t rs1, uint32_t rs2,
28
TCGv_i32 desc;
29
30
TCGLabel *over = gen_new_label();
31
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
32
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
33
34
dest = tcg_temp_new_ptr();
35
@@ -XXX,XX +XXX,XX @@ static bool ldst_index_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
36
TCGv_i32 desc;
37
38
TCGLabel *over = gen_new_label();
39
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
40
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
41
42
dest = tcg_temp_new_ptr();
43
@@ -XXX,XX +XXX,XX @@ static bool ldff_trans(uint32_t vd, uint32_t rs1, uint32_t data,
44
TCGv_i32 desc;
45
46
TCGLabel *over = gen_new_label();
47
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
48
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
49
50
dest = tcg_temp_new_ptr();
51
@@ -XXX,XX +XXX,XX @@ do_opivv_gvec(DisasContext *s, arg_rmrr *a, GVecGen3Fn *gvec_fn,
52
return false;
53
}
54
55
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
56
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
57
58
if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
59
@@ -XXX,XX +XXX,XX @@ static bool opivx_trans(uint32_t vd, uint32_t rs1, uint32_t vs2, uint32_t vm,
60
uint32_t data = 0;
61
62
TCGLabel *over = gen_new_label();
63
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
64
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
65
66
dest = tcg_temp_new_ptr();
67
@@ -XXX,XX +XXX,XX @@ static bool opivi_trans(uint32_t vd, uint32_t imm, uint32_t vs2, uint32_t vm,
68
uint32_t data = 0;
69
70
TCGLabel *over = gen_new_label();
71
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
72
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
73
74
dest = tcg_temp_new_ptr();
75
@@ -XXX,XX +XXX,XX @@ static bool do_opivv_widen(DisasContext *s, arg_rmrr *a,
76
if (checkfn(s, a)) {
77
uint32_t data = 0;
78
TCGLabel *over = gen_new_label();
79
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
80
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
81
82
data = FIELD_DP32(data, VDATA, VM, a->vm);
83
@@ -XXX,XX +XXX,XX @@ static bool do_opiwv_widen(DisasContext *s, arg_rmrr *a,
84
if (opiwv_widen_check(s, a)) {
85
uint32_t data = 0;
86
TCGLabel *over = gen_new_label();
87
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
88
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
89
90
data = FIELD_DP32(data, VDATA, VM, a->vm);
91
@@ -XXX,XX +XXX,XX @@ static bool opivv_trans(uint32_t vd, uint32_t vs1, uint32_t vs2, uint32_t vm,
92
{
93
uint32_t data = 0;
94
TCGLabel *over = gen_new_label();
95
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
96
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
97
98
data = FIELD_DP32(data, VDATA, VM, vm);
99
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
100
gen_helper_##NAME##_w, \
101
}; \
102
TCGLabel *over = gen_new_label(); \
103
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
104
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
105
\
106
data = FIELD_DP32(data, VDATA, VM, a->vm); \
107
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_v(DisasContext *s, arg_vmv_v_v *a)
108
gen_helper_vmv_v_v_w, gen_helper_vmv_v_v_d,
109
};
110
TCGLabel *over = gen_new_label();
111
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
112
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
113
114
tcg_gen_gvec_2_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),
115
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_x(DisasContext *s, arg_vmv_v_x *a)
116
vext_check_ss(s, a->rd, 0, 1)) {
117
TCGv s1;
118
TCGLabel *over = gen_new_label();
119
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
120
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
121
122
s1 = get_gpr(s, a->rs1, EXT_SIGN);
123
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_i(DisasContext *s, arg_vmv_v_i *a)
124
gen_helper_vmv_v_x_w, gen_helper_vmv_v_x_d,
125
};
126
TCGLabel *over = gen_new_label();
127
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
128
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
129
130
s1 = tcg_constant_i64(simm);
131
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
132
}; \
133
TCGLabel *over = gen_new_label(); \
134
gen_set_rm(s, RISCV_FRM_DYN); \
135
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
136
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
137
\
138
data = FIELD_DP32(data, VDATA, VM, a->vm); \
139
@@ -XXX,XX +XXX,XX @@ static bool opfvf_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
140
TCGv_i64 t1;
141
142
TCGLabel *over = gen_new_label();
143
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
144
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
145
146
dest = tcg_temp_new_ptr();
147
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
148
}; \
149
TCGLabel *over = gen_new_label(); \
150
gen_set_rm(s, RISCV_FRM_DYN); \
151
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
152
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);\
153
\
154
data = FIELD_DP32(data, VDATA, VM, a->vm); \
155
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
156
}; \
157
TCGLabel *over = gen_new_label(); \
158
gen_set_rm(s, RISCV_FRM_DYN); \
159
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
160
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
161
\
162
data = FIELD_DP32(data, VDATA, VM, a->vm); \
163
@@ -XXX,XX +XXX,XX @@ static bool do_opfv(DisasContext *s, arg_rmr *a,
164
uint32_t data = 0;
165
TCGLabel *over = gen_new_label();
166
gen_set_rm_chkfrm(s, rm);
167
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
168
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
169
170
data = FIELD_DP32(data, VDATA, VM, a->vm);
171
@@ -XXX,XX +XXX,XX @@ static bool trans_vfmv_v_f(DisasContext *s, arg_vfmv_v_f *a)
172
gen_helper_vmv_v_x_d,
173
};
174
TCGLabel *over = gen_new_label();
175
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
176
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
177
178
t1 = tcg_temp_new_i64();
179
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
180
}; \
181
TCGLabel *over = gen_new_label(); \
182
gen_set_rm_chkfrm(s, FRM); \
183
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
184
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
185
\
186
data = FIELD_DP32(data, VDATA, VM, a->vm); \
187
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
188
}; \
189
TCGLabel *over = gen_new_label(); \
190
gen_set_rm(s, RISCV_FRM_DYN); \
191
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
192
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
193
\
194
data = FIELD_DP32(data, VDATA, VM, a->vm); \
195
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
196
}; \
197
TCGLabel *over = gen_new_label(); \
198
gen_set_rm_chkfrm(s, FRM); \
199
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
200
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
201
\
202
data = FIELD_DP32(data, VDATA, VM, a->vm); \
203
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
204
}; \
205
TCGLabel *over = gen_new_label(); \
206
gen_set_rm_chkfrm(s, FRM); \
207
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
208
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
209
\
210
data = FIELD_DP32(data, VDATA, VM, a->vm); \
211
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_r *a) \
212
uint32_t data = 0; \
213
gen_helper_gvec_4_ptr *fn = gen_helper_##NAME; \
214
TCGLabel *over = gen_new_label(); \
215
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
216
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
217
\
218
data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
219
@@ -XXX,XX +XXX,XX @@ static bool trans_vid_v(DisasContext *s, arg_vid_v *a)
220
require_vm(a->vm, a->rd)) {
221
uint32_t data = 0;
222
TCGLabel *over = gen_new_label();
223
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
224
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
225
226
data = FIELD_DP32(data, VDATA, VM, a->vm);
227
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_s_x(DisasContext *s, arg_vmv_s_x *a)
228
TCGv s1;
229
TCGLabel *over = gen_new_label();
230
231
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
232
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
233
234
t1 = tcg_temp_new_i64();
235
@@ -XXX,XX +XXX,XX @@ static bool trans_vfmv_s_f(DisasContext *s, arg_vfmv_s_f *a)
236
TCGv_i64 t1;
237
TCGLabel *over = gen_new_label();
238
239
- /* if vl == 0 or vstart >= vl, skip vector register write back */
240
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
241
+ /* if vstart >= vl, skip vector register write back */
242
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
243
244
/* NaN-box f[rs1] */
245
@@ -XXX,XX +XXX,XX @@ static bool int_ext_op(DisasContext *s, arg_rmr *a, uint8_t seq)
246
uint32_t data = 0;
247
gen_helper_gvec_3_ptr *fn;
248
TCGLabel *over = gen_new_label();
249
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
250
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
251
252
static gen_helper_gvec_3_ptr * const fns[6][4] = {
253
--
254
2.41.0
1
From: Atish Patra <atishp@rivosinc.com>
1
From: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
2
2
3
Currently, the privileged specification versions are defined in
3
This commit adds support for the Zvbc vector-crypto extension, which
4
a complex manner for no benefit.
4
consists of the following instructions:
5
5
6
Simplify it by changing it to a simple enum.
6
* vclmulh.[vx,vv]
7
7
* vclmul.[vx,vv]
8
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
8
9
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
9
Translation functions are defined in
10
Signed-off-by: Atish Patra <atishp@rivosinc.com>
10
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
11
Message-Id: <20220303185440.512391-2-atishp@rivosinc.com>
11
`target/riscv/vcrypto_helper.c`.
12
13
Co-authored-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
14
Co-authored-by: Max Chou <max.chou@sifive.com>
15
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
16
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
17
Signed-off-by: Max Chou <max.chou@sifive.com>
18
[max.chou@sifive.com: Exposed x-zvbc property]
19
Message-ID: <20230711165917.2629866-5-max.chou@sifive.com>
12
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
20
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
13
---
21
---
14
target/riscv/cpu.h | 7 +++++--
22
target/riscv/cpu_cfg.h | 1 +
15
1 file changed, 5 insertions(+), 2 deletions(-)
23
target/riscv/helper.h | 6 +++
16
24
target/riscv/insn32.decode | 6 +++
17
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
25
target/riscv/cpu.c | 9 ++++
18
index XXXXXXX..XXXXXXX 100644
26
target/riscv/translate.c | 1 +
19
--- a/target/riscv/cpu.h
27
target/riscv/vcrypto_helper.c | 59 ++++++++++++++++++++++
20
+++ b/target/riscv/cpu.h
28
target/riscv/insn_trans/trans_rvvk.c.inc | 62 ++++++++++++++++++++++++
21
@@ -XXX,XX +XXX,XX @@ enum {
29
target/riscv/meson.build | 3 +-
22
RISCV_FEATURE_AIA
30
8 files changed, 146 insertions(+), 1 deletion(-)
31
create mode 100644 target/riscv/vcrypto_helper.c
32
create mode 100644 target/riscv/insn_trans/trans_rvvk.c.inc
33
34
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/riscv/cpu_cfg.h
37
+++ b/target/riscv/cpu_cfg.h
38
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
39
bool ext_zve32f;
40
bool ext_zve64f;
41
bool ext_zve64d;
42
+ bool ext_zvbc;
43
bool ext_zmmul;
44
bool ext_zvfbfmin;
45
bool ext_zvfbfwma;
46
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/riscv/helper.h
49
+++ b/target/riscv/helper.h
50
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_5(vfwcvtbf16_f_f_v, void, ptr, ptr, ptr, env, i32)
51
52
DEF_HELPER_6(vfwmaccbf16_vv, void, ptr, ptr, ptr, ptr, env, i32)
53
DEF_HELPER_6(vfwmaccbf16_vf, void, ptr, ptr, i64, ptr, env, i32)
54
+
55
+/* Vector crypto functions */
56
+DEF_HELPER_6(vclmul_vv, void, ptr, ptr, ptr, ptr, env, i32)
57
+DEF_HELPER_6(vclmul_vx, void, ptr, ptr, tl, ptr, env, i32)
58
+DEF_HELPER_6(vclmulh_vv, void, ptr, ptr, ptr, ptr, env, i32)
59
+DEF_HELPER_6(vclmulh_vx, void, ptr, ptr, tl, ptr, env, i32)
60
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/riscv/insn32.decode
63
+++ b/target/riscv/insn32.decode
64
@@ -XXX,XX +XXX,XX @@ vfwcvtbf16_f_f_v 010010 . ..... 01101 001 ..... 1010111 @r2_vm
65
# *** Zvfbfwma Standard Extension ***
66
vfwmaccbf16_vv 111011 . ..... ..... 001 ..... 1010111 @r_vm
67
vfwmaccbf16_vf 111011 . ..... ..... 101 ..... 1010111 @r_vm
68
+
69
+# *** Zvbc vector crypto extension ***
70
+vclmul_vv 001100 . ..... ..... 010 ..... 1010111 @r_vm
71
+vclmul_vx 001100 . ..... ..... 110 ..... 1010111 @r_vm
72
+vclmulh_vv 001101 . ..... ..... 010 ..... 1010111 @r_vm
73
+vclmulh_vx 001101 . ..... ..... 110 ..... 1010111 @r_vm
74
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
75
index XXXXXXX..XXXXXXX 100644
76
--- a/target/riscv/cpu.c
77
+++ b/target/riscv/cpu.c
78
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
79
ISA_EXT_DATA_ENTRY(zksed, PRIV_VERSION_1_12_0, ext_zksed),
80
ISA_EXT_DATA_ENTRY(zksh, PRIV_VERSION_1_12_0, ext_zksh),
81
ISA_EXT_DATA_ENTRY(zkt, PRIV_VERSION_1_12_0, ext_zkt),
82
+ ISA_EXT_DATA_ENTRY(zvbc, PRIV_VERSION_1_12_0, ext_zvbc),
83
ISA_EXT_DATA_ENTRY(zve32f, PRIV_VERSION_1_10_0, ext_zve32f),
84
ISA_EXT_DATA_ENTRY(zve64f, PRIV_VERSION_1_10_0, ext_zve64f),
85
ISA_EXT_DATA_ENTRY(zve64d, PRIV_VERSION_1_10_0, ext_zve64d),
86
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
87
return;
88
}
89
90
+ if (cpu->cfg.ext_zvbc && !cpu->cfg.ext_zve64f) {
91
+ error_setg(errp, "Zvbc extension requires V or Zve64{f,d} extensions");
92
+ return;
93
+ }
94
+
95
if (cpu->cfg.ext_zk) {
96
cpu->cfg.ext_zkn = true;
97
cpu->cfg.ext_zkr = true;
98
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
99
DEFINE_PROP_BOOL("x-zvfbfmin", RISCVCPU, cfg.ext_zvfbfmin, false),
100
DEFINE_PROP_BOOL("x-zvfbfwma", RISCVCPU, cfg.ext_zvfbfwma, false),
101
102
+ /* Vector cryptography extensions */
103
+ DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
104
+
105
DEFINE_PROP_END_OF_LIST(),
23
};
106
};
24
107
25
-#define PRIV_VERSION_1_10_0 0x00011000
108
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
26
-#define PRIV_VERSION_1_11_0 0x00011100
109
index XXXXXXX..XXXXXXX 100644
27
+/* Privileged specification version */
110
--- a/target/riscv/translate.c
28
+enum {
111
+++ b/target/riscv/translate.c
29
+ PRIV_VERSION_1_10_0 = 0,
112
@@ -XXX,XX +XXX,XX @@ static uint32_t opcode_at(DisasContextBase *dcbase, target_ulong pc)
30
+ PRIV_VERSION_1_11_0,
113
#include "insn_trans/trans_rvzfa.c.inc"
31
+};
114
#include "insn_trans/trans_rvzfh.c.inc"
32
115
#include "insn_trans/trans_rvk.c.inc"
33
#define VEXT_VERSION_1_00_0 0x00010000
116
+#include "insn_trans/trans_rvvk.c.inc"
117
#include "insn_trans/trans_privileged.c.inc"
118
#include "insn_trans/trans_svinval.c.inc"
119
#include "insn_trans/trans_rvbf16.c.inc"
120
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
121
new file mode 100644
122
index XXXXXXX..XXXXXXX
123
--- /dev/null
124
+++ b/target/riscv/vcrypto_helper.c
125
@@ -XXX,XX +XXX,XX @@
126
+/*
127
+ * RISC-V Vector Crypto Extension Helpers for QEMU.
128
+ *
129
+ * Copyright (C) 2023 SiFive, Inc.
130
+ * Written by Codethink Ltd and SiFive.
131
+ *
132
+ * This program is free software; you can redistribute it and/or modify it
133
+ * under the terms and conditions of the GNU General Public License,
134
+ * version 2 or later, as published by the Free Software Foundation.
135
+ *
136
+ * This program is distributed in the hope it will be useful, but WITHOUT
137
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
138
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
139
+ * more details.
140
+ *
141
+ * You should have received a copy of the GNU General Public License along with
142
+ * this program. If not, see <http://www.gnu.org/licenses/>.
143
+ */
144
+
145
+#include "qemu/osdep.h"
146
+#include "qemu/host-utils.h"
147
+#include "qemu/bitops.h"
148
+#include "cpu.h"
149
+#include "exec/memop.h"
150
+#include "exec/exec-all.h"
151
+#include "exec/helper-proto.h"
152
+#include "internals.h"
153
+#include "vector_internals.h"
154
+
155
+static uint64_t clmul64(uint64_t y, uint64_t x)
156
+{
157
+ uint64_t result = 0;
158
+ for (int j = 63; j >= 0; j--) {
159
+ if ((y >> j) & 1) {
160
+ result ^= (x << j);
161
+ }
162
+ }
163
+ return result;
164
+}
165
+
166
+static uint64_t clmulh64(uint64_t y, uint64_t x)
167
+{
168
+ uint64_t result = 0;
169
+ for (int j = 63; j >= 1; j--) {
170
+ if ((y >> j) & 1) {
171
+ result ^= (x >> (64 - j));
172
+ }
173
+ }
174
+ return result;
175
+}
176
+
177
+RVVCALL(OPIVV2, vclmul_vv, OP_UUU_D, H8, H8, H8, clmul64)
178
+GEN_VEXT_VV(vclmul_vv, 8)
179
+RVVCALL(OPIVX2, vclmul_vx, OP_UUU_D, H8, H8, clmul64)
180
+GEN_VEXT_VX(vclmul_vx, 8)
181
+RVVCALL(OPIVV2, vclmulh_vv, OP_UUU_D, H8, H8, H8, clmulh64)
182
+GEN_VEXT_VV(vclmulh_vv, 8)
183
+RVVCALL(OPIVX2, vclmulh_vx, OP_UUU_D, H8, H8, clmulh64)
184
+GEN_VEXT_VX(vclmulh_vx, 8)
185
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
186
new file mode 100644
187
index XXXXXXX..XXXXXXX
188
--- /dev/null
189
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
190
@@ -XXX,XX +XXX,XX @@
191
+/*
192
+ * RISC-V translation routines for the vector crypto extension.
193
+ *
194
+ * Copyright (C) 2023 SiFive, Inc.
195
+ * Written by Codethink Ltd and SiFive.
196
+ *
197
+ * This program is free software; you can redistribute it and/or modify it
198
+ * under the terms and conditions of the GNU General Public License,
199
+ * version 2 or later, as published by the Free Software Foundation.
200
+ *
201
+ * This program is distributed in the hope it will be useful, but WITHOUT
202
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
203
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
204
+ * more details.
205
+ *
206
+ * You should have received a copy of the GNU General Public License along with
207
+ * this program. If not, see <http://www.gnu.org/licenses/>.
208
+ */
209
+
210
+/*
211
+ * Zvbc
212
+ */
213
+
214
+#define GEN_VV_MASKED_TRANS(NAME, CHECK) \
215
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
216
+ { \
217
+ if (CHECK(s, a)) { \
218
+ return opivv_trans(a->rd, a->rs1, a->rs2, a->vm, \
219
+ gen_helper_##NAME, s); \
220
+ } \
221
+ return false; \
222
+ }
223
+
224
+static bool vclmul_vv_check(DisasContext *s, arg_rmrr *a)
225
+{
226
+ return opivv_check(s, a) &&
227
+ s->cfg_ptr->ext_zvbc == true &&
228
+ s->sew == MO_64;
229
+}
230
+
231
+GEN_VV_MASKED_TRANS(vclmul_vv, vclmul_vv_check)
232
+GEN_VV_MASKED_TRANS(vclmulh_vv, vclmul_vv_check)
233
+
234
+#define GEN_VX_MASKED_TRANS(NAME, CHECK) \
235
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
236
+ { \
237
+ if (CHECK(s, a)) { \
238
+ return opivx_trans(a->rd, a->rs1, a->rs2, a->vm, \
239
+ gen_helper_##NAME, s); \
240
+ } \
241
+ return false; \
242
+ }
243
+
244
+static bool vclmul_vx_check(DisasContext *s, arg_rmrr *a)
245
+{
246
+ return opivx_check(s, a) &&
247
+ s->cfg_ptr->ext_zvbc == true &&
248
+ s->sew == MO_64;
249
+}
250
+
251
+GEN_VX_MASKED_TRANS(vclmul_vx, vclmul_vx_check)
252
+GEN_VX_MASKED_TRANS(vclmulh_vx, vclmul_vx_check)
253
diff --git a/target/riscv/meson.build b/target/riscv/meson.build
254
index XXXXXXX..XXXXXXX 100644
255
--- a/target/riscv/meson.build
256
+++ b/target/riscv/meson.build
257
@@ -XXX,XX +XXX,XX @@ riscv_ss.add(files(
258
'translate.c',
259
'm128_helper.c',
260
'crypto_helper.c',
261
- 'zce_helper.c'
262
+ 'zce_helper.c',
263
+ 'vcrypto_helper.c'
264
))
265
riscv_ss.add(when: 'CONFIG_KVM', if_true: files('kvm.c'), if_false: files('kvm-stub.c'))
34
266
35
--
267
--
36
2.35.1
268
2.41.0
1
From: Bin Meng <bin.meng@windriver.com>
1
From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
2
2
3
This is now used by RISC-V as well. Update the comments.
3
Move the checks out of `do_opiv{v,x,i}_gvec{,_shift}` functions
4
and into the corresponding macros. This enables the functions to be
5
reused in subsequent commits without check duplication.
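
With the checks hoisted, a GEN_OPIVX_GVEC_TRANS user such as vadd_vx expands to roughly the following (sketch, not the literal preprocessor output):

    static bool trans_vadd_vx(DisasContext *s, arg_rmrr *a)
    {
        static gen_helper_opivx * const fns[4] = {
            gen_helper_vadd_vx_b, gen_helper_vadd_vx_h,
            gen_helper_vadd_vx_w, gen_helper_vadd_vx_d,
        };
        /* The check now lives in the macro expansion... */
        if (!opivx_check(s, a)) {
            return false;
        }
        /* ...so do_opivx_gvec() no longer repeats it and other callers
         * can supply their own check. */
        return do_opivx_gvec(s, a, tcg_gen_gvec_adds, fns[s->sew]);
    }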
4
6
5
Signed-off-by: Bin Meng <bin.meng@windriver.com>
7
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
9
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
8
Message-Id: <20220421003324.1134983-7-bmeng.cn@gmail.com>
10
Signed-off-by: Max Chou <max.chou@sifive.com>
11
Message-ID: <20230711165917.2629866-6-max.chou@sifive.com>
9
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
12
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
10
---
13
---
11
include/hw/core/tcg-cpu-ops.h | 1 +
14
target/riscv/insn_trans/trans_rvv.c.inc | 28 +++++++++++--------------
12
1 file changed, 1 insertion(+)
15
1 file changed, 12 insertions(+), 16 deletions(-)
13
16
14
diff --git a/include/hw/core/tcg-cpu-ops.h b/include/hw/core/tcg-cpu-ops.h
17
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
15
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
16
--- a/include/hw/core/tcg-cpu-ops.h
19
--- a/target/riscv/insn_trans/trans_rvv.c.inc
17
+++ b/include/hw/core/tcg-cpu-ops.h
20
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
18
@@ -XXX,XX +XXX,XX @@ struct TCGCPUOps {
21
@@ -XXX,XX +XXX,XX @@ do_opivv_gvec(DisasContext *s, arg_rmrr *a, GVecGen3Fn *gvec_fn,
19
/**
22
gen_helper_gvec_4_ptr *fn)
20
* @debug_check_watchpoint: return true if the architectural
23
{
21
* watchpoint whose address has matched should really fire, used by ARM
24
TCGLabel *over = gen_new_label();
22
+ * and RISC-V
25
- if (!opivv_check(s, a)) {
23
*/
26
- return false;
24
bool (*debug_check_watchpoint)(CPUState *cpu, CPUWatchpoint *wp);
27
- }
28
29
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
30
31
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
32
gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
33
gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
34
}; \
35
+ if (!opivv_check(s, a)) { \
36
+ return false; \
37
+ } \
38
return do_opivv_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
39
}
40
41
@@ -XXX,XX +XXX,XX @@ static inline bool
42
do_opivx_gvec(DisasContext *s, arg_rmrr *a, GVecGen2sFn *gvec_fn,
43
gen_helper_opivx *fn)
44
{
45
- if (!opivx_check(s, a)) {
46
- return false;
47
- }
48
-
49
if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
50
TCGv_i64 src1 = tcg_temp_new_i64();
51
52
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
53
gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
54
gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
55
}; \
56
+ if (!opivx_check(s, a)) { \
57
+ return false; \
58
+ } \
59
return do_opivx_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
60
}
61
62
@@ -XXX,XX +XXX,XX @@ static inline bool
63
do_opivi_gvec(DisasContext *s, arg_rmrr *a, GVecGen2iFn *gvec_fn,
64
gen_helper_opivx *fn, imm_mode_t imm_mode)
65
{
66
- if (!opivx_check(s, a)) {
67
- return false;
68
- }
69
-
70
if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
71
gvec_fn(s->sew, vreg_ofs(s, a->rd), vreg_ofs(s, a->rs2),
72
extract_imm(s, a->rs1, imm_mode), MAXSZ(s), MAXSZ(s));
73
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
74
gen_helper_##OPIVX##_b, gen_helper_##OPIVX##_h, \
75
gen_helper_##OPIVX##_w, gen_helper_##OPIVX##_d, \
76
}; \
77
+ if (!opivx_check(s, a)) { \
78
+ return false; \
79
+ } \
80
return do_opivi_gvec(s, a, tcg_gen_gvec_##SUF, \
81
fns[s->sew], IMM_MODE); \
82
}
83
@@ -XXX,XX +XXX,XX @@ static inline bool
84
do_opivx_gvec_shift(DisasContext *s, arg_rmrr *a, GVecGen2sFn32 *gvec_fn,
85
gen_helper_opivx *fn)
86
{
87
- if (!opivx_check(s, a)) {
88
- return false;
89
- }
90
-
91
if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
92
TCGv_i32 src1 = tcg_temp_new_i32();
93
94
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
95
gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
96
gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
97
}; \
98
- \
99
+ if (!opivx_check(s, a)) { \
100
+ return false; \
101
+ } \
102
return do_opivx_gvec_shift(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
103
}
25
104
26
--
105
--
27
2.35.1
106
2.41.0
1
From: Weiwei Li <liweiwei@iscas.ac.cn>
1
From: Dickon Hood <dickon.hood@codethink.co.uk>
2
2
3
For these cases, scale is always less than or equal to 0, since lmul is never larger than 3.
3
Zvbb (implemented in a later commit) has a widening instruction, which
4
requires an extra check on the enabled extensions. Refactor
5
GEN_OPIVX_WIDEN_TRANS() to take a check function to avoid reimplementing
6
it.
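
A sketch of the intended use; the vwsll_vx names and the exact condition shown here are assumptions, since the real definitions only arrive with the Zvbb patch later in the series:

    /* Hypothetical Zvbb check: the widening shift needs the usual widening
     * constraints plus the Zvbb extension. */
    static bool vwsll_vx_check(DisasContext *s, arg_rmrr *a)
    {
        return s->cfg_ptr->ext_zvbb && opivx_widen_check(s, a);
    }

    GEN_OPIVX_WIDEN_TRANS(vwsll_vx, vwsll_vx_check)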
4
7
5
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
8
Signed-off-by: Dickon Hood <dickon.hood@codethink.co.uk>
6
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Frank Chang <frank.chang@sifive.com>
10
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
8
Acked-by: Alistair Francis <alistair.francis@wdc.com>
11
Signed-off-by: Max Chou <max.chou@sifive.com>
9
Message-Id: <20220325085902.29500-1-liweiwei@iscas.ac.cn>
12
Message-ID: <20230711165917.2629866-7-max.chou@sifive.com>
10
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
13
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
11
---
14
---
12
target/riscv/insn_trans/trans_rvv.c.inc | 8 +++-----
15
target/riscv/insn_trans/trans_rvv.c.inc | 52 +++++++++++--------------
13
1 file changed, 3 insertions(+), 5 deletions(-)
16
1 file changed, 23 insertions(+), 29 deletions(-)
14
17
15
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
18
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
16
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
17
--- a/target/riscv/insn_trans/trans_rvv.c.inc
20
--- a/target/riscv/insn_trans/trans_rvv.c.inc
18
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
21
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
19
@@ -XXX,XX +XXX,XX @@ GEN_LDST_WHOLE_TRANS(vs8r_v, 8, true)
22
@@ -XXX,XX +XXX,XX @@ static bool opivx_widen_check(DisasContext *s, arg_rmrr *a)
20
static inline uint32_t MAXSZ(DisasContext *s)
23
vext_check_ds(s, a->rd, a->rs2, a->vm);
21
{
22
int scale = s->lmul - 3;
23
- return scale < 0 ? s->cfg_ptr->vlen >> -scale : s->cfg_ptr->vlen << scale;
24
+ return s->cfg_ptr->vlen >> -scale;
25
}
24
}
26
25
27
static bool opivv_check(DisasContext *s, arg_rmrr *a)
26
-static bool do_opivx_widen(DisasContext *s, arg_rmrr *a,
28
@@ -XXX,XX +XXX,XX @@ static bool trans_vrgather_vx(DisasContext *s, arg_rmrr *a)
27
- gen_helper_opivx *fn)
29
28
-{
30
if (a->vm && s->vl_eq_vlmax) {
29
- if (opivx_widen_check(s, a)) {
31
int scale = s->lmul - (s->sew + 3);
30
- return opivx_trans(a->rd, a->rs1, a->rs2, a->vm, fn, s);
32
- int vlmax = scale < 0 ?
31
- }
33
- s->cfg_ptr->vlen >> -scale : s->cfg_ptr->vlen << scale;
32
- return false;
34
+ int vlmax = s->cfg_ptr->vlen >> -scale;
33
-}
35
TCGv_i64 dest = tcg_temp_new_i64();
34
-
36
35
-#define GEN_OPIVX_WIDEN_TRANS(NAME) \
37
if (a->rs1 == 0) {
36
-static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
38
@@ -XXX,XX +XXX,XX @@ static bool trans_vrgather_vi(DisasContext *s, arg_rmrr *a)
37
-{ \
39
38
- static gen_helper_opivx * const fns[3] = { \
40
if (a->vm && s->vl_eq_vlmax) {
39
- gen_helper_##NAME##_b, \
41
int scale = s->lmul - (s->sew + 3);
40
- gen_helper_##NAME##_h, \
42
- int vlmax = scale < 0 ?
41
- gen_helper_##NAME##_w \
43
- s->cfg_ptr->vlen >> -scale : s->cfg_ptr->vlen << scale;
42
- }; \
44
+ int vlmax = s->cfg_ptr->vlen >> -scale;
43
- return do_opivx_widen(s, a, fns[s->sew]); \
45
if (a->rs1 >= vlmax) {
44
+#define GEN_OPIVX_WIDEN_TRANS(NAME, CHECK) \
46
tcg_gen_gvec_dup_imm(MO_64, vreg_ofs(s, a->rd),
45
+static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
47
MAXSZ(s), MAXSZ(s), 0);
46
+{ \
47
+ if (CHECK(s, a)) { \
48
+ static gen_helper_opivx * const fns[3] = { \
49
+ gen_helper_##NAME##_b, \
50
+ gen_helper_##NAME##_h, \
51
+ gen_helper_##NAME##_w \
52
+ }; \
53
+ return opivx_trans(a->rd, a->rs1, a->rs2, a->vm, fns[s->sew], s); \
54
+ } \
55
+ return false; \
56
}
57
58
-GEN_OPIVX_WIDEN_TRANS(vwaddu_vx)
59
-GEN_OPIVX_WIDEN_TRANS(vwadd_vx)
60
-GEN_OPIVX_WIDEN_TRANS(vwsubu_vx)
61
-GEN_OPIVX_WIDEN_TRANS(vwsub_vx)
62
+GEN_OPIVX_WIDEN_TRANS(vwaddu_vx, opivx_widen_check)
63
+GEN_OPIVX_WIDEN_TRANS(vwadd_vx, opivx_widen_check)
64
+GEN_OPIVX_WIDEN_TRANS(vwsubu_vx, opivx_widen_check)
65
+GEN_OPIVX_WIDEN_TRANS(vwsub_vx, opivx_widen_check)
66
67
/* WIDEN OPIVV with WIDEN */
68
static bool opiwv_widen_check(DisasContext *s, arg_rmrr *a)
69
@@ -XXX,XX +XXX,XX @@ GEN_OPIVX_TRANS(vrem_vx, opivx_check)
70
GEN_OPIVV_WIDEN_TRANS(vwmul_vv, opivv_widen_check)
71
GEN_OPIVV_WIDEN_TRANS(vwmulu_vv, opivv_widen_check)
72
GEN_OPIVV_WIDEN_TRANS(vwmulsu_vv, opivv_widen_check)
73
-GEN_OPIVX_WIDEN_TRANS(vwmul_vx)
74
-GEN_OPIVX_WIDEN_TRANS(vwmulu_vx)
75
-GEN_OPIVX_WIDEN_TRANS(vwmulsu_vx)
76
+GEN_OPIVX_WIDEN_TRANS(vwmul_vx, opivx_widen_check)
77
+GEN_OPIVX_WIDEN_TRANS(vwmulu_vx, opivx_widen_check)
78
+GEN_OPIVX_WIDEN_TRANS(vwmulsu_vx, opivx_widen_check)
79
80
/* Vector Single-Width Integer Multiply-Add Instructions */
81
GEN_OPIVV_TRANS(vmacc_vv, opivv_check)
82
@@ -XXX,XX +XXX,XX @@ GEN_OPIVX_TRANS(vnmsub_vx, opivx_check)
83
GEN_OPIVV_WIDEN_TRANS(vwmaccu_vv, opivv_widen_check)
84
GEN_OPIVV_WIDEN_TRANS(vwmacc_vv, opivv_widen_check)
85
GEN_OPIVV_WIDEN_TRANS(vwmaccsu_vv, opivv_widen_check)
86
-GEN_OPIVX_WIDEN_TRANS(vwmaccu_vx)
87
-GEN_OPIVX_WIDEN_TRANS(vwmacc_vx)
88
-GEN_OPIVX_WIDEN_TRANS(vwmaccsu_vx)
89
-GEN_OPIVX_WIDEN_TRANS(vwmaccus_vx)
90
+GEN_OPIVX_WIDEN_TRANS(vwmaccu_vx, opivx_widen_check)
91
+GEN_OPIVX_WIDEN_TRANS(vwmacc_vx, opivx_widen_check)
92
+GEN_OPIVX_WIDEN_TRANS(vwmaccsu_vx, opivx_widen_check)
93
+GEN_OPIVX_WIDEN_TRANS(vwmaccus_vx, opivx_widen_check)
94
95
/* Vector Integer Merge and Move Instructions */
96
static bool trans_vmv_v_v(DisasContext *s, arg_vmv_v_v *a)
48
--
97
--
49
2.35.1
98
2.41.0
New patch
1
From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
1
2
3
Move some macros out of `vector_helper` and into `vector_internals`.
4
This ensures they can be used by both vector and vector-crypto helpers
5
(the latter implemented in subsequent commits).
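
As an example of what the move enables, a vector-crypto helper can now be generated from the shared macros alone. This is a sketch: vbrev8_v_b and its DEF_HELPER_5 declaration are assumed to come from the later Zvbb patch.

    #include "vector_internals.h"

    /* Reverse the bit order within one byte. */
    static uint8_t brev8(uint8_t x)
    {
        x = ((x & 0x55) << 1) | ((x & 0xaa) >> 1);
        x = ((x & 0x33) << 2) | ((x & 0xcc) >> 2);
        x = ((x & 0x0f) << 4) | ((x & 0xf0) >> 4);
        return x;
    }

    RVVCALL(OPIVV1, vbrev8_v_b, OP_UU_B, H1, H1, brev8)
    GEN_VEXT_V(vbrev8_v_b, 1)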
6
7
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
8
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
9
Signed-off-by: Max Chou <max.chou@sifive.com>
10
Message-ID: <20230711165917.2629866-8-max.chou@sifive.com>
11
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
12
---
13
target/riscv/vector_internals.h | 46 +++++++++++++++++++++++++++++++++
14
target/riscv/vector_helper.c | 42 ------------------------------
15
2 files changed, 46 insertions(+), 42 deletions(-)
16
17
diff --git a/target/riscv/vector_internals.h b/target/riscv/vector_internals.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/riscv/vector_internals.h
20
+++ b/target/riscv/vector_internals.h
21
@@ -XXX,XX +XXX,XX @@ void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
22
/* expand macro args before macro */
23
#define RVVCALL(macro, ...) macro(__VA_ARGS__)
24
25
+/* (TD, T2, TX2) */
26
+#define OP_UU_B uint8_t, uint8_t, uint8_t
27
+#define OP_UU_H uint16_t, uint16_t, uint16_t
28
+#define OP_UU_W uint32_t, uint32_t, uint32_t
29
+#define OP_UU_D uint64_t, uint64_t, uint64_t
30
+
31
/* (TD, T1, T2, TX1, TX2) */
32
#define OP_UUU_B uint8_t, uint8_t, uint8_t, uint8_t, uint8_t
33
#define OP_UUU_H uint16_t, uint16_t, uint16_t, uint16_t, uint16_t
34
#define OP_UUU_W uint32_t, uint32_t, uint32_t, uint32_t, uint32_t
35
#define OP_UUU_D uint64_t, uint64_t, uint64_t, uint64_t, uint64_t
36
37
+#define OPIVV1(NAME, TD, T2, TX2, HD, HS2, OP) \
38
+static void do_##NAME(void *vd, void *vs2, int i) \
39
+{ \
40
+ TX2 s2 = *((T2 *)vs2 + HS2(i)); \
41
+ *((TD *)vd + HD(i)) = OP(s2); \
42
+}
43
+
44
+#define GEN_VEXT_V(NAME, ESZ) \
45
+void HELPER(NAME)(void *vd, void *v0, void *vs2, \
46
+ CPURISCVState *env, uint32_t desc) \
47
+{ \
48
+ uint32_t vm = vext_vm(desc); \
49
+ uint32_t vl = env->vl; \
50
+ uint32_t total_elems = \
51
+ vext_get_total_elems(env, desc, ESZ); \
52
+ uint32_t vta = vext_vta(desc); \
53
+ uint32_t vma = vext_vma(desc); \
54
+ uint32_t i; \
55
+ \
56
+ for (i = env->vstart; i < vl; i++) { \
57
+ if (!vm && !vext_elem_mask(v0, i)) { \
58
+ /* set masked-off elements to 1s */ \
59
+ vext_set_elems_1s(vd, vma, i * ESZ, \
60
+ (i + 1) * ESZ); \
61
+ continue; \
62
+ } \
63
+ do_##NAME(vd, vs2, i); \
64
+ } \
65
+ env->vstart = 0; \
66
+ /* set tail elements to 1s */ \
67
+ vext_set_elems_1s(vd, vta, vl * ESZ, \
68
+ total_elems * ESZ); \
69
+}
70
+
71
/* operation of two vector elements */
72
typedef void opivv2_fn(void *vd, void *vs1, void *vs2, int i);
73
74
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \
75
do_##NAME, ESZ); \
76
}
77
78
+/* Three of the widening shortening macros: */
79
+/* (TD, T1, T2, TX1, TX2) */
80
+#define WOP_UUU_B uint16_t, uint8_t, uint8_t, uint16_t, uint16_t
81
+#define WOP_UUU_H uint32_t, uint16_t, uint16_t, uint32_t, uint32_t
82
+#define WOP_UUU_W uint64_t, uint32_t, uint32_t, uint64_t, uint64_t
83
+
84
#endif /* TARGET_RISCV_VECTOR_INTERNALS_H */
85
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
86
index XXXXXXX..XXXXXXX 100644
87
--- a/target/riscv/vector_helper.c
88
+++ b/target/riscv/vector_helper.c
89
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b)
90
#define OP_SUS_H int16_t, uint16_t, int16_t, uint16_t, int16_t
91
#define OP_SUS_W int32_t, uint32_t, int32_t, uint32_t, int32_t
92
#define OP_SUS_D int64_t, uint64_t, int64_t, uint64_t, int64_t
93
-#define WOP_UUU_B uint16_t, uint8_t, uint8_t, uint16_t, uint16_t
94
-#define WOP_UUU_H uint32_t, uint16_t, uint16_t, uint32_t, uint32_t
95
-#define WOP_UUU_W uint64_t, uint32_t, uint32_t, uint64_t, uint64_t
96
#define WOP_SSS_B int16_t, int8_t, int8_t, int16_t, int16_t
97
#define WOP_SSS_H int32_t, int16_t, int16_t, int32_t, int32_t
98
#define WOP_SSS_W int64_t, int32_t, int32_t, int64_t, int64_t
99
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_VF(vfwnmsac_vf_h, 4)
100
GEN_VEXT_VF(vfwnmsac_vf_w, 8)
101
102
/* Vector Floating-Point Square-Root Instruction */
103
-/* (TD, T2, TX2) */
104
-#define OP_UU_H uint16_t, uint16_t, uint16_t
105
-#define OP_UU_W uint32_t, uint32_t, uint32_t
106
-#define OP_UU_D uint64_t, uint64_t, uint64_t
107
-
108
#define OPFVV1(NAME, TD, T2, TX2, HD, HS2, OP) \
109
static void do_##NAME(void *vd, void *vs2, int i, \
110
CPURISCVState *env) \
111
@@ -XXX,XX +XXX,XX @@ GEN_VEXT_CMP_VF(vmfge_vf_w, uint32_t, H4, vmfge32)
112
GEN_VEXT_CMP_VF(vmfge_vf_d, uint64_t, H8, vmfge64)
113
114
/* Vector Floating-Point Classify Instruction */
115
-#define OPIVV1(NAME, TD, T2, TX2, HD, HS2, OP) \
116
-static void do_##NAME(void *vd, void *vs2, int i) \
117
-{ \
118
- TX2 s2 = *((T2 *)vs2 + HS2(i)); \
119
- *((TD *)vd + HD(i)) = OP(s2); \
120
-}
121
-
122
-#define GEN_VEXT_V(NAME, ESZ) \
123
-void HELPER(NAME)(void *vd, void *v0, void *vs2, \
124
- CPURISCVState *env, uint32_t desc) \
125
-{ \
126
- uint32_t vm = vext_vm(desc); \
127
- uint32_t vl = env->vl; \
128
- uint32_t total_elems = \
129
- vext_get_total_elems(env, desc, ESZ); \
130
- uint32_t vta = vext_vta(desc); \
131
- uint32_t vma = vext_vma(desc); \
132
- uint32_t i; \
133
- \
134
- for (i = env->vstart; i < vl; i++) { \
135
- if (!vm && !vext_elem_mask(v0, i)) { \
136
- /* set masked-off elements to 1s */ \
137
- vext_set_elems_1s(vd, vma, i * ESZ, \
138
- (i + 1) * ESZ); \
139
- continue; \
140
- } \
141
- do_##NAME(vd, vs2, i); \
142
- } \
143
- env->vstart = 0; \
144
- /* set tail elements to 1s */ \
145
- vext_set_elems_1s(vd, vta, vl * ESZ, \
146
- total_elems * ESZ); \
147
-}
148
-
149
target_ulong fclass_h(uint64_t frs1)
150
{
151
float16 f = frs1;
152
--
153
2.41.0
1
From: Wilfred Mallawa <wilfred.mallawa@wdc.com>
1
From: Dickon Hood <dickon.hood@codethink.co.uk>
2
2
3
Add the SPI_HOST device model for Ibex. The device specification is as per
3
This commit adds support for the Zvbb vector-crypto extension, which
4
[1]. The model has been tested on OpenTitan with spi_host unit tests
4
consists of the following instructions:
5
written for TockOS.
6
5
7
[1] https://docs.opentitan.org/hw/ip/spi_host/doc/
6
* vrol.[vv,vx]
7
* vror.[vv,vx,vi]
8
* vbrev8.v
9
* vrev8.v
10
* vandn.[vv,vx]
11
* vbrev.v
12
* vclz.v
13
* vctz.v
14
* vcpop.v
15
* vwsll.[vv,vx,vi]
8
16
9
Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
17
Translation functions are defined in
10
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
18
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
11
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
19
`target/riscv/vcrypto_helper.c`.
12
Message-Id: <20220303045426.511588-1-alistair.francis@opensource.wdc.com>
20
21
Co-authored-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
22
Co-authored-by: William Salmon <will.salmon@codethink.co.uk>
23
Co-authored-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
24
[max.chou@sifive.com: Fix imm mode of vror.vi]
25
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
26
Signed-off-by: William Salmon <will.salmon@codethink.co.uk>
27
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
28
Signed-off-by: Dickon Hood <dickon.hood@codethink.co.uk>
29
Signed-off-by: Max Chou <max.chou@sifive.com>
30
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
31
[max.chou@sifive.com: Exposed x-zvbb property]
32
Message-ID: <20230711165917.2629866-9-max.chou@sifive.com>
13
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
33
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
14
---
34
---
15
include/hw/ssi/ibex_spi_host.h | 94 +++++
35
target/riscv/cpu_cfg.h | 1 +
16
hw/ssi/ibex_spi_host.c | 612 +++++++++++++++++++++++++++++++++
36
target/riscv/helper.h | 62 +++++++++
17
hw/ssi/meson.build | 1 +
37
target/riscv/insn32.decode | 20 +++
18
hw/ssi/trace-events | 7 +
38
target/riscv/cpu.c | 12 ++
19
4 files changed, 714 insertions(+)
39
target/riscv/vcrypto_helper.c | 138 +++++++++++++++++++
20
create mode 100644 include/hw/ssi/ibex_spi_host.h
40
target/riscv/insn_trans/trans_rvvk.c.inc | 164 +++++++++++++++++++++++
21
create mode 100644 hw/ssi/ibex_spi_host.c
41
6 files changed, 397 insertions(+)
22
42
23
diff --git a/include/hw/ssi/ibex_spi_host.h b/include/hw/ssi/ibex_spi_host.h
43
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
24
new file mode 100644
44
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX
45
--- a/target/riscv/cpu_cfg.h
26
--- /dev/null
46
+++ b/target/riscv/cpu_cfg.h
27
+++ b/include/hw/ssi/ibex_spi_host.h
47
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
48
bool ext_zve32f;
49
bool ext_zve64f;
50
bool ext_zve64d;
51
+ bool ext_zvbb;
52
bool ext_zvbc;
53
bool ext_zmmul;
54
bool ext_zvfbfmin;
55
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
56
index XXXXXXX..XXXXXXX 100644
57
--- a/target/riscv/helper.h
58
+++ b/target/riscv/helper.h
59
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_6(vclmul_vv, void, ptr, ptr, ptr, ptr, env, i32)
60
DEF_HELPER_6(vclmul_vx, void, ptr, ptr, tl, ptr, env, i32)
61
DEF_HELPER_6(vclmulh_vv, void, ptr, ptr, ptr, ptr, env, i32)
62
DEF_HELPER_6(vclmulh_vx, void, ptr, ptr, tl, ptr, env, i32)
63
+
64
+DEF_HELPER_6(vror_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
65
+DEF_HELPER_6(vror_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
66
+DEF_HELPER_6(vror_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
67
+DEF_HELPER_6(vror_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
68
+
69
+DEF_HELPER_6(vror_vx_b, void, ptr, ptr, tl, ptr, env, i32)
70
+DEF_HELPER_6(vror_vx_h, void, ptr, ptr, tl, ptr, env, i32)
71
+DEF_HELPER_6(vror_vx_w, void, ptr, ptr, tl, ptr, env, i32)
72
+DEF_HELPER_6(vror_vx_d, void, ptr, ptr, tl, ptr, env, i32)
73
+
74
+DEF_HELPER_6(vrol_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
75
+DEF_HELPER_6(vrol_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
76
+DEF_HELPER_6(vrol_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
77
+DEF_HELPER_6(vrol_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
78
+
79
+DEF_HELPER_6(vrol_vx_b, void, ptr, ptr, tl, ptr, env, i32)
80
+DEF_HELPER_6(vrol_vx_h, void, ptr, ptr, tl, ptr, env, i32)
81
+DEF_HELPER_6(vrol_vx_w, void, ptr, ptr, tl, ptr, env, i32)
82
+DEF_HELPER_6(vrol_vx_d, void, ptr, ptr, tl, ptr, env, i32)
83
+
84
+DEF_HELPER_5(vrev8_v_b, void, ptr, ptr, ptr, env, i32)
85
+DEF_HELPER_5(vrev8_v_h, void, ptr, ptr, ptr, env, i32)
86
+DEF_HELPER_5(vrev8_v_w, void, ptr, ptr, ptr, env, i32)
87
+DEF_HELPER_5(vrev8_v_d, void, ptr, ptr, ptr, env, i32)
88
+DEF_HELPER_5(vbrev8_v_b, void, ptr, ptr, ptr, env, i32)
89
+DEF_HELPER_5(vbrev8_v_h, void, ptr, ptr, ptr, env, i32)
90
+DEF_HELPER_5(vbrev8_v_w, void, ptr, ptr, ptr, env, i32)
91
+DEF_HELPER_5(vbrev8_v_d, void, ptr, ptr, ptr, env, i32)
92
+DEF_HELPER_5(vbrev_v_b, void, ptr, ptr, ptr, env, i32)
93
+DEF_HELPER_5(vbrev_v_h, void, ptr, ptr, ptr, env, i32)
94
+DEF_HELPER_5(vbrev_v_w, void, ptr, ptr, ptr, env, i32)
95
+DEF_HELPER_5(vbrev_v_d, void, ptr, ptr, ptr, env, i32)
96
+
97
+DEF_HELPER_5(vclz_v_b, void, ptr, ptr, ptr, env, i32)
98
+DEF_HELPER_5(vclz_v_h, void, ptr, ptr, ptr, env, i32)
99
+DEF_HELPER_5(vclz_v_w, void, ptr, ptr, ptr, env, i32)
100
+DEF_HELPER_5(vclz_v_d, void, ptr, ptr, ptr, env, i32)
101
+DEF_HELPER_5(vctz_v_b, void, ptr, ptr, ptr, env, i32)
102
+DEF_HELPER_5(vctz_v_h, void, ptr, ptr, ptr, env, i32)
103
+DEF_HELPER_5(vctz_v_w, void, ptr, ptr, ptr, env, i32)
104
+DEF_HELPER_5(vctz_v_d, void, ptr, ptr, ptr, env, i32)
105
+DEF_HELPER_5(vcpop_v_b, void, ptr, ptr, ptr, env, i32)
106
+DEF_HELPER_5(vcpop_v_h, void, ptr, ptr, ptr, env, i32)
107
+DEF_HELPER_5(vcpop_v_w, void, ptr, ptr, ptr, env, i32)
108
+DEF_HELPER_5(vcpop_v_d, void, ptr, ptr, ptr, env, i32)
109
+
110
+DEF_HELPER_6(vwsll_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
111
+DEF_HELPER_6(vwsll_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
112
+DEF_HELPER_6(vwsll_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
113
+DEF_HELPER_6(vwsll_vx_b, void, ptr, ptr, tl, ptr, env, i32)
114
+DEF_HELPER_6(vwsll_vx_h, void, ptr, ptr, tl, ptr, env, i32)
115
+DEF_HELPER_6(vwsll_vx_w, void, ptr, ptr, tl, ptr, env, i32)
116
+
117
+DEF_HELPER_6(vandn_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
118
+DEF_HELPER_6(vandn_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
119
+DEF_HELPER_6(vandn_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
120
+DEF_HELPER_6(vandn_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
121
+DEF_HELPER_6(vandn_vx_b, void, ptr, ptr, tl, ptr, env, i32)
122
+DEF_HELPER_6(vandn_vx_h, void, ptr, ptr, tl, ptr, env, i32)
123
+DEF_HELPER_6(vandn_vx_w, void, ptr, ptr, tl, ptr, env, i32)
124
+DEF_HELPER_6(vandn_vx_d, void, ptr, ptr, tl, ptr, env, i32)
125
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
126
index XXXXXXX..XXXXXXX 100644
127
--- a/target/riscv/insn32.decode
128
+++ b/target/riscv/insn32.decode
28
@@ -XXX,XX +XXX,XX @@
129
@@ -XXX,XX +XXX,XX @@
130
%imm_u 12:s20 !function=ex_shift_12
131
%imm_bs 30:2 !function=ex_shift_3
132
%imm_rnum 20:4
133
+%imm_z6 26:1 15:5
134
135
# Argument sets:
136
&empty
137
@@ -XXX,XX +XXX,XX @@
138
@r_vm ...... vm:1 ..... ..... ... ..... ....... &rmrr %rs2 %rs1 %rd
139
@r_vm_1 ...... . ..... ..... ... ..... ....... &rmrr vm=1 %rs2 %rs1 %rd
140
@r_vm_0 ...... . ..... ..... ... ..... ....... &rmrr vm=0 %rs2 %rs1 %rd
141
+@r2_zimm6 ..... . vm:1 ..... ..... ... ..... ....... &rmrr %rs2 rs1=%imm_z6 %rd
142
@r2_zimm11 . zimm:11 ..... ... ..... ....... %rs1 %rd
143
@r2_zimm10 .. zimm:10 ..... ... ..... ....... %rs1 %rd
144
@r2_s ....... ..... ..... ... ..... ....... %rs2 %rs1
145
@@ -XXX,XX +XXX,XX @@ vclmul_vv 001100 . ..... ..... 010 ..... 1010111 @r_vm
146
vclmul_vx 001100 . ..... ..... 110 ..... 1010111 @r_vm
147
vclmulh_vv 001101 . ..... ..... 010 ..... 1010111 @r_vm
148
vclmulh_vx 001101 . ..... ..... 110 ..... 1010111 @r_vm
149
+
150
+# *** Zvbb vector crypto extension ***
151
+vrol_vv 010101 . ..... ..... 000 ..... 1010111 @r_vm
152
+vrol_vx 010101 . ..... ..... 100 ..... 1010111 @r_vm
153
+vror_vv 010100 . ..... ..... 000 ..... 1010111 @r_vm
154
+vror_vx 010100 . ..... ..... 100 ..... 1010111 @r_vm
155
+vror_vi 01010. . ..... ..... 011 ..... 1010111 @r2_zimm6
156
+vbrev8_v 010010 . ..... 01000 010 ..... 1010111 @r2_vm
157
+vrev8_v 010010 . ..... 01001 010 ..... 1010111 @r2_vm
158
+vandn_vv 000001 . ..... ..... 000 ..... 1010111 @r_vm
159
+vandn_vx 000001 . ..... ..... 100 ..... 1010111 @r_vm
160
+vbrev_v 010010 . ..... 01010 010 ..... 1010111 @r2_vm
161
+vclz_v 010010 . ..... 01100 010 ..... 1010111 @r2_vm
162
+vctz_v 010010 . ..... 01101 010 ..... 1010111 @r2_vm
163
+vcpop_v 010010 . ..... 01110 010 ..... 1010111 @r2_vm
164
+vwsll_vv 110101 . ..... ..... 000 ..... 1010111 @r_vm
165
+vwsll_vx 110101 . ..... ..... 100 ..... 1010111 @r_vm
166
+vwsll_vi 110101 . ..... ..... 011 ..... 1010111 @r_vm
167
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
168
index XXXXXXX..XXXXXXX 100644
169
--- a/target/riscv/cpu.c
170
+++ b/target/riscv/cpu.c
171
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
172
ISA_EXT_DATA_ENTRY(zksed, PRIV_VERSION_1_12_0, ext_zksed),
173
ISA_EXT_DATA_ENTRY(zksh, PRIV_VERSION_1_12_0, ext_zksh),
174
ISA_EXT_DATA_ENTRY(zkt, PRIV_VERSION_1_12_0, ext_zkt),
175
+ ISA_EXT_DATA_ENTRY(zvbb, PRIV_VERSION_1_12_0, ext_zvbb),
176
ISA_EXT_DATA_ENTRY(zvbc, PRIV_VERSION_1_12_0, ext_zvbc),
177
ISA_EXT_DATA_ENTRY(zve32f, PRIV_VERSION_1_10_0, ext_zve32f),
178
ISA_EXT_DATA_ENTRY(zve64f, PRIV_VERSION_1_10_0, ext_zve64f),
179
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
180
return;
181
}
182
183
+ /*
184
+ * In principle Zve*x would also suffice here, were they supported
185
+ * in qemu
186
+ */
187
+ if (cpu->cfg.ext_zvbb && !cpu->cfg.ext_zve32f) {
188
+ error_setg(errp,
189
+ "Vector crypto extensions require V or Zve* extensions");
190
+ return;
191
+ }
192
+
193
if (cpu->cfg.ext_zvbc && !cpu->cfg.ext_zve64f) {
194
error_setg(errp, "Zvbc extension requires V or Zve64{f,d} extensions");
195
return;
196
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
197
DEFINE_PROP_BOOL("x-zvfbfwma", RISCVCPU, cfg.ext_zvfbfwma, false),
198
199
/* Vector cryptography extensions */
200
+ DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
201
DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
202
203
DEFINE_PROP_END_OF_LIST(),
204
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
205
index XXXXXXX..XXXXXXX 100644
206
--- a/target/riscv/vcrypto_helper.c
207
+++ b/target/riscv/vcrypto_helper.c
208
@@ -XXX,XX +XXX,XX @@
209
#include "qemu/osdep.h"
210
#include "qemu/host-utils.h"
211
#include "qemu/bitops.h"
212
+#include "qemu/bswap.h"
213
#include "cpu.h"
214
#include "exec/memop.h"
215
#include "exec/exec-all.h"
216
@@ -XXX,XX +XXX,XX @@ RVVCALL(OPIVV2, vclmulh_vv, OP_UUU_D, H8, H8, H8, clmulh64)
217
GEN_VEXT_VV(vclmulh_vv, 8)
218
RVVCALL(OPIVX2, vclmulh_vx, OP_UUU_D, H8, H8, clmulh64)
219
GEN_VEXT_VX(vclmulh_vx, 8)
220
+
221
+RVVCALL(OPIVV2, vror_vv_b, OP_UUU_B, H1, H1, H1, ror8)
222
+RVVCALL(OPIVV2, vror_vv_h, OP_UUU_H, H2, H2, H2, ror16)
223
+RVVCALL(OPIVV2, vror_vv_w, OP_UUU_W, H4, H4, H4, ror32)
224
+RVVCALL(OPIVV2, vror_vv_d, OP_UUU_D, H8, H8, H8, ror64)
225
+GEN_VEXT_VV(vror_vv_b, 1)
226
+GEN_VEXT_VV(vror_vv_h, 2)
227
+GEN_VEXT_VV(vror_vv_w, 4)
228
+GEN_VEXT_VV(vror_vv_d, 8)
229
+
230
+RVVCALL(OPIVX2, vror_vx_b, OP_UUU_B, H1, H1, ror8)
231
+RVVCALL(OPIVX2, vror_vx_h, OP_UUU_H, H2, H2, ror16)
232
+RVVCALL(OPIVX2, vror_vx_w, OP_UUU_W, H4, H4, ror32)
233
+RVVCALL(OPIVX2, vror_vx_d, OP_UUU_D, H8, H8, ror64)
234
+GEN_VEXT_VX(vror_vx_b, 1)
235
+GEN_VEXT_VX(vror_vx_h, 2)
236
+GEN_VEXT_VX(vror_vx_w, 4)
237
+GEN_VEXT_VX(vror_vx_d, 8)
238
+
239
+RVVCALL(OPIVV2, vrol_vv_b, OP_UUU_B, H1, H1, H1, rol8)
240
+RVVCALL(OPIVV2, vrol_vv_h, OP_UUU_H, H2, H2, H2, rol16)
241
+RVVCALL(OPIVV2, vrol_vv_w, OP_UUU_W, H4, H4, H4, rol32)
242
+RVVCALL(OPIVV2, vrol_vv_d, OP_UUU_D, H8, H8, H8, rol64)
243
+GEN_VEXT_VV(vrol_vv_b, 1)
244
+GEN_VEXT_VV(vrol_vv_h, 2)
245
+GEN_VEXT_VV(vrol_vv_w, 4)
246
+GEN_VEXT_VV(vrol_vv_d, 8)
247
+
248
+RVVCALL(OPIVX2, vrol_vx_b, OP_UUU_B, H1, H1, rol8)
249
+RVVCALL(OPIVX2, vrol_vx_h, OP_UUU_H, H2, H2, rol16)
250
+RVVCALL(OPIVX2, vrol_vx_w, OP_UUU_W, H4, H4, rol32)
251
+RVVCALL(OPIVX2, vrol_vx_d, OP_UUU_D, H8, H8, rol64)
252
+GEN_VEXT_VX(vrol_vx_b, 1)
253
+GEN_VEXT_VX(vrol_vx_h, 2)
254
+GEN_VEXT_VX(vrol_vx_w, 4)
255
+GEN_VEXT_VX(vrol_vx_d, 8)
256
+
257
+static uint64_t brev8(uint64_t val)
258
+{
259
+ val = ((val & 0x5555555555555555ull) << 1) |
260
+ ((val & 0xAAAAAAAAAAAAAAAAull) >> 1);
261
+ val = ((val & 0x3333333333333333ull) << 2) |
262
+ ((val & 0xCCCCCCCCCCCCCCCCull) >> 2);
263
+ val = ((val & 0x0F0F0F0F0F0F0F0Full) << 4) |
264
+ ((val & 0xF0F0F0F0F0F0F0F0ull) >> 4);
265
+
266
+ return val;
267
+}
268
+
269
+RVVCALL(OPIVV1, vbrev8_v_b, OP_UU_B, H1, H1, brev8)
270
+RVVCALL(OPIVV1, vbrev8_v_h, OP_UU_H, H2, H2, brev8)
271
+RVVCALL(OPIVV1, vbrev8_v_w, OP_UU_W, H4, H4, brev8)
272
+RVVCALL(OPIVV1, vbrev8_v_d, OP_UU_D, H8, H8, brev8)
273
+GEN_VEXT_V(vbrev8_v_b, 1)
274
+GEN_VEXT_V(vbrev8_v_h, 2)
275
+GEN_VEXT_V(vbrev8_v_w, 4)
276
+GEN_VEXT_V(vbrev8_v_d, 8)
277
+
278
+#define DO_IDENTITY(a) (a)
279
+RVVCALL(OPIVV1, vrev8_v_b, OP_UU_B, H1, H1, DO_IDENTITY)
280
+RVVCALL(OPIVV1, vrev8_v_h, OP_UU_H, H2, H2, bswap16)
281
+RVVCALL(OPIVV1, vrev8_v_w, OP_UU_W, H4, H4, bswap32)
282
+RVVCALL(OPIVV1, vrev8_v_d, OP_UU_D, H8, H8, bswap64)
283
+GEN_VEXT_V(vrev8_v_b, 1)
284
+GEN_VEXT_V(vrev8_v_h, 2)
285
+GEN_VEXT_V(vrev8_v_w, 4)
286
+GEN_VEXT_V(vrev8_v_d, 8)
287
+
288
+#define DO_ANDN(a, b) ((a) & ~(b))
289
+RVVCALL(OPIVV2, vandn_vv_b, OP_UUU_B, H1, H1, H1, DO_ANDN)
290
+RVVCALL(OPIVV2, vandn_vv_h, OP_UUU_H, H2, H2, H2, DO_ANDN)
291
+RVVCALL(OPIVV2, vandn_vv_w, OP_UUU_W, H4, H4, H4, DO_ANDN)
292
+RVVCALL(OPIVV2, vandn_vv_d, OP_UUU_D, H8, H8, H8, DO_ANDN)
293
+GEN_VEXT_VV(vandn_vv_b, 1)
294
+GEN_VEXT_VV(vandn_vv_h, 2)
295
+GEN_VEXT_VV(vandn_vv_w, 4)
296
+GEN_VEXT_VV(vandn_vv_d, 8)
297
+
298
+RVVCALL(OPIVX2, vandn_vx_b, OP_UUU_B, H1, H1, DO_ANDN)
299
+RVVCALL(OPIVX2, vandn_vx_h, OP_UUU_H, H2, H2, DO_ANDN)
300
+RVVCALL(OPIVX2, vandn_vx_w, OP_UUU_W, H4, H4, DO_ANDN)
301
+RVVCALL(OPIVX2, vandn_vx_d, OP_UUU_D, H8, H8, DO_ANDN)
302
+GEN_VEXT_VX(vandn_vx_b, 1)
303
+GEN_VEXT_VX(vandn_vx_h, 2)
304
+GEN_VEXT_VX(vandn_vx_w, 4)
305
+GEN_VEXT_VX(vandn_vx_d, 8)
306
+
307
+RVVCALL(OPIVV1, vbrev_v_b, OP_UU_B, H1, H1, revbit8)
308
+RVVCALL(OPIVV1, vbrev_v_h, OP_UU_H, H2, H2, revbit16)
309
+RVVCALL(OPIVV1, vbrev_v_w, OP_UU_W, H4, H4, revbit32)
310
+RVVCALL(OPIVV1, vbrev_v_d, OP_UU_D, H8, H8, revbit64)
311
+GEN_VEXT_V(vbrev_v_b, 1)
312
+GEN_VEXT_V(vbrev_v_h, 2)
313
+GEN_VEXT_V(vbrev_v_w, 4)
314
+GEN_VEXT_V(vbrev_v_d, 8)
315
+
316
+RVVCALL(OPIVV1, vclz_v_b, OP_UU_B, H1, H1, clz8)
317
+RVVCALL(OPIVV1, vclz_v_h, OP_UU_H, H2, H2, clz16)
318
+RVVCALL(OPIVV1, vclz_v_w, OP_UU_W, H4, H4, clz32)
319
+RVVCALL(OPIVV1, vclz_v_d, OP_UU_D, H8, H8, clz64)
320
+GEN_VEXT_V(vclz_v_b, 1)
321
+GEN_VEXT_V(vclz_v_h, 2)
322
+GEN_VEXT_V(vclz_v_w, 4)
323
+GEN_VEXT_V(vclz_v_d, 8)
324
+
325
+RVVCALL(OPIVV1, vctz_v_b, OP_UU_B, H1, H1, ctz8)
326
+RVVCALL(OPIVV1, vctz_v_h, OP_UU_H, H2, H2, ctz16)
327
+RVVCALL(OPIVV1, vctz_v_w, OP_UU_W, H4, H4, ctz32)
328
+RVVCALL(OPIVV1, vctz_v_d, OP_UU_D, H8, H8, ctz64)
329
+GEN_VEXT_V(vctz_v_b, 1)
330
+GEN_VEXT_V(vctz_v_h, 2)
331
+GEN_VEXT_V(vctz_v_w, 4)
332
+GEN_VEXT_V(vctz_v_d, 8)
333
+
334
+RVVCALL(OPIVV1, vcpop_v_b, OP_UU_B, H1, H1, ctpop8)
335
+RVVCALL(OPIVV1, vcpop_v_h, OP_UU_H, H2, H2, ctpop16)
336
+RVVCALL(OPIVV1, vcpop_v_w, OP_UU_W, H4, H4, ctpop32)
337
+RVVCALL(OPIVV1, vcpop_v_d, OP_UU_D, H8, H8, ctpop64)
338
+GEN_VEXT_V(vcpop_v_b, 1)
339
+GEN_VEXT_V(vcpop_v_h, 2)
340
+GEN_VEXT_V(vcpop_v_w, 4)
341
+GEN_VEXT_V(vcpop_v_d, 8)
342
+
343
+#define DO_SLL(N, M) (N << (M & (sizeof(N) * 8 - 1)))
344
+RVVCALL(OPIVV2, vwsll_vv_b, WOP_UUU_B, H2, H1, H1, DO_SLL)
345
+RVVCALL(OPIVV2, vwsll_vv_h, WOP_UUU_H, H4, H2, H2, DO_SLL)
346
+RVVCALL(OPIVV2, vwsll_vv_w, WOP_UUU_W, H8, H4, H4, DO_SLL)
347
+GEN_VEXT_VV(vwsll_vv_b, 2)
348
+GEN_VEXT_VV(vwsll_vv_h, 4)
349
+GEN_VEXT_VV(vwsll_vv_w, 8)
350
+
351
+RVVCALL(OPIVX2, vwsll_vx_b, WOP_UUU_B, H2, H1, DO_SLL)
352
+RVVCALL(OPIVX2, vwsll_vx_h, WOP_UUU_H, H4, H2, DO_SLL)
353
+RVVCALL(OPIVX2, vwsll_vx_w, WOP_UUU_W, H8, H4, DO_SLL)
354
+GEN_VEXT_VX(vwsll_vx_b, 2)
355
+GEN_VEXT_VX(vwsll_vx_h, 4)
356
+GEN_VEXT_VX(vwsll_vx_w, 8)
357
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
358
index XXXXXXX..XXXXXXX 100644
359
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
360
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
361
@@ -XXX,XX +XXX,XX @@ static bool vclmul_vx_check(DisasContext *s, arg_rmrr *a)
362
363
GEN_VX_MASKED_TRANS(vclmul_vx, vclmul_vx_check)
364
GEN_VX_MASKED_TRANS(vclmulh_vx, vclmul_vx_check)
29
+
365
+
30
+/*
366
+/*
31
+ * QEMU model of the Ibex SPI Controller
367
+ * Zvbb
32
+ * SPEC Reference: https://docs.opentitan.org/hw/ip/spi_host/doc/
33
+ *
34
+ * Copyright (C) 2022 Western Digital
35
+ *
36
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
37
+ * of this software and associated documentation files (the "Software"), to deal
38
+ * in the Software without restriction, including without limitation the rights
39
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
40
+ * copies of the Software, and to permit persons to whom the Software is
41
+ * furnished to do so, subject to the following conditions:
42
+ *
43
+ * The above copyright notice and this permission notice shall be included in
44
+ * all copies or substantial portions of the Software.
45
+ *
46
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
47
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
48
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
49
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
50
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
51
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
52
+ * THE SOFTWARE.
53
+ */
368
+ */
54
+
369
+
55
+#ifndef IBEX_SPI_HOST_H
370
+#define GEN_OPIVI_GVEC_TRANS_CHECK(NAME, IMM_MODE, OPIVX, SUF, CHECK) \
56
+#define IBEX_SPI_HOST_H
371
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
57
+
372
+ { \
58
+#include "hw/sysbus.h"
373
+ if (CHECK(s, a)) { \
59
+#include "hw/hw.h"
374
+ static gen_helper_opivx *const fns[4] = { \
60
+#include "hw/ssi/ssi.h"
375
+ gen_helper_##OPIVX##_b, \
61
+#include "qemu/fifo8.h"
376
+ gen_helper_##OPIVX##_h, \
62
+#include "qom/object.h"
377
+ gen_helper_##OPIVX##_w, \
63
+#include "hw/registerfields.h"
378
+ gen_helper_##OPIVX##_d, \
64
+#include "qemu/timer.h"
379
+ }; \
65
+
380
+ return do_opivi_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew], \
66
+#define TYPE_IBEX_SPI_HOST "ibex-spi"
381
+ IMM_MODE); \
67
+#define IBEX_SPI_HOST(obj) \
382
+ } \
68
+ OBJECT_CHECK(IbexSPIHostState, (obj), TYPE_IBEX_SPI_HOST)
383
+ return false; \
69
+
384
+ }
70
+/* SPI Registers */
385
+
71
+#define IBEX_SPI_HOST_INTR_STATE (0x00 / 4) /* rw */
386
+#define GEN_OPIVV_GVEC_TRANS_CHECK(NAME, SUF, CHECK) \
72
+#define IBEX_SPI_HOST_INTR_ENABLE (0x04 / 4) /* rw */
387
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
73
+#define IBEX_SPI_HOST_INTR_TEST (0x08 / 4) /* wo */
388
+ { \
74
+#define IBEX_SPI_HOST_ALERT_TEST (0x0c / 4) /* wo */
389
+ if (CHECK(s, a)) { \
75
+#define IBEX_SPI_HOST_CONTROL (0x10 / 4) /* rw */
390
+ static gen_helper_gvec_4_ptr *const fns[4] = { \
76
+#define IBEX_SPI_HOST_STATUS (0x14 / 4) /* ro */
391
+ gen_helper_##NAME##_b, \
77
+#define IBEX_SPI_HOST_CONFIGOPTS (0x18 / 4) /* rw */
392
+ gen_helper_##NAME##_h, \
78
+#define IBEX_SPI_HOST_CSID (0x1c / 4) /* rw */
393
+ gen_helper_##NAME##_w, \
79
+#define IBEX_SPI_HOST_COMMAND (0x20 / 4) /* wo */
394
+ gen_helper_##NAME##_d, \
80
+/* RX/TX Modelled by FIFO */
395
+ }; \
81
+#define IBEX_SPI_HOST_RXDATA (0x24 / 4)
396
+ return do_opivv_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
82
+#define IBEX_SPI_HOST_TXDATA (0x28 / 4)
397
+ } \
83
+
398
+ return false; \
84
+#define IBEX_SPI_HOST_ERROR_ENABLE (0x2c / 4) /* rw */
399
+ }
85
+#define IBEX_SPI_HOST_ERROR_STATUS (0x30 / 4) /* rw */
400
+
86
+#define IBEX_SPI_HOST_EVENT_ENABLE (0x34 / 4) /* rw */
401
+#define GEN_OPIVX_GVEC_SHIFT_TRANS_CHECK(NAME, SUF, CHECK) \
87
+
402
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
88
+/* FIFO Len in Bytes */
403
+ { \
89
+#define IBEX_SPI_HOST_TXFIFO_LEN 288
404
+ if (CHECK(s, a)) { \
90
+#define IBEX_SPI_HOST_RXFIFO_LEN 256
405
+ static gen_helper_opivx *const fns[4] = { \
91
+
406
+ gen_helper_##NAME##_b, \
92
+/* Max Register (Based on addr) */
407
+ gen_helper_##NAME##_h, \
93
+#define IBEX_SPI_HOST_MAX_REGS (IBEX_SPI_HOST_EVENT_ENABLE + 1)
408
+ gen_helper_##NAME##_w, \
94
+
409
+ gen_helper_##NAME##_d, \
95
+/* MISC */
410
+ }; \
96
+#define TX_INTERRUPT_TRIGGER_DELAY_NS 100
411
+ return do_opivx_gvec_shift(s, a, tcg_gen_gvec_##SUF, \
97
+#define BIDIRECTIONAL_TRANSFER 3
412
+ fns[s->sew]); \
98
+
413
+ } \
99
+typedef struct {
414
+ return false; \
100
+ /* <private> */
415
+ }
101
+ SysBusDevice parent_obj;
416
+
102
+
417
+static bool zvbb_vv_check(DisasContext *s, arg_rmrr *a)
103
+ /* <public> */
104
+ MemoryRegion mmio;
105
+ uint32_t regs[IBEX_SPI_HOST_MAX_REGS];
106
+ /* Multi-reg that sets config opts per CS */
107
+ uint32_t *config_opts;
108
+ Fifo8 rx_fifo;
109
+ Fifo8 tx_fifo;
110
+ QEMUTimer *fifo_trigger_handle;
111
+
112
+ qemu_irq event;
113
+ qemu_irq host_err;
114
+ uint32_t num_cs;
115
+ qemu_irq *cs_lines;
116
+ SSIBus *ssi;
117
+
118
+ /* Used to track the init status, for replicating TXDATA ghost writes */
119
+ bool init_status;
120
+} IbexSPIHostState;
121
+
122
+#endif
123
diff --git a/hw/ssi/ibex_spi_host.c b/hw/ssi/ibex_spi_host.c
124
new file mode 100644
125
index XXXXXXX..XXXXXXX
126
--- /dev/null
127
+++ b/hw/ssi/ibex_spi_host.c
128
@@ -XXX,XX +XXX,XX @@
129
+/*
130
+ * QEMU model of the Ibex SPI Controller
131
+ * SPEC Reference: https://docs.opentitan.org/hw/ip/spi_host/doc/
132
+ *
133
+ * Copyright (C) 2022 Western Digital
134
+ *
135
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
136
+ * of this software and associated documentation files (the "Software"), to deal
137
+ * in the Software without restriction, including without limitation the rights
138
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
139
+ * copies of the Software, and to permit persons to whom the Software is
140
+ * furnished to do so, subject to the following conditions:
141
+ *
142
+ * The above copyright notice and this permission notice shall be included in
143
+ * all copies or substantial portions of the Software.
144
+ *
145
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
146
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
147
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
148
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
149
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
150
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
151
+ * THE SOFTWARE.
152
+ */
153
+
154
+#include "qemu/osdep.h"
155
+#include "qemu/log.h"
156
+#include "qemu/module.h"
157
+#include "hw/ssi/ibex_spi_host.h"
158
+#include "hw/irq.h"
159
+#include "hw/qdev-properties.h"
160
+#include "hw/qdev-properties-system.h"
161
+#include "migration/vmstate.h"
162
+#include "trace.h"
163
+
164
+REG32(INTR_STATE, 0x00)
165
+ FIELD(INTR_STATE, ERROR, 0, 1)
166
+ FIELD(INTR_STATE, SPI_EVENT, 1, 1)
167
+REG32(INTR_ENABLE, 0x04)
168
+ FIELD(INTR_ENABLE, ERROR, 0, 1)
169
+ FIELD(INTR_ENABLE, SPI_EVENT, 1, 1)
170
+REG32(INTR_TEST, 0x08)
171
+ FIELD(INTR_TEST, ERROR, 0, 1)
172
+ FIELD(INTR_TEST, SPI_EVENT, 1, 1)
173
+REG32(ALERT_TEST, 0x0c)
174
+ FIELD(ALERT_TEST, FETAL_TEST, 0, 1)
175
+REG32(CONTROL, 0x10)
176
+ FIELD(CONTROL, RX_WATERMARK, 0, 8)
177
+ FIELD(CONTROL, TX_WATERMARK, 1, 8)
178
+ FIELD(CONTROL, OUTPUT_EN, 29, 1)
179
+ FIELD(CONTROL, SW_RST, 30, 1)
180
+ FIELD(CONTROL, SPIEN, 31, 1)
181
+REG32(STATUS, 0x14)
182
+ FIELD(STATUS, TXQD, 0, 8)
183
+ FIELD(STATUS, RXQD, 18, 8)
184
+ FIELD(STATUS, CMDQD, 16, 3)
185
+ FIELD(STATUS, RXWM, 20, 1)
186
+ FIELD(STATUS, BYTEORDER, 22, 1)
187
+ FIELD(STATUS, RXSTALL, 23, 1)
188
+ FIELD(STATUS, RXEMPTY, 24, 1)
189
+ FIELD(STATUS, RXFULL, 25, 1)
190
+ FIELD(STATUS, TXWM, 26, 1)
191
+ FIELD(STATUS, TXSTALL, 27, 1)
192
+ FIELD(STATUS, TXEMPTY, 28, 1)
193
+ FIELD(STATUS, TXFULL, 29, 1)
194
+ FIELD(STATUS, ACTIVE, 30, 1)
195
+ FIELD(STATUS, READY, 31, 1)
196
+REG32(CONFIGOPTS, 0x18)
197
+ FIELD(CONFIGOPTS, CLKDIV_0, 0, 16)
198
+ FIELD(CONFIGOPTS, CSNIDLE_0, 16, 4)
199
+ FIELD(CONFIGOPTS, CSNTRAIL_0, 20, 4)
200
+ FIELD(CONFIGOPTS, CSNLEAD_0, 24, 4)
201
+ FIELD(CONFIGOPTS, FULLCYC_0, 29, 1)
202
+ FIELD(CONFIGOPTS, CPHA_0, 30, 1)
203
+ FIELD(CONFIGOPTS, CPOL_0, 31, 1)
204
+REG32(CSID, 0x1c)
205
+ FIELD(CSID, CSID, 0, 32)
206
+REG32(COMMAND, 0x20)
207
+ FIELD(COMMAND, LEN, 0, 8)
208
+ FIELD(COMMAND, CSAAT, 9, 1)
209
+ FIELD(COMMAND, SPEED, 10, 2)
210
+ FIELD(COMMAND, DIRECTION, 12, 2)
211
+REG32(ERROR_ENABLE, 0x2c)
212
+ FIELD(ERROR_ENABLE, CMDBUSY, 0, 1)
213
+ FIELD(ERROR_ENABLE, OVERFLOW, 1, 1)
214
+ FIELD(ERROR_ENABLE, UNDERFLOW, 2, 1)
215
+ FIELD(ERROR_ENABLE, CMDINVAL, 3, 1)
216
+ FIELD(ERROR_ENABLE, CSIDINVAL, 4, 1)
217
+REG32(ERROR_STATUS, 0x30)
218
+ FIELD(ERROR_STATUS, CMDBUSY, 0, 1)
219
+ FIELD(ERROR_STATUS, OVERFLOW, 1, 1)
220
+ FIELD(ERROR_STATUS, UNDERFLOW, 2, 1)
221
+ FIELD(ERROR_STATUS, CMDINVAL, 3, 1)
222
+ FIELD(ERROR_STATUS, CSIDINVAL, 4, 1)
223
+ FIELD(ERROR_STATUS, ACCESSINVAL, 5, 1)
224
+REG32(EVENT_ENABLE, 0x30)
225
+ FIELD(EVENT_ENABLE, RXFULL, 0, 1)
226
+ FIELD(EVENT_ENABLE, TXEMPTY, 1, 1)
227
+ FIELD(EVENT_ENABLE, RXWM, 2, 1)
228
+ FIELD(EVENT_ENABLE, TXWM, 3, 1)
229
+ FIELD(EVENT_ENABLE, READY, 4, 1)
230
+ FIELD(EVENT_ENABLE, IDLE, 5, 1)
231
+
232
+static inline uint8_t div4_round_up(uint8_t dividend)
233
+{
418
+{
234
+ return (dividend + 3) / 4;
419
+ return opivv_check(s, a) && s->cfg_ptr->ext_zvbb == true;
235
+}
420
+}
236
+
421
+
237
+static void ibex_spi_rxfifo_reset(IbexSPIHostState *s)
422
+static bool zvbb_vx_check(DisasContext *s, arg_rmrr *a)
238
+{
423
+{
239
+ /* Empty the RX FIFO and assert RXEMPTY */
424
+ return opivx_check(s, a) && s->cfg_ptr->ext_zvbb == true;
240
+ fifo8_reset(&s->rx_fifo);
241
+ s->regs[IBEX_SPI_HOST_STATUS] &= ~R_STATUS_RXFULL_MASK;
242
+ s->regs[IBEX_SPI_HOST_STATUS] |= R_STATUS_RXEMPTY_MASK;
243
+}
425
+}
244
+
426
+
245
+static void ibex_spi_txfifo_reset(IbexSPIHostState *s)
427
+/* vrol.v[vx] */
428
+GEN_OPIVV_GVEC_TRANS_CHECK(vrol_vv, rotlv, zvbb_vv_check)
429
+GEN_OPIVX_GVEC_SHIFT_TRANS_CHECK(vrol_vx, rotls, zvbb_vx_check)
430
+
431
+/* vror.v[vxi] */
432
+GEN_OPIVV_GVEC_TRANS_CHECK(vror_vv, rotrv, zvbb_vv_check)
433
+GEN_OPIVX_GVEC_SHIFT_TRANS_CHECK(vror_vx, rotrs, zvbb_vx_check)
434
+GEN_OPIVI_GVEC_TRANS_CHECK(vror_vi, IMM_TRUNC_SEW, vror_vx, rotri, zvbb_vx_check)
435
+
436
+#define GEN_OPIVX_GVEC_TRANS_CHECK(NAME, SUF, CHECK) \
437
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
438
+ { \
439
+ if (CHECK(s, a)) { \
440
+ static gen_helper_opivx *const fns[4] = { \
441
+ gen_helper_##NAME##_b, \
442
+ gen_helper_##NAME##_h, \
443
+ gen_helper_##NAME##_w, \
444
+ gen_helper_##NAME##_d, \
445
+ }; \
446
+ return do_opivx_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
447
+ } \
448
+ return false; \
449
+ }
450
+
451
+/* vandn.v[vx] */
452
+GEN_OPIVV_GVEC_TRANS_CHECK(vandn_vv, andc, zvbb_vv_check)
453
+GEN_OPIVX_GVEC_TRANS_CHECK(vandn_vx, andcs, zvbb_vx_check)
454
+
455
+#define GEN_OPIV_TRANS(NAME, CHECK) \
456
+ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
457
+ { \
458
+ if (CHECK(s, a)) { \
459
+ uint32_t data = 0; \
460
+ static gen_helper_gvec_3_ptr *const fns[4] = { \
461
+ gen_helper_##NAME##_b, \
462
+ gen_helper_##NAME##_h, \
463
+ gen_helper_##NAME##_w, \
464
+ gen_helper_##NAME##_d, \
465
+ }; \
466
+ TCGLabel *over = gen_new_label(); \
467
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
468
+ \
469
+ data = FIELD_DP32(data, VDATA, VM, a->vm); \
470
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
471
+ data = FIELD_DP32(data, VDATA, VTA, s->vta); \
472
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s); \
473
+ data = FIELD_DP32(data, VDATA, VMA, s->vma); \
474
+ tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
475
+ vreg_ofs(s, a->rs2), cpu_env, \
476
+ s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8, \
477
+ data, fns[s->sew]); \
478
+ mark_vs_dirty(s); \
479
+ gen_set_label(over); \
480
+ return true; \
481
+ } \
482
+ return false; \
483
+ }
484
+
485
+static bool zvbb_opiv_check(DisasContext *s, arg_rmr *a)
246
+{
486
+{
247
+ /* Empty the TX FIFO and assert TXEMPTY */
487
+ return s->cfg_ptr->ext_zvbb == true &&
248
+ fifo8_reset(&s->tx_fifo);
488
+ require_rvv(s) &&
249
+ s->regs[IBEX_SPI_HOST_STATUS] &= ~R_STATUS_TXFULL_MASK;
489
+ vext_check_isa_ill(s) &&
250
+ s->regs[IBEX_SPI_HOST_STATUS] |= R_STATUS_TXEMPTY_MASK;
490
+ vext_check_ss(s, a->rd, a->rs2, a->vm);
251
+}
491
+}
252
+
492
+
253
+static void ibex_spi_host_reset(DeviceState *dev)
493
+GEN_OPIV_TRANS(vbrev8_v, zvbb_opiv_check)
494
+GEN_OPIV_TRANS(vrev8_v, zvbb_opiv_check)
495
+GEN_OPIV_TRANS(vbrev_v, zvbb_opiv_check)
496
+GEN_OPIV_TRANS(vclz_v, zvbb_opiv_check)
497
+GEN_OPIV_TRANS(vctz_v, zvbb_opiv_check)
498
+GEN_OPIV_TRANS(vcpop_v, zvbb_opiv_check)
499
+
500
+static bool vwsll_vv_check(DisasContext *s, arg_rmrr *a)
254
+{
501
+{
255
+ IbexSPIHostState *s = IBEX_SPI_HOST(dev);
502
+ return s->cfg_ptr->ext_zvbb && opivv_widen_check(s, a);
256
+ trace_ibex_spi_host_reset("Resetting Ibex SPI");
257
+
258
+ /* SPI Host Register Reset */
259
+ s->regs[IBEX_SPI_HOST_INTR_STATE] = 0x00;
260
+ s->regs[IBEX_SPI_HOST_INTR_ENABLE] = 0x00;
261
+ s->regs[IBEX_SPI_HOST_INTR_TEST] = 0x00;
262
+ s->regs[IBEX_SPI_HOST_ALERT_TEST] = 0x00;
263
+ s->regs[IBEX_SPI_HOST_CONTROL] = 0x7f;
264
+ s->regs[IBEX_SPI_HOST_STATUS] = 0x00;
265
+ s->regs[IBEX_SPI_HOST_CONFIGOPTS] = 0x00;
266
+ s->regs[IBEX_SPI_HOST_CSID] = 0x00;
267
+ s->regs[IBEX_SPI_HOST_COMMAND] = 0x00;
268
+ /* RX/TX Modelled by FIFO */
269
+ s->regs[IBEX_SPI_HOST_RXDATA] = 0x00;
270
+ s->regs[IBEX_SPI_HOST_TXDATA] = 0x00;
271
+
272
+ s->regs[IBEX_SPI_HOST_ERROR_ENABLE] = 0x1F;
273
+ s->regs[IBEX_SPI_HOST_ERROR_STATUS] = 0x00;
274
+ s->regs[IBEX_SPI_HOST_EVENT_ENABLE] = 0x00;
275
+
276
+ ibex_spi_rxfifo_reset(s);
277
+ ibex_spi_txfifo_reset(s);
278
+
279
+ s->init_status = true;
280
+ return;
281
+}
503
+}
282
+
504
+
283
+/*
505
+static bool vwsll_vx_check(DisasContext *s, arg_rmrr *a)
284
+ * Check if we need to trigger an interrupt.
285
+ * The two interrupt lines (host_err and event) can
286
+ * be enabled separately in 'IBEX_SPI_HOST_INTR_ENABLE'.
287
+ *
288
+ * Interrupts are triggered based on the ones
289
+ * enabled in the `IBEX_SPI_HOST_EVENT_ENABLE` and `IBEX_SPI_HOST_ERROR_ENABLE`.
290
+ */
291
+static void ibex_spi_host_irq(IbexSPIHostState *s)
292
+{
506
+{
293
+ bool error_en = s->regs[IBEX_SPI_HOST_INTR_ENABLE]
507
+ return s->cfg_ptr->ext_zvbb && opivx_widen_check(s, a);
294
+ & R_INTR_ENABLE_ERROR_MASK;
295
+ bool event_en = s->regs[IBEX_SPI_HOST_INTR_ENABLE]
296
+ & R_INTR_ENABLE_SPI_EVENT_MASK;
297
+ bool err_pending = s->regs[IBEX_SPI_HOST_INTR_STATE]
298
+ & R_INTR_STATE_ERROR_MASK;
299
+ bool status_pending = s->regs[IBEX_SPI_HOST_INTR_STATE]
300
+ & R_INTR_STATE_SPI_EVENT_MASK;
301
+ int err_irq = 0, event_irq = 0;
302
+
303
+ /* Error IRQ enabled and Error IRQ Cleared*/
304
+ if (error_en && !err_pending) {
305
+ /* Event enabled, Interrupt Test Error */
306
+ if (s->regs[IBEX_SPI_HOST_INTR_TEST] & R_INTR_TEST_ERROR_MASK) {
307
+ err_irq = 1;
308
+ } else if ((s->regs[IBEX_SPI_HOST_ERROR_ENABLE]
309
+ & R_ERROR_ENABLE_CMDBUSY_MASK) &&
310
+ s->regs[IBEX_SPI_HOST_ERROR_STATUS]
311
+ & R_ERROR_STATUS_CMDBUSY_MASK) {
312
+ /* Wrote to COMMAND when not READY */
313
+ err_irq = 1;
314
+ } else if ((s->regs[IBEX_SPI_HOST_ERROR_ENABLE]
315
+ & R_ERROR_ENABLE_CMDINVAL_MASK) &&
316
+ s->regs[IBEX_SPI_HOST_ERROR_STATUS]
317
+ & R_ERROR_STATUS_CMDINVAL_MASK) {
318
+ /* Invalid command segment */
319
+ err_irq = 1;
320
+ } else if ((s->regs[IBEX_SPI_HOST_ERROR_ENABLE]
321
+ & R_ERROR_ENABLE_CSIDINVAL_MASK) &&
322
+ s->regs[IBEX_SPI_HOST_ERROR_STATUS]
323
+ & R_ERROR_STATUS_CSIDINVAL_MASK) {
324
+ /* Invalid value for CSID */
325
+ err_irq = 1;
326
+ }
327
+ if (err_irq) {
328
+ s->regs[IBEX_SPI_HOST_INTR_STATE] |= R_INTR_STATE_ERROR_MASK;
329
+ }
330
+ qemu_set_irq(s->host_err, err_irq);
331
+ }
332
+
333
+ /* Event IRQ Enabled and Event IRQ Cleared */
334
+ if (event_en && !status_pending) {
335
+ if (s->regs[IBEX_SPI_HOST_INTR_TEST] & R_INTR_TEST_SPI_EVENT_MASK) {
336
+ /* Event enabled, Interrupt Test Event */
337
+ event_irq = 1;
338
+ } else if ((s->regs[IBEX_SPI_HOST_EVENT_ENABLE]
339
+ & R_EVENT_ENABLE_READY_MASK) &&
340
+ (s->regs[IBEX_SPI_HOST_STATUS] & R_STATUS_READY_MASK)) {
341
+ /* SPI Host ready for next command */
342
+ event_irq = 1;
343
+ } else if ((s->regs[IBEX_SPI_HOST_EVENT_ENABLE]
344
+ & R_EVENT_ENABLE_TXEMPTY_MASK) &&
345
+ (s->regs[IBEX_SPI_HOST_STATUS] & R_STATUS_TXEMPTY_MASK)) {
346
+ /* SPI TXEMPTY, TXFIFO drained */
347
+ event_irq = 1;
348
+ } else if ((s->regs[IBEX_SPI_HOST_EVENT_ENABLE]
349
+ & R_EVENT_ENABLE_RXFULL_MASK) &&
350
+ (s->regs[IBEX_SPI_HOST_STATUS] & R_STATUS_RXFULL_MASK)) {
351
+ /* SPI RXFULL, RXFIFO full */
352
+ event_irq = 1;
353
+ }
354
+ if (event_irq) {
355
+ s->regs[IBEX_SPI_HOST_INTR_STATE] |= R_INTR_STATE_SPI_EVENT_MASK;
356
+ }
357
+ qemu_set_irq(s->event, event_irq);
358
+ }
359
+}
508
+}
360
+
509
+
361
+static void ibex_spi_host_transfer(IbexSPIHostState *s)
510
+/* OPIVI without GVEC IR */
362
+{
511
+#define GEN_OPIVI_WIDEN_TRANS(NAME, IMM_MODE, OPIVX, CHECK) \
363
+ uint32_t rx, tx;
512
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
364
+ /* Get num of one byte transfers */
513
+ { \
365
+ uint8_t segment_len = ((s->regs[IBEX_SPI_HOST_COMMAND] & R_COMMAND_LEN_MASK)
514
+ if (CHECK(s, a)) { \
366
+ >> R_COMMAND_LEN_SHIFT);
515
+ static gen_helper_opivx *const fns[3] = { \
367
+ while (segment_len > 0) {
516
+ gen_helper_##OPIVX##_b, \
368
+ if (fifo8_is_empty(&s->tx_fifo)) {
517
+ gen_helper_##OPIVX##_h, \
369
+ /* Assert Stall */
518
+ gen_helper_##OPIVX##_w, \
370
+ s->regs[IBEX_SPI_HOST_STATUS] |= R_STATUS_TXSTALL_MASK;
519
+ }; \
371
+ break;
520
+ return opivi_trans(a->rd, a->rs1, a->rs2, a->vm, fns[s->sew], s, \
372
+ } else if (fifo8_is_full(&s->rx_fifo)) {
521
+ IMM_MODE); \
373
+ /* Assert Stall */
522
+ } \
374
+ s->regs[IBEX_SPI_HOST_STATUS] |= R_STATUS_RXSTALL_MASK;
523
+ return false; \
375
+ break;
524
+ }
376
+ } else {
525
+
377
+ tx = fifo8_pop(&s->tx_fifo);
526
+GEN_OPIVV_WIDEN_TRANS(vwsll_vv, vwsll_vv_check)
378
+ }
527
+GEN_OPIVX_WIDEN_TRANS(vwsll_vx, vwsll_vx_check)
379
+
528
+GEN_OPIVI_WIDEN_TRANS(vwsll_vi, IMM_ZX, vwsll_vx, vwsll_vx_check)
380
+ rx = ssi_transfer(s->ssi, tx);
381
+
382
+ trace_ibex_spi_host_transfer(tx, rx);
383
+
384
+ if (!fifo8_is_full(&s->rx_fifo)) {
385
+ fifo8_push(&s->rx_fifo, rx);
386
+ } else {
387
+ /* Assert RXFULL */
388
+ s->regs[IBEX_SPI_HOST_STATUS] |= R_STATUS_RXFULL_MASK;
389
+ }
390
+ --segment_len;
391
+ }
392
+
393
+ /* Assert Ready */
394
+ s->regs[IBEX_SPI_HOST_STATUS] |= R_STATUS_READY_MASK;
395
+ /* Set RXQD */
396
+ s->regs[IBEX_SPI_HOST_STATUS] &= ~R_STATUS_RXQD_MASK;
397
+ s->regs[IBEX_SPI_HOST_STATUS] |= (R_STATUS_RXQD_MASK
398
+ & div4_round_up(segment_len));
399
+ /* Set TXQD */
400
+ s->regs[IBEX_SPI_HOST_STATUS] &= ~R_STATUS_TXQD_MASK;
401
+ s->regs[IBEX_SPI_HOST_STATUS] |= (fifo8_num_used(&s->tx_fifo) / 4)
402
+ & R_STATUS_TXQD_MASK;
403
+ /* Clear TXFULL */
404
+ s->regs[IBEX_SPI_HOST_STATUS] &= ~R_STATUS_TXFULL_MASK;
405
+ /* Assert TXEMPTY and drop remaining bytes that exceed segment_len */
406
+ ibex_spi_txfifo_reset(s);
407
+ /* Reset RXEMPTY */
408
+ s->regs[IBEX_SPI_HOST_STATUS] &= ~R_STATUS_RXEMPTY_MASK;
409
+
410
+ ibex_spi_host_irq(s);
411
+}
412
+
413
+static uint64_t ibex_spi_host_read(void *opaque, hwaddr addr,
414
+ unsigned int size)
415
+{
416
+ IbexSPIHostState *s = opaque;
417
+ uint32_t rc = 0;
418
+ uint8_t rx_byte = 0;
419
+
420
+ trace_ibex_spi_host_read(addr, size);
421
+
422
+ /* Match reg index */
423
+ addr = addr >> 2;
424
+ switch (addr) {
425
+ /* Skipping any W/O registers */
426
+ case IBEX_SPI_HOST_INTR_STATE...IBEX_SPI_HOST_INTR_ENABLE:
427
+ case IBEX_SPI_HOST_CONTROL...IBEX_SPI_HOST_STATUS:
428
+ rc = s->regs[addr];
429
+ break;
430
+ case IBEX_SPI_HOST_CSID:
431
+ rc = s->regs[addr];
432
+ break;
433
+ case IBEX_SPI_HOST_CONFIGOPTS:
434
+ rc = s->config_opts[s->regs[IBEX_SPI_HOST_CSID]];
435
+ break;
436
+ case IBEX_SPI_HOST_TXDATA:
437
+ rc = s->regs[addr];
438
+ break;
439
+ case IBEX_SPI_HOST_RXDATA:
440
+ /* Clear RXFULL */
441
+ s->regs[IBEX_SPI_HOST_STATUS] &= ~R_STATUS_RXFULL_MASK;
442
+
443
+ for (int i = 0; i < 4; ++i) {
444
+ if (fifo8_is_empty(&s->rx_fifo)) {
445
+ /* Assert RXEMPTY, no IRQ */
446
+ s->regs[IBEX_SPI_HOST_STATUS] |= R_STATUS_RXEMPTY_MASK;
447
+ s->regs[IBEX_SPI_HOST_ERROR_STATUS] |=
448
+ R_ERROR_STATUS_UNDERFLOW_MASK;
449
+ return rc;
450
+ }
451
+ rx_byte = fifo8_pop(&s->rx_fifo);
452
+ rc |= rx_byte << (i * 8);
453
+ }
454
+ break;
455
+ case IBEX_SPI_HOST_ERROR_ENABLE...IBEX_SPI_HOST_EVENT_ENABLE:
456
+ rc = s->regs[addr];
457
+ break;
458
+ default:
459
+ qemu_log_mask(LOG_GUEST_ERROR, "Bad offset 0x%" HWADDR_PRIx "\n",
460
+ addr << 2);
461
+ }
462
+ return rc;
463
+}
464
+
465
+
466
+static void ibex_spi_host_write(void *opaque, hwaddr addr,
467
+ uint64_t val64, unsigned int size)
468
+{
469
+ IbexSPIHostState *s = opaque;
470
+ uint32_t val32 = val64;
471
+ uint32_t shift_mask = 0xff;
472
+ uint8_t txqd_len;
473
+
474
+ trace_ibex_spi_host_write(addr, size, val64);
475
+
476
+ /* Match reg index */
477
+ addr = addr >> 2;
478
+
479
+ switch (addr) {
480
+ /* Skipping any R/O registers */
481
+ case IBEX_SPI_HOST_INTR_STATE...IBEX_SPI_HOST_INTR_ENABLE:
482
+ s->regs[addr] = val32;
483
+ break;
484
+ case IBEX_SPI_HOST_INTR_TEST:
485
+ s->regs[addr] = val32;
486
+ ibex_spi_host_irq(s);
487
+ break;
488
+ case IBEX_SPI_HOST_ALERT_TEST:
489
+ s->regs[addr] = val32;
490
+ qemu_log_mask(LOG_UNIMP,
491
+ "%s: SPI_ALERT_TEST is not supported\n", __func__);
492
+ break;
493
+ case IBEX_SPI_HOST_CONTROL:
494
+ s->regs[addr] = val32;
495
+
496
+ if (val32 & R_CONTROL_SW_RST_MASK) {
497
+ ibex_spi_host_reset((DeviceState *)s);
498
+ /* Clear active if any */
499
+ s->regs[IBEX_SPI_HOST_STATUS] &= ~R_STATUS_ACTIVE_MASK;
500
+ }
501
+
502
+ if (val32 & R_CONTROL_OUTPUT_EN_MASK) {
503
+ qemu_log_mask(LOG_UNIMP,
504
+ "%s: CONTROL_OUTPUT_EN is not supported\n", __func__);
505
+ }
506
+ break;
507
+ case IBEX_SPI_HOST_CONFIGOPTS:
508
+ /* Update the respective config-opts register based on CSIDth index */
509
+ s->config_opts[s->regs[IBEX_SPI_HOST_CSID]] = val32;
510
+ qemu_log_mask(LOG_UNIMP,
511
+ "%s: CONFIGOPTS Hardware settings not supported\n",
512
+ __func__);
513
+ break;
514
+ case IBEX_SPI_HOST_CSID:
515
+ if (val32 >= s->num_cs) {
516
+ /* CSID exceeds max num_cs */
517
+ s->regs[IBEX_SPI_HOST_ERROR_STATUS] |=
518
+ R_ERROR_STATUS_CSIDINVAL_MASK;
519
+ ibex_spi_host_irq(s);
520
+ return;
521
+ }
522
+ s->regs[addr] = val32;
523
+ break;
524
+ case IBEX_SPI_HOST_COMMAND:
525
+ s->regs[addr] = val32;
526
+
527
+ /* STALL, IP not enabled */
528
+ if (!(s->regs[IBEX_SPI_HOST_CONTROL] & R_CONTROL_SPIEN_MASK)) {
529
+ return;
530
+ }
531
+
532
+ /* SPI not ready, IRQ Error */
533
+ if (!(s->regs[IBEX_SPI_HOST_STATUS] & R_STATUS_READY_MASK)) {
534
+ s->regs[IBEX_SPI_HOST_ERROR_STATUS] |= R_ERROR_STATUS_CMDBUSY_MASK;
535
+ ibex_spi_host_irq(s);
536
+ return;
537
+ }
538
+ /* Assert Not Ready */
539
+ s->regs[IBEX_SPI_HOST_STATUS] &= ~R_STATUS_READY_MASK;
540
+
541
+ if (((val32 & R_COMMAND_DIRECTION_MASK) >> R_COMMAND_DIRECTION_SHIFT)
542
+ != BIDIRECTIONAL_TRANSFER) {
543
+ qemu_log_mask(LOG_UNIMP,
544
+ "%s: Rx Only/Tx Only are not supported\n", __func__);
545
+ }
546
+
547
+ if (val32 & R_COMMAND_CSAAT_MASK) {
548
+ qemu_log_mask(LOG_UNIMP,
549
+ "%s: CSAAT is not supported\n", __func__);
550
+ }
551
+ if (val32 & R_COMMAND_SPEED_MASK) {
552
+ qemu_log_mask(LOG_UNIMP,
553
+ "%s: SPEED is not supported\n", __func__);
554
+ }
555
+
556
+ /* Set Transfer Callback */
557
+ timer_mod(s->fifo_trigger_handle,
558
+ qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
559
+ (TX_INTERRUPT_TRIGGER_DELAY_NS));
560
+
561
+ break;
562
+ case IBEX_SPI_HOST_TXDATA:
563
+ /*
564
+ * This is a hardware `feature` where
565
+ * the first word written to TXDATA after init is omitted entirely
566
+ */
567
+ if (s->init_status) {
568
+ s->init_status = false;
569
+ return;
570
+ }
571
+
572
+ for (int i = 0; i < 4; ++i) {
573
+ /* Attempting to write when TXFULL */
574
+ if (fifo8_is_full(&s->tx_fifo)) {
575
+ /* Assert RXEMPTY, no IRQ */
576
+ s->regs[IBEX_SPI_HOST_STATUS] |= R_STATUS_TXFULL_MASK;
577
+ s->regs[IBEX_SPI_HOST_ERROR_STATUS] |=
578
+ R_ERROR_STATUS_OVERFLOW_MASK;
579
+ ibex_spi_host_irq(s);
580
+ return;
581
+ }
582
+ /* Byte ordering is set by the IP */
583
+ if ((s->regs[IBEX_SPI_HOST_STATUS] &
584
+ R_STATUS_BYTEORDER_MASK) == 0) {
585
+ /* LE: LSB transmitted first (default for ibex processor) */
586
+ shift_mask = 0xff << (i * 8);
587
+ } else {
588
+ /* BE: MSB transmitted first */
589
+ qemu_log_mask(LOG_UNIMP,
590
+ "%s: Big endian is not supported\n", __func__);
591
+ }
592
+
593
+ fifo8_push(&s->tx_fifo, (val32 & shift_mask) >> (i * 8));
594
+ }
595
+
596
+ /* Reset TXEMPTY */
597
+ s->regs[IBEX_SPI_HOST_STATUS] &= ~R_STATUS_TXEMPTY_MASK;
598
+ /* Update TXQD */
599
+ txqd_len = (s->regs[IBEX_SPI_HOST_STATUS] &
600
+ R_STATUS_TXQD_MASK) >> R_STATUS_TXQD_SHIFT;
601
+ /* Partial bytes (size < 4) are padded, in words. */
602
+ txqd_len += 1;
603
+ s->regs[IBEX_SPI_HOST_STATUS] &= ~R_STATUS_TXQD_MASK;
604
+ s->regs[IBEX_SPI_HOST_STATUS] |= txqd_len;
605
+ /* Assert Ready */
606
+ s->regs[IBEX_SPI_HOST_STATUS] |= R_STATUS_READY_MASK;
607
+ break;
608
+ case IBEX_SPI_HOST_ERROR_ENABLE:
609
+ s->regs[addr] = val32;
610
+
611
+ if (val32 & R_ERROR_ENABLE_CMDINVAL_MASK) {
612
+ qemu_log_mask(LOG_UNIMP,
613
+ "%s: Segment Length is not supported\n", __func__);
614
+ }
615
+ break;
616
+ case IBEX_SPI_HOST_ERROR_STATUS:
617
+ /*
618
+ * Indicates that any errors that have occurred.
619
+ * When an error occurs, the corresponding bit must be cleared
620
+ * here before issuing any further commands
621
+ */
622
+ s->regs[addr] = val32;
623
+ break;
624
+ case IBEX_SPI_HOST_EVENT_ENABLE:
625
+ /* Controls which classes of SPI events raise an interrupt. */
626
+ s->regs[addr] = val32;
627
+
628
+ if (val32 & R_EVENT_ENABLE_RXWM_MASK) {
629
+ qemu_log_mask(LOG_UNIMP,
630
+ "%s: RXWM is not supported\n", __func__);
631
+ }
632
+ if (val32 & R_EVENT_ENABLE_TXWM_MASK) {
633
+ qemu_log_mask(LOG_UNIMP,
634
+ "%s: TXWM is not supported\n", __func__);
635
+ }
636
+
637
+ if (val32 & R_EVENT_ENABLE_IDLE_MASK) {
638
+ qemu_log_mask(LOG_UNIMP,
639
+ "%s: IDLE is not supported\n", __func__);
640
+ }
641
+ break;
642
+ default:
643
+ qemu_log_mask(LOG_GUEST_ERROR, "Bad offset 0x%" HWADDR_PRIx "\n",
644
+ addr << 2);
645
+ }
646
+}
647
+
648
+static const MemoryRegionOps ibex_spi_ops = {
649
+ .read = ibex_spi_host_read,
650
+ .write = ibex_spi_host_write,
651
+ /* Ibex default LE */
652
+ .endianness = DEVICE_LITTLE_ENDIAN,
653
+};
654
+
655
+static Property ibex_spi_properties[] = {
656
+ DEFINE_PROP_UINT32("num_cs", IbexSPIHostState, num_cs, 1),
657
+ DEFINE_PROP_END_OF_LIST(),
658
+};
659
+
660
+static const VMStateDescription vmstate_ibex = {
661
+ .name = TYPE_IBEX_SPI_HOST,
662
+ .version_id = 1,
663
+ .minimum_version_id = 1,
664
+ .fields = (VMStateField[]) {
665
+ VMSTATE_UINT32_ARRAY(regs, IbexSPIHostState, IBEX_SPI_HOST_MAX_REGS),
666
+ VMSTATE_VARRAY_UINT32(config_opts, IbexSPIHostState,
667
+ num_cs, 0, vmstate_info_uint32, uint32_t),
668
+ VMSTATE_FIFO8(rx_fifo, IbexSPIHostState),
669
+ VMSTATE_FIFO8(tx_fifo, IbexSPIHostState),
670
+ VMSTATE_TIMER_PTR(fifo_trigger_handle, IbexSPIHostState),
671
+ VMSTATE_BOOL(init_status, IbexSPIHostState),
672
+ VMSTATE_END_OF_LIST()
673
+ }
674
+};
675
+
676
+static void fifo_trigger_update(void *opaque)
677
+{
678
+ IbexSPIHostState *s = opaque;
679
+ ibex_spi_host_transfer(s);
680
+}
681
+
682
+static void ibex_spi_host_realize(DeviceState *dev, Error **errp)
683
+{
684
+ IbexSPIHostState *s = IBEX_SPI_HOST(dev);
685
+ int i;
686
+
687
+ s->ssi = ssi_create_bus(dev, "ssi");
688
+ s->cs_lines = g_new0(qemu_irq, s->num_cs);
689
+
690
+ for (i = 0; i < s->num_cs; ++i) {
691
+ sysbus_init_irq(SYS_BUS_DEVICE(dev), &s->cs_lines[i]);
692
+ }
693
+
694
+ /* Setup CONFIGOPTS Multi-register */
695
+ s->config_opts = g_new0(uint32_t, s->num_cs);
696
+
697
+ /* Setup FIFO Interrupt Timer */
698
+ s->fifo_trigger_handle = timer_new_ns(QEMU_CLOCK_VIRTUAL,
699
+ fifo_trigger_update, s);
700
+
701
+ /* FIFO sizes as per OT Spec */
702
+ fifo8_create(&s->tx_fifo, IBEX_SPI_HOST_TXFIFO_LEN);
703
+ fifo8_create(&s->rx_fifo, IBEX_SPI_HOST_RXFIFO_LEN);
704
+}
705
+
706
+static void ibex_spi_host_init(Object *obj)
707
+{
708
+ IbexSPIHostState *s = IBEX_SPI_HOST(obj);
709
+
710
+ sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->host_err);
711
+ sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->event);
712
+
713
+ memory_region_init_io(&s->mmio, obj, &ibex_spi_ops, s,
714
+ TYPE_IBEX_SPI_HOST, 0x1000);
715
+ sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->mmio);
716
+}
717
+
718
+static void ibex_spi_host_class_init(ObjectClass *klass, void *data)
719
+{
720
+ DeviceClass *dc = DEVICE_CLASS(klass);
721
+ dc->realize = ibex_spi_host_realize;
722
+ dc->reset = ibex_spi_host_reset;
723
+ dc->vmsd = &vmstate_ibex;
724
+ device_class_set_props(dc, ibex_spi_properties);
725
+}
726
+
727
+static const TypeInfo ibex_spi_host_info = {
728
+ .name = TYPE_IBEX_SPI_HOST,
729
+ .parent = TYPE_SYS_BUS_DEVICE,
730
+ .instance_size = sizeof(IbexSPIHostState),
731
+ .instance_init = ibex_spi_host_init,
732
+ .class_init = ibex_spi_host_class_init,
733
+};
734
+
735
+static void ibex_spi_host_register_types(void)
736
+{
737
+ type_register_static(&ibex_spi_host_info);
738
+}
739
+
740
+type_init(ibex_spi_host_register_types)
741
diff --git a/hw/ssi/meson.build b/hw/ssi/meson.build
742
index XXXXXXX..XXXXXXX 100644
743
--- a/hw/ssi/meson.build
744
+++ b/hw/ssi/meson.build
745
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_XILINX_SPIPS', if_true: files('xilinx_spips.c'))
746
softmmu_ss.add(when: 'CONFIG_XLNX_VERSAL', if_true: files('xlnx-versal-ospi.c'))
747
softmmu_ss.add(when: 'CONFIG_IMX', if_true: files('imx_spi.c'))
748
softmmu_ss.add(when: 'CONFIG_OMAP', if_true: files('omap_spi.c'))
749
+softmmu_ss.add(when: 'CONFIG_IBEX', if_true: files('ibex_spi_host.c'))
750
diff --git a/hw/ssi/trace-events b/hw/ssi/trace-events
751
index XXXXXXX..XXXXXXX 100644
752
--- a/hw/ssi/trace-events
753
+++ b/hw/ssi/trace-events
754
@@ -XXX,XX +XXX,XX @@ npcm7xx_fiu_ctrl_read(const char *id, uint64_t addr, uint32_t data) "%s offset:
755
npcm7xx_fiu_ctrl_write(const char *id, uint64_t addr, uint32_t data) "%s offset: 0x%04" PRIx64 " value: 0x%08" PRIx32
756
npcm7xx_fiu_flash_read(const char *id, int cs, uint64_t addr, unsigned int size, uint64_t value) "%s[%d] offset: 0x%08" PRIx64 " size: %u value: 0x%" PRIx64
757
npcm7xx_fiu_flash_write(const char *id, unsigned cs, uint64_t addr, unsigned int size, uint64_t value) "%s[%d] offset: 0x%08" PRIx64 " size: %u value: 0x%" PRIx64
758
+
759
+# ibex_spi_host.c
760
+
761
+ibex_spi_host_reset(const char *msg) "%s"
762
+ibex_spi_host_transfer(uint32_t tx_data, uint32_t rx_data) "tx_data: 0x%" PRIx32 " rx_data: @0x%" PRIx32
763
+ibex_spi_host_write(uint64_t addr, uint32_t size, uint64_t data) "@0x%" PRIx64 " size %u: 0x%" PRIx64
764
+ibex_spi_host_read(uint64_t addr, uint32_t size) "@0x%" PRIx64 " size %u:"
--
2.35.1
--
2.41.0
From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
3
This commit adds support for the Zvkned vector-crypto extension, which
4
consists of the following instructions:
5
6
* vaesef.[vv,vs]
7
* vaesdf.[vv,vs]
8
* vaesdm.[vv,vs]
9
* vaesz.vs
10
* vaesem.[vv,vs]
11
* vaeskf1.vi
12
* vaeskf2.vi
13
14
Translation functions are defined in
15
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
16
`target/riscv/vcrypto_helper.c`.
17
18
Co-authored-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
19
Co-authored-by: William Salmon <will.salmon@codethink.co.uk>
20
[max.chou@sifive.com: Replaced vstart checking by TCG op]
21
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
22
Signed-off-by: William Salmon <will.salmon@codethink.co.uk>
23
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
24
Signed-off-by: Max Chou <max.chou@sifive.com>
25
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
26
[max.chou@sifive.com: Imported aes-round.h and exposed x-zvkned
27
property]
28
[max.chou@sifive.com: Fixed endian issues and replaced the vstart & vl
29
egs checking by helper function]
30
[max.chou@sifive.com: Replaced bswap32 calls in aes key expanding]
31
Message-ID: <20230711165917.2629866-10-max.chou@sifive.com>
32
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
33
---
34
target/riscv/cpu_cfg.h | 1 +
35
target/riscv/helper.h | 14 ++
36
target/riscv/insn32.decode | 14 ++
37
target/riscv/cpu.c | 4 +-
38
target/riscv/vcrypto_helper.c | 202 +++++++++++++++++++++++
39
target/riscv/insn_trans/trans_rvvk.c.inc | 147 +++++++++++++++++
40
6 files changed, 381 insertions(+), 1 deletion(-)
41
42
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/riscv/cpu_cfg.h
45
+++ b/target/riscv/cpu_cfg.h
46
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
47
bool ext_zve64d;
48
bool ext_zvbb;
49
bool ext_zvbc;
50
+ bool ext_zvkned;
51
bool ext_zmmul;
52
bool ext_zvfbfmin;
53
bool ext_zvfbfwma;
54
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
55
index XXXXXXX..XXXXXXX 100644
56
--- a/target/riscv/helper.h
57
+++ b/target/riscv/helper.h
58
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_6(vandn_vx_b, void, ptr, ptr, tl, ptr, env, i32)
59
DEF_HELPER_6(vandn_vx_h, void, ptr, ptr, tl, ptr, env, i32)
60
DEF_HELPER_6(vandn_vx_w, void, ptr, ptr, tl, ptr, env, i32)
61
DEF_HELPER_6(vandn_vx_d, void, ptr, ptr, tl, ptr, env, i32)
62
+
63
+DEF_HELPER_2(egs_check, void, i32, env)
64
+
65
+DEF_HELPER_4(vaesef_vv, void, ptr, ptr, env, i32)
66
+DEF_HELPER_4(vaesef_vs, void, ptr, ptr, env, i32)
67
+DEF_HELPER_4(vaesdf_vv, void, ptr, ptr, env, i32)
68
+DEF_HELPER_4(vaesdf_vs, void, ptr, ptr, env, i32)
69
+DEF_HELPER_4(vaesem_vv, void, ptr, ptr, env, i32)
70
+DEF_HELPER_4(vaesem_vs, void, ptr, ptr, env, i32)
71
+DEF_HELPER_4(vaesdm_vv, void, ptr, ptr, env, i32)
72
+DEF_HELPER_4(vaesdm_vs, void, ptr, ptr, env, i32)
73
+DEF_HELPER_4(vaesz_vs, void, ptr, ptr, env, i32)
74
+DEF_HELPER_5(vaeskf1_vi, void, ptr, ptr, i32, env, i32)
75
+DEF_HELPER_5(vaeskf2_vi, void, ptr, ptr, i32, env, i32)
76
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
77
index XXXXXXX..XXXXXXX 100644
78
--- a/target/riscv/insn32.decode
79
+++ b/target/riscv/insn32.decode
80
@@ -XXX,XX +XXX,XX @@
81
@r_rm ....... ..... ..... ... ..... ....... %rs2 %rs1 %rm %rd
82
@r2_rm ....... ..... ..... ... ..... ....... %rs1 %rm %rd
83
@r2 ....... ..... ..... ... ..... ....... &r2 %rs1 %rd
84
+@r2_vm_1 ...... . ..... ..... ... ..... ....... &rmr vm=1 %rs2 %rd
85
@r2_nfvm ... ... vm:1 ..... ..... ... ..... ....... &r2nfvm %nf %rs1 %rd
86
@r2_vm ...... vm:1 ..... ..... ... ..... ....... &rmr %rs2 %rd
87
@r1_vm ...... vm:1 ..... ..... ... ..... ....... %rd
88
@@ -XXX,XX +XXX,XX @@ vcpop_v 010010 . ..... 01110 010 ..... 1010111 @r2_vm
89
vwsll_vv 110101 . ..... ..... 000 ..... 1010111 @r_vm
90
vwsll_vx 110101 . ..... ..... 100 ..... 1010111 @r_vm
91
vwsll_vi 110101 . ..... ..... 011 ..... 1010111 @r_vm
92
+
93
+# *** Zvkned vector crypto extension ***
94
+vaesef_vv 101000 1 ..... 00011 010 ..... 1110111 @r2_vm_1
95
+vaesef_vs 101001 1 ..... 00011 010 ..... 1110111 @r2_vm_1
96
+vaesdf_vv 101000 1 ..... 00001 010 ..... 1110111 @r2_vm_1
97
+vaesdf_vs 101001 1 ..... 00001 010 ..... 1110111 @r2_vm_1
98
+vaesem_vv 101000 1 ..... 00010 010 ..... 1110111 @r2_vm_1
99
+vaesem_vs 101001 1 ..... 00010 010 ..... 1110111 @r2_vm_1
100
+vaesdm_vv 101000 1 ..... 00000 010 ..... 1110111 @r2_vm_1
101
+vaesdm_vs 101001 1 ..... 00000 010 ..... 1110111 @r2_vm_1
102
+vaesz_vs 101001 1 ..... 00111 010 ..... 1110111 @r2_vm_1
103
+vaeskf1_vi 100010 1 ..... ..... 010 ..... 1110111 @r_vm_1
104
+vaeskf2_vi 101010 1 ..... ..... 010 ..... 1110111 @r_vm_1
105
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/target/riscv/cpu.c
108
+++ b/target/riscv/cpu.c
109
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
110
ISA_EXT_DATA_ENTRY(zvfbfwma, PRIV_VERSION_1_12_0, ext_zvfbfwma),
111
ISA_EXT_DATA_ENTRY(zvfh, PRIV_VERSION_1_12_0, ext_zvfh),
112
ISA_EXT_DATA_ENTRY(zvfhmin, PRIV_VERSION_1_12_0, ext_zvfhmin),
113
+ ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
114
ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
115
ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
116
ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
117
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
118
* In principle Zve*x would also suffice here, were they supported
119
* in qemu
120
*/
121
- if (cpu->cfg.ext_zvbb && !cpu->cfg.ext_zve32f) {
122
+ if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned) && !cpu->cfg.ext_zve32f) {
123
error_setg(errp,
124
"Vector crypto extensions require V or Zve* extensions");
125
return;
126
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
127
/* Vector cryptography extensions */
128
DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
129
DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
130
+ DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
131
132
DEFINE_PROP_END_OF_LIST(),
133
};
134
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
135
index XXXXXXX..XXXXXXX 100644
136
--- a/target/riscv/vcrypto_helper.c
137
+++ b/target/riscv/vcrypto_helper.c
138
@@ -XXX,XX +XXX,XX @@
139
#include "qemu/bitops.h"
140
#include "qemu/bswap.h"
141
#include "cpu.h"
142
+#include "crypto/aes.h"
143
+#include "crypto/aes-round.h"
144
#include "exec/memop.h"
145
#include "exec/exec-all.h"
146
#include "exec/helper-proto.h"
147
@@ -XXX,XX +XXX,XX @@ RVVCALL(OPIVX2, vwsll_vx_w, WOP_UUU_W, H8, H4, DO_SLL)
148
GEN_VEXT_VX(vwsll_vx_b, 2)
149
GEN_VEXT_VX(vwsll_vx_h, 4)
150
GEN_VEXT_VX(vwsll_vx_w, 8)
151
+
152
+void HELPER(egs_check)(uint32_t egs, CPURISCVState *env)
153
+{
154
+ uint32_t vl = env->vl;
155
+ uint32_t vstart = env->vstart;
156
+
157
+ if (vl % egs != 0 || vstart % egs != 0) {
158
+ riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
159
+ }
160
+}
161
+
162
+static inline void xor_round_key(AESState *round_state, AESState *round_key)
163
+{
164
+ round_state->v = round_state->v ^ round_key->v;
165
+}
166
+
167
+#define GEN_ZVKNED_HELPER_VV(NAME, ...) \
168
+ void HELPER(NAME)(void *vd, void *vs2, CPURISCVState *env, \
169
+ uint32_t desc) \
170
+ { \
171
+ uint32_t vl = env->vl; \
172
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4); \
173
+ uint32_t vta = vext_vta(desc); \
174
+ \
175
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) { \
176
+ AESState round_key; \
177
+ round_key.d[0] = *((uint64_t *)vs2 + H8(i * 2 + 0)); \
178
+ round_key.d[1] = *((uint64_t *)vs2 + H8(i * 2 + 1)); \
179
+ AESState round_state; \
180
+ round_state.d[0] = *((uint64_t *)vd + H8(i * 2 + 0)); \
181
+ round_state.d[1] = *((uint64_t *)vd + H8(i * 2 + 1)); \
182
+ __VA_ARGS__; \
183
+ *((uint64_t *)vd + H8(i * 2 + 0)) = round_state.d[0]; \
184
+ *((uint64_t *)vd + H8(i * 2 + 1)) = round_state.d[1]; \
185
+ } \
186
+ env->vstart = 0; \
187
+ /* set tail elements to 1s */ \
188
+ vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4); \
189
+ }
190
+
191
+#define GEN_ZVKNED_HELPER_VS(NAME, ...) \
192
+ void HELPER(NAME)(void *vd, void *vs2, CPURISCVState *env, \
193
+ uint32_t desc) \
194
+ { \
195
+ uint32_t vl = env->vl; \
196
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4); \
197
+ uint32_t vta = vext_vta(desc); \
198
+ \
199
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) { \
200
+ AESState round_key; \
201
+ round_key.d[0] = *((uint64_t *)vs2 + H8(0)); \
202
+ round_key.d[1] = *((uint64_t *)vs2 + H8(1)); \
203
+ AESState round_state; \
204
+ round_state.d[0] = *((uint64_t *)vd + H8(i * 2 + 0)); \
205
+ round_state.d[1] = *((uint64_t *)vd + H8(i * 2 + 1)); \
206
+ __VA_ARGS__; \
207
+ *((uint64_t *)vd + H8(i * 2 + 0)) = round_state.d[0]; \
208
+ *((uint64_t *)vd + H8(i * 2 + 1)) = round_state.d[1]; \
209
+ } \
210
+ env->vstart = 0; \
211
+ /* set tail elements to 1s */ \
212
+ vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4); \
213
+ }
214
+
215
+GEN_ZVKNED_HELPER_VV(vaesef_vv, aesenc_SB_SR_AK(&round_state,
216
+ &round_state,
217
+ &round_key,
218
+ false);)
219
+GEN_ZVKNED_HELPER_VS(vaesef_vs, aesenc_SB_SR_AK(&round_state,
220
+ &round_state,
221
+ &round_key,
222
+ false);)
223
+GEN_ZVKNED_HELPER_VV(vaesdf_vv, aesdec_ISB_ISR_AK(&round_state,
224
+ &round_state,
225
+ &round_key,
226
+ false);)
227
+GEN_ZVKNED_HELPER_VS(vaesdf_vs, aesdec_ISB_ISR_AK(&round_state,
228
+ &round_state,
229
+ &round_key,
230
+ false);)
231
+GEN_ZVKNED_HELPER_VV(vaesem_vv, aesenc_SB_SR_MC_AK(&round_state,
232
+ &round_state,
233
+ &round_key,
234
+ false);)
235
+GEN_ZVKNED_HELPER_VS(vaesem_vs, aesenc_SB_SR_MC_AK(&round_state,
236
+ &round_state,
237
+ &round_key,
238
+ false);)
239
+GEN_ZVKNED_HELPER_VV(vaesdm_vv, aesdec_ISB_ISR_AK_IMC(&round_state,
240
+ &round_state,
241
+ &round_key,
242
+ false);)
243
+GEN_ZVKNED_HELPER_VS(vaesdm_vs, aesdec_ISB_ISR_AK_IMC(&round_state,
244
+ &round_state,
245
+ &round_key,
246
+ false);)
247
+GEN_ZVKNED_HELPER_VS(vaesz_vs, xor_round_key(&round_state, &round_key);)
248
+
249
+void HELPER(vaeskf1_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
250
+ CPURISCVState *env, uint32_t desc)
251
+{
252
+ uint32_t *vd = vd_vptr;
253
+ uint32_t *vs2 = vs2_vptr;
254
+ uint32_t vl = env->vl;
255
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4);
256
+ uint32_t vta = vext_vta(desc);
257
+
258
+ uimm &= 0b1111;
259
+ if (uimm > 10 || uimm == 0) {
260
+ uimm ^= 0b1000;
261
+ }
262
+
263
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
264
+ uint32_t rk[8], tmp;
265
+ static const uint32_t rcon[] = {
266
+ 0x00000001, 0x00000002, 0x00000004, 0x00000008, 0x00000010,
267
+ 0x00000020, 0x00000040, 0x00000080, 0x0000001B, 0x00000036,
268
+ };
269
+
270
+ rk[0] = vs2[i * 4 + H4(0)];
271
+ rk[1] = vs2[i * 4 + H4(1)];
272
+ rk[2] = vs2[i * 4 + H4(2)];
273
+ rk[3] = vs2[i * 4 + H4(3)];
274
+ tmp = ror32(rk[3], 8);
275
+
276
+ rk[4] = rk[0] ^ (((uint32_t)AES_sbox[(tmp >> 24) & 0xff] << 24) |
277
+ ((uint32_t)AES_sbox[(tmp >> 16) & 0xff] << 16) |
278
+ ((uint32_t)AES_sbox[(tmp >> 8) & 0xff] << 8) |
279
+ ((uint32_t)AES_sbox[(tmp >> 0) & 0xff] << 0))
280
+ ^ rcon[uimm - 1];
281
+ rk[5] = rk[1] ^ rk[4];
282
+ rk[6] = rk[2] ^ rk[5];
283
+ rk[7] = rk[3] ^ rk[6];
284
+
285
+ vd[i * 4 + H4(0)] = rk[4];
286
+ vd[i * 4 + H4(1)] = rk[5];
287
+ vd[i * 4 + H4(2)] = rk[6];
288
+ vd[i * 4 + H4(3)] = rk[7];
289
+ }
290
+ env->vstart = 0;
291
+ /* set tail elements to 1s */
292
+ vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4);
293
+}
294
+
295
+void HELPER(vaeskf2_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
296
+ CPURISCVState *env, uint32_t desc)
297
+{
298
+ uint32_t *vd = vd_vptr;
299
+ uint32_t *vs2 = vs2_vptr;
300
+ uint32_t vl = env->vl;
301
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4);
302
+ uint32_t vta = vext_vta(desc);
303
+
304
+ uimm &= 0b1111;
305
+ if (uimm > 14 || uimm < 2) {
306
+ uimm ^= 0b1000;
307
+ }
308
+
309
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
310
+ uint32_t rk[12], tmp;
311
+ static const uint32_t rcon[] = {
312
+ 0x00000001, 0x00000002, 0x00000004, 0x00000008, 0x00000010,
313
+ 0x00000020, 0x00000040, 0x00000080, 0x0000001B, 0x00000036,
314
+ };
315
+
316
+ rk[0] = vd[i * 4 + H4(0)];
317
+ rk[1] = vd[i * 4 + H4(1)];
318
+ rk[2] = vd[i * 4 + H4(2)];
319
+ rk[3] = vd[i * 4 + H4(3)];
320
+ rk[4] = vs2[i * 4 + H4(0)];
321
+ rk[5] = vs2[i * 4 + H4(1)];
322
+ rk[6] = vs2[i * 4 + H4(2)];
323
+ rk[7] = vs2[i * 4 + H4(3)];
324
+
325
+ if (uimm % 2 == 0) {
326
+ tmp = ror32(rk[7], 8);
327
+ rk[8] = rk[0] ^ (((uint32_t)AES_sbox[(tmp >> 24) & 0xff] << 24) |
328
+ ((uint32_t)AES_sbox[(tmp >> 16) & 0xff] << 16) |
329
+ ((uint32_t)AES_sbox[(tmp >> 8) & 0xff] << 8) |
330
+ ((uint32_t)AES_sbox[(tmp >> 0) & 0xff] << 0))
331
+ ^ rcon[(uimm - 1) / 2];
332
+ } else {
333
+ rk[8] = rk[0] ^ (((uint32_t)AES_sbox[(rk[7] >> 24) & 0xff] << 24) |
334
+ ((uint32_t)AES_sbox[(rk[7] >> 16) & 0xff] << 16) |
335
+ ((uint32_t)AES_sbox[(rk[7] >> 8) & 0xff] << 8) |
336
+ ((uint32_t)AES_sbox[(rk[7] >> 0) & 0xff] << 0));
337
+ }
338
+ rk[9] = rk[1] ^ rk[8];
339
+ rk[10] = rk[2] ^ rk[9];
340
+ rk[11] = rk[3] ^ rk[10];
341
+
342
+ vd[i * 4 + H4(0)] = rk[8];
343
+ vd[i * 4 + H4(1)] = rk[9];
344
+ vd[i * 4 + H4(2)] = rk[10];
345
+ vd[i * 4 + H4(3)] = rk[11];
346
+ }
347
+ env->vstart = 0;
348
+ /* set tail elements to 1s */
349
+ vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4);
350
+}
351
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
352
index XXXXXXX..XXXXXXX 100644
353
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
354
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
355
@@ -XXX,XX +XXX,XX @@ static bool vwsll_vx_check(DisasContext *s, arg_rmrr *a)
356
GEN_OPIVV_WIDEN_TRANS(vwsll_vv, vwsll_vv_check)
357
GEN_OPIVX_WIDEN_TRANS(vwsll_vx, vwsll_vx_check)
358
GEN_OPIVI_WIDEN_TRANS(vwsll_vi, IMM_ZX, vwsll_vx, vwsll_vx_check)
359
+
360
+/*
361
+ * Zvkned
362
+ */
363
+
364
+#define ZVKNED_EGS 4
365
+
366
+#define GEN_V_UNMASKED_TRANS(NAME, CHECK, EGS) \
367
+ static bool trans_##NAME(DisasContext *s, arg_##NAME *a) \
368
+ { \
369
+ if (CHECK(s, a)) { \
370
+ TCGv_ptr rd_v, rs2_v; \
371
+ TCGv_i32 desc, egs; \
372
+ uint32_t data = 0; \
373
+ TCGLabel *over = gen_new_label(); \
374
+ \
375
+ if (!s->vstart_eq_zero || !s->vl_eq_vlmax) { \
376
+ /* save opcode for unwinding in case we throw an exception */ \
377
+ decode_save_opc(s); \
378
+ egs = tcg_constant_i32(EGS); \
379
+ gen_helper_egs_check(egs, cpu_env); \
380
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
381
+ } \
382
+ \
383
+ data = FIELD_DP32(data, VDATA, VM, a->vm); \
384
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
385
+ data = FIELD_DP32(data, VDATA, VTA, s->vta); \
386
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s); \
387
+ data = FIELD_DP32(data, VDATA, VMA, s->vma); \
388
+ rd_v = tcg_temp_new_ptr(); \
389
+ rs2_v = tcg_temp_new_ptr(); \
390
+ desc = tcg_constant_i32( \
391
+ simd_desc(s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8, data)); \
392
+ tcg_gen_addi_ptr(rd_v, cpu_env, vreg_ofs(s, a->rd)); \
393
+ tcg_gen_addi_ptr(rs2_v, cpu_env, vreg_ofs(s, a->rs2)); \
394
+ gen_helper_##NAME(rd_v, rs2_v, cpu_env, desc); \
395
+ mark_vs_dirty(s); \
396
+ gen_set_label(over); \
397
+ return true; \
398
+ } \
399
+ return false; \
400
+ }
401
+
402
+static bool vaes_check_vv(DisasContext *s, arg_rmr *a)
403
+{
404
+ int egw_bytes = ZVKNED_EGS << s->sew;
405
+ return s->cfg_ptr->ext_zvkned == true &&
406
+ require_rvv(s) &&
407
+ vext_check_isa_ill(s) &&
408
+ MAXSZ(s) >= egw_bytes &&
409
+ require_align(a->rd, s->lmul) &&
410
+ require_align(a->rs2, s->lmul) &&
411
+ s->sew == MO_32;
412
+}
413
+
414
+static bool vaes_check_overlap(DisasContext *s, int vd, int vs2)
415
+{
416
+ int8_t op_size = s->lmul <= 0 ? 1 : 1 << s->lmul;
417
+ return !is_overlapped(vd, op_size, vs2, 1);
418
+}
419
+
420
+static bool vaes_check_vs(DisasContext *s, arg_rmr *a)
421
+{
422
+ int egw_bytes = ZVKNED_EGS << s->sew;
423
+ return vaes_check_overlap(s, a->rd, a->rs2) &&
424
+ MAXSZ(s) >= egw_bytes &&
425
+ s->cfg_ptr->ext_zvkned == true &&
426
+ require_rvv(s) &&
427
+ vext_check_isa_ill(s) &&
428
+ require_align(a->rd, s->lmul) &&
429
+ s->sew == MO_32;
430
+}
431
+
432
+GEN_V_UNMASKED_TRANS(vaesef_vv, vaes_check_vv, ZVKNED_EGS)
433
+GEN_V_UNMASKED_TRANS(vaesef_vs, vaes_check_vs, ZVKNED_EGS)
434
+GEN_V_UNMASKED_TRANS(vaesdf_vv, vaes_check_vv, ZVKNED_EGS)
435
+GEN_V_UNMASKED_TRANS(vaesdf_vs, vaes_check_vs, ZVKNED_EGS)
436
+GEN_V_UNMASKED_TRANS(vaesdm_vv, vaes_check_vv, ZVKNED_EGS)
437
+GEN_V_UNMASKED_TRANS(vaesdm_vs, vaes_check_vs, ZVKNED_EGS)
438
+GEN_V_UNMASKED_TRANS(vaesz_vs, vaes_check_vs, ZVKNED_EGS)
439
+GEN_V_UNMASKED_TRANS(vaesem_vv, vaes_check_vv, ZVKNED_EGS)
440
+GEN_V_UNMASKED_TRANS(vaesem_vs, vaes_check_vs, ZVKNED_EGS)
441
+
442
+#define GEN_VI_UNMASKED_TRANS(NAME, CHECK, EGS) \
443
+ static bool trans_##NAME(DisasContext *s, arg_##NAME *a) \
444
+ { \
445
+ if (CHECK(s, a)) { \
446
+ TCGv_ptr rd_v, rs2_v; \
447
+ TCGv_i32 uimm_v, desc, egs; \
448
+ uint32_t data = 0; \
449
+ TCGLabel *over = gen_new_label(); \
450
+ \
451
+ if (!s->vstart_eq_zero || !s->vl_eq_vlmax) { \
452
+ /* save opcode for unwinding in case we throw an exception */ \
453
+ decode_save_opc(s); \
454
+ egs = tcg_constant_i32(EGS); \
455
+ gen_helper_egs_check(egs, cpu_env); \
456
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
457
+ } \
458
+ \
459
+ data = FIELD_DP32(data, VDATA, VM, a->vm); \
460
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
461
+ data = FIELD_DP32(data, VDATA, VTA, s->vta); \
462
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s); \
463
+ data = FIELD_DP32(data, VDATA, VMA, s->vma); \
464
+ \
465
+ rd_v = tcg_temp_new_ptr(); \
466
+ rs2_v = tcg_temp_new_ptr(); \
467
+ uimm_v = tcg_constant_i32(a->rs1); \
468
+ desc = tcg_constant_i32( \
469
+ simd_desc(s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8, data)); \
470
+ tcg_gen_addi_ptr(rd_v, cpu_env, vreg_ofs(s, a->rd)); \
471
+ tcg_gen_addi_ptr(rs2_v, cpu_env, vreg_ofs(s, a->rs2)); \
472
+ gen_helper_##NAME(rd_v, rs2_v, uimm_v, cpu_env, desc); \
473
+ mark_vs_dirty(s); \
474
+ gen_set_label(over); \
475
+ return true; \
476
+ } \
477
+ return false; \
478
+ }
479
+
480
+static bool vaeskf1_check(DisasContext *s, arg_vaeskf1_vi *a)
481
+{
482
+ int egw_bytes = ZVKNED_EGS << s->sew;
483
+ return s->cfg_ptr->ext_zvkned == true &&
484
+ require_rvv(s) &&
485
+ vext_check_isa_ill(s) &&
486
+ MAXSZ(s) >= egw_bytes &&
487
+ s->sew == MO_32 &&
488
+ require_align(a->rd, s->lmul) &&
489
+ require_align(a->rs2, s->lmul);
490
+}
491
+
492
+static bool vaeskf2_check(DisasContext *s, arg_vaeskf2_vi *a)
493
+{
494
+ int egw_bytes = ZVKNED_EGS << s->sew;
495
+ return s->cfg_ptr->ext_zvkned == true &&
496
+ require_rvv(s) &&
497
+ vext_check_isa_ill(s) &&
498
+ MAXSZ(s) >= egw_bytes &&
499
+ s->sew == MO_32 &&
500
+ require_align(a->rd, s->lmul) &&
501
+ require_align(a->rs2, s->lmul);
502
+}
503
+
504
+GEN_VI_UNMASKED_TRANS(vaeskf1_vi, vaeskf1_check, ZVKNED_EGS)
505
+GEN_VI_UNMASKED_TRANS(vaeskf2_vi, vaeskf2_check, ZVKNED_EGS)
506
--
507
2.41.0
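For readers mapping the vaeskf1.vi helper above back to the textbook algorithm: each element group performs one standard AES-128 key-expansion step. A minimal scalar sketch follows, using the same AES_sbox the helper relies on; the function names here are illustrative and not part of the patch, and the snippet assumes it is built inside the QEMU tree where crypto/aes.h is available.

#include <stdint.h>
#include "crypto/aes.h"   /* AES_sbox, as used by vcrypto_helper.c above */

static inline uint32_t ror32_c(uint32_t x, unsigned n)
{
    return (x >> n) | (x << (32 - n));
}

/* SubWord: apply the AES S-box to every byte of a 32-bit word. */
static inline uint32_t aes_subword(uint32_t w)
{
    return ((uint32_t)AES_sbox[(w >> 24) & 0xff] << 24) |
           ((uint32_t)AES_sbox[(w >> 16) & 0xff] << 16) |
           ((uint32_t)AES_sbox[(w >> 8) & 0xff] << 8) |
           (uint32_t)AES_sbox[w & 0xff];
}

/*
 * One AES-128 key-expansion step: derive the next four round-key words
 * from the previous four and the round constant.  ror32(rk[3], 8) plays
 * the role of RotWord in the helper's word layout.
 */
static void aes128_key_expand_step(uint32_t next[4], const uint32_t rk[4],
                                   uint32_t rcon)
{
    next[0] = rk[0] ^ aes_subword(ror32_c(rk[3], 8)) ^ rcon;
    next[1] = rk[1] ^ next[0];
    next[2] = rk[2] ^ next[1];
    next[3] = rk[3] ^ next[2];
}

vaeskf2.vi alternates this rcon step with a plain SubWord step, which is what the uimm parity check in the helper selects between.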
1
From: Bin Meng <bin.meng@windriver.com>
1
From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
2
2
3
Add a config option to enable support for native M-mode debug.
3
This commit adds support for the Zvknh vector-crypto extension, which
4
This is disabled by default and can be enabled with 'debug=true'.
4
consists of the following instructions:
5
5
6
Signed-off-by: Bin Meng <bin.meng@windriver.com>
6
* vsha2ms.vv
7
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
7
* vsha2c[hl].vv
8
Message-Id: <20220421003324.1134983-3-bmeng.cn@gmail.com>
8
9
Translation functions are defined in
10
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
11
`target/riscv/vcrypto_helper.c`.
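For background, vsha2ms.vv computes four new SHA-2 message-schedule words per element group. A minimal scalar sketch of the SEW=32 (SHA-256) case, assuming the standard FIPS 180-4 recurrence; the names below are illustrative and not part of this patch.

#include <stdint.h>

static inline uint32_t ror32_c(uint32_t x, unsigned n)
{
    return (x >> n) | (x << (32 - n));
}

/* FIPS 180-4 small sigma functions (sig0_sha256/sig1_sha256 in the helper). */
static inline uint32_t sigma0(uint32_t x)
{
    return ror32_c(x, 7) ^ ror32_c(x, 18) ^ (x >> 3);
}

static inline uint32_t sigma1(uint32_t x)
{
    return ror32_c(x, 17) ^ ror32_c(x, 19) ^ (x >> 10);
}

/* Extend the 16 block words to the full 64-entry schedule. */
static void sha256_message_schedule(uint32_t W[64])
{
    for (int t = 16; t < 64; t++) {
        W[t] = sigma1(W[t - 2]) + W[t - 7] + sigma0(W[t - 15]) + W[t - 16];
    }
}

The SEW=64 (SHA-512) case is the same recurrence with the 64-bit rotation amounts, and vsha2c[hl].vv perform two rounds of the corresponding compression function per element group, as the helpers in the diff below show.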
12
13
Co-authored-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
14
Co-authored-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
15
[max.chou@sifive.com: Replaced vstart checking by TCG op]
16
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
17
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
18
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
19
Signed-off-by: Max Chou <max.chou@sifive.com>
20
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
21
[max.chou@sifive.com: Exposed x-zvknha & x-zvknhb properties]
22
[max.chou@sifive.com: Moved SEW selection so that it happens during
23
translation]
24
Message-ID: <20230711165917.2629866-11-max.chou@sifive.com>
9
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
25
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
10
---
26
---
11
target/riscv/cpu.h | 4 +++-
27
target/riscv/cpu_cfg.h | 2 +
12
target/riscv/cpu.c | 5 +++++
28
target/riscv/helper.h | 6 +
13
2 files changed, 8 insertions(+), 1 deletion(-)
29
target/riscv/insn32.decode | 5 +
30
target/riscv/cpu.c | 13 +-
31
target/riscv/vcrypto_helper.c | 238 +++++++++++++++++++++++
32
target/riscv/insn_trans/trans_rvvk.c.inc | 129 ++++++++++++
33
6 files changed, 390 insertions(+), 3 deletions(-)
14
34
15
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
35
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
16
index XXXXXXX..XXXXXXX 100644
36
index XXXXXXX..XXXXXXX 100644
17
--- a/target/riscv/cpu.h
37
--- a/target/riscv/cpu_cfg.h
18
+++ b/target/riscv/cpu.h
38
+++ b/target/riscv/cpu_cfg.h
19
@@ -XXX,XX +XXX,XX @@ enum {
20
RISCV_FEATURE_PMP,
21
RISCV_FEATURE_EPMP,
22
RISCV_FEATURE_MISA,
23
- RISCV_FEATURE_AIA
24
+ RISCV_FEATURE_AIA,
25
+ RISCV_FEATURE_DEBUG
26
};
27
28
/* Privileged specification version */
29
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
39
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
30
bool pmp;
40
bool ext_zvbb;
31
bool epmp;
41
bool ext_zvbc;
32
bool aia;
42
bool ext_zvkned;
33
+ bool debug;
43
+ bool ext_zvknha;
34
uint64_t resetvec;
44
+ bool ext_zvknhb;
35
};
45
bool ext_zmmul;
36
46
bool ext_zvfbfmin;
47
bool ext_zvfbfwma;
48
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/riscv/helper.h
51
+++ b/target/riscv/helper.h
52
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_4(vaesdm_vs, void, ptr, ptr, env, i32)
53
DEF_HELPER_4(vaesz_vs, void, ptr, ptr, env, i32)
54
DEF_HELPER_5(vaeskf1_vi, void, ptr, ptr, i32, env, i32)
55
DEF_HELPER_5(vaeskf2_vi, void, ptr, ptr, i32, env, i32)
56
+
57
+DEF_HELPER_5(vsha2ms_vv, void, ptr, ptr, ptr, env, i32)
58
+DEF_HELPER_5(vsha2ch32_vv, void, ptr, ptr, ptr, env, i32)
59
+DEF_HELPER_5(vsha2ch64_vv, void, ptr, ptr, ptr, env, i32)
60
+DEF_HELPER_5(vsha2cl32_vv, void, ptr, ptr, ptr, env, i32)
61
+DEF_HELPER_5(vsha2cl64_vv, void, ptr, ptr, ptr, env, i32)
62
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
63
index XXXXXXX..XXXXXXX 100644
64
--- a/target/riscv/insn32.decode
65
+++ b/target/riscv/insn32.decode
66
@@ -XXX,XX +XXX,XX @@ vaesdm_vs 101001 1 ..... 00000 010 ..... 1110111 @r2_vm_1
67
vaesz_vs 101001 1 ..... 00111 010 ..... 1110111 @r2_vm_1
68
vaeskf1_vi 100010 1 ..... ..... 010 ..... 1110111 @r_vm_1
69
vaeskf2_vi 101010 1 ..... ..... 010 ..... 1110111 @r_vm_1
70
+
71
+# *** Zvknh vector crypto extension ***
72
+vsha2ms_vv 101101 1 ..... ..... 010 ..... 1110111 @r_vm_1
73
+vsha2ch_vv 101110 1 ..... ..... 010 ..... 1110111 @r_vm_1
74
+vsha2cl_vv 101111 1 ..... ..... 010 ..... 1110111 @r_vm_1
37
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
75
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
38
index XXXXXXX..XXXXXXX 100644
76
index XXXXXXX..XXXXXXX 100644
39
--- a/target/riscv/cpu.c
77
--- a/target/riscv/cpu.c
40
+++ b/target/riscv/cpu.c
78
+++ b/target/riscv/cpu.c
41
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
79
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
42
riscv_set_feature(env, RISCV_FEATURE_AIA);
80
ISA_EXT_DATA_ENTRY(zvfh, PRIV_VERSION_1_12_0, ext_zvfh),
81
ISA_EXT_DATA_ENTRY(zvfhmin, PRIV_VERSION_1_12_0, ext_zvfhmin),
82
ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
83
+ ISA_EXT_DATA_ENTRY(zvknha, PRIV_VERSION_1_12_0, ext_zvknha),
84
+ ISA_EXT_DATA_ENTRY(zvknhb, PRIV_VERSION_1_12_0, ext_zvknhb),
85
ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
86
ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
87
ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
88
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
89
* In principle Zve*x would also suffice here, were they supported
90
* in qemu
91
*/
92
- if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned) && !cpu->cfg.ext_zve32f) {
93
+ if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha) &&
94
+ !cpu->cfg.ext_zve32f) {
95
error_setg(errp,
96
"Vector crypto extensions require V or Zve* extensions");
97
return;
43
}
98
}
44
99
45
+ if (cpu->cfg.debug) {
100
- if (cpu->cfg.ext_zvbc && !cpu->cfg.ext_zve64f) {
46
+ riscv_set_feature(env, RISCV_FEATURE_DEBUG);
101
- error_setg(errp, "Zvbc extension requires V or Zve64{f,d} extensions");
47
+ }
102
+ if ((cpu->cfg.ext_zvbc || cpu->cfg.ext_zvknhb) && !cpu->cfg.ext_zve64f) {
48
+
103
+ error_setg(
49
set_resetvec(env, cpu->cfg.resetvec);
104
+ errp,
50
105
+ "Zvbc and Zvknhb extensions require V or Zve64{f,d} extensions");
51
/* Validate that MISA_MXL is set properly. */
106
return;
52
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_properties[] = {
107
}
53
DEFINE_PROP_BOOL("Zve64f", RISCVCPU, cfg.ext_zve64f, false),
108
54
DEFINE_PROP_BOOL("mmu", RISCVCPU, cfg.mmu, true),
109
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
55
DEFINE_PROP_BOOL("pmp", RISCVCPU, cfg.pmp, true),
110
DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
56
+ DEFINE_PROP_BOOL("debug", RISCVCPU, cfg.debug, false),
111
DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
57
112
DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
58
DEFINE_PROP_STRING("priv_spec", RISCVCPU, cfg.priv_spec),
113
+ DEFINE_PROP_BOOL("x-zvknha", RISCVCPU, cfg.ext_zvknha, false),
59
DEFINE_PROP_STRING("vext_spec", RISCVCPU, cfg.vext_spec),
114
+ DEFINE_PROP_BOOL("x-zvknhb", RISCVCPU, cfg.ext_zvknhb, false),
115
116
DEFINE_PROP_END_OF_LIST(),
117
};
118
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
119
index XXXXXXX..XXXXXXX 100644
120
--- a/target/riscv/vcrypto_helper.c
121
+++ b/target/riscv/vcrypto_helper.c
122
@@ -XXX,XX +XXX,XX @@ void HELPER(vaeskf2_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
123
/* set tail elements to 1s */
124
vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4);
125
}
126
+
127
+static inline uint32_t sig0_sha256(uint32_t x)
128
+{
129
+ return ror32(x, 7) ^ ror32(x, 18) ^ (x >> 3);
130
+}
131
+
132
+static inline uint32_t sig1_sha256(uint32_t x)
133
+{
134
+ return ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10);
135
+}
136
+
137
+static inline uint64_t sig0_sha512(uint64_t x)
138
+{
139
+ return ror64(x, 1) ^ ror64(x, 8) ^ (x >> 7);
140
+}
141
+
142
+static inline uint64_t sig1_sha512(uint64_t x)
143
+{
144
+ return ror64(x, 19) ^ ror64(x, 61) ^ (x >> 6);
145
+}
146
+
147
+static inline void vsha2ms_e32(uint32_t *vd, uint32_t *vs1, uint32_t *vs2)
148
+{
149
+ uint32_t res[4];
150
+ res[0] = sig1_sha256(vs1[H4(2)]) + vs2[H4(1)] + sig0_sha256(vd[H4(1)]) +
151
+ vd[H4(0)];
152
+ res[1] = sig1_sha256(vs1[H4(3)]) + vs2[H4(2)] + sig0_sha256(vd[H4(2)]) +
153
+ vd[H4(1)];
154
+ res[2] =
155
+ sig1_sha256(res[0]) + vs2[H4(3)] + sig0_sha256(vd[H4(3)]) + vd[H4(2)];
156
+ res[3] =
157
+ sig1_sha256(res[1]) + vs1[H4(0)] + sig0_sha256(vs2[H4(0)]) + vd[H4(3)];
158
+ vd[H4(3)] = res[3];
159
+ vd[H4(2)] = res[2];
160
+ vd[H4(1)] = res[1];
161
+ vd[H4(0)] = res[0];
162
+}
163
+
164
+static inline void vsha2ms_e64(uint64_t *vd, uint64_t *vs1, uint64_t *vs2)
165
+{
166
+ uint64_t res[4];
167
+ res[0] = sig1_sha512(vs1[2]) + vs2[1] + sig0_sha512(vd[1]) + vd[0];
168
+ res[1] = sig1_sha512(vs1[3]) + vs2[2] + sig0_sha512(vd[2]) + vd[1];
169
+ res[2] = sig1_sha512(res[0]) + vs2[3] + sig0_sha512(vd[3]) + vd[2];
170
+ res[3] = sig1_sha512(res[1]) + vs1[0] + sig0_sha512(vs2[0]) + vd[3];
171
+ vd[3] = res[3];
172
+ vd[2] = res[2];
173
+ vd[1] = res[1];
174
+ vd[0] = res[0];
175
+}
176
+
177
+void HELPER(vsha2ms_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
178
+ uint32_t desc)
179
+{
180
+ uint32_t sew = FIELD_EX64(env->vtype, VTYPE, VSEW);
181
+ uint32_t esz = sew == MO_32 ? 4 : 8;
182
+ uint32_t total_elems;
183
+ uint32_t vta = vext_vta(desc);
184
+
185
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
186
+ if (sew == MO_32) {
187
+ vsha2ms_e32(((uint32_t *)vd) + i * 4, ((uint32_t *)vs1) + i * 4,
188
+ ((uint32_t *)vs2) + i * 4);
189
+ } else {
190
+ /* If not 32 then SEW should be 64 */
191
+ vsha2ms_e64(((uint64_t *)vd) + i * 4, ((uint64_t *)vs1) + i * 4,
192
+ ((uint64_t *)vs2) + i * 4);
193
+ }
194
+ }
195
+ /* set tail elements to 1s */
196
+ total_elems = vext_get_total_elems(env, desc, esz);
197
+ vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
198
+ env->vstart = 0;
199
+}
200
+
201
+static inline uint64_t sum0_64(uint64_t x)
202
+{
203
+ return ror64(x, 28) ^ ror64(x, 34) ^ ror64(x, 39);
204
+}
205
+
206
+static inline uint32_t sum0_32(uint32_t x)
207
+{
208
+ return ror32(x, 2) ^ ror32(x, 13) ^ ror32(x, 22);
209
+}
210
+
211
+static inline uint64_t sum1_64(uint64_t x)
212
+{
213
+ return ror64(x, 14) ^ ror64(x, 18) ^ ror64(x, 41);
214
+}
215
+
216
+static inline uint32_t sum1_32(uint32_t x)
217
+{
218
+ return ror32(x, 6) ^ ror32(x, 11) ^ ror32(x, 25);
219
+}
220
+
221
+#define ch(x, y, z) ((x & y) ^ ((~x) & z))
222
+
223
+#define maj(x, y, z) ((x & y) ^ (x & z) ^ (y & z))
224
+
225
+static void vsha2c_64(uint64_t *vs2, uint64_t *vd, uint64_t *vs1)
226
+{
227
+ uint64_t a = vs2[3], b = vs2[2], e = vs2[1], f = vs2[0];
228
+ uint64_t c = vd[3], d = vd[2], g = vd[1], h = vd[0];
229
+ uint64_t W0 = vs1[0], W1 = vs1[1];
230
+ uint64_t T1 = h + sum1_64(e) + ch(e, f, g) + W0;
231
+ uint64_t T2 = sum0_64(a) + maj(a, b, c);
232
+
233
+ h = g;
234
+ g = f;
235
+ f = e;
236
+ e = d + T1;
237
+ d = c;
238
+ c = b;
239
+ b = a;
240
+ a = T1 + T2;
241
+
242
+ T1 = h + sum1_64(e) + ch(e, f, g) + W1;
243
+ T2 = sum0_64(a) + maj(a, b, c);
244
+ h = g;
245
+ g = f;
246
+ f = e;
247
+ e = d + T1;
248
+ d = c;
249
+ c = b;
250
+ b = a;
251
+ a = T1 + T2;
252
+
253
+ vd[0] = f;
254
+ vd[1] = e;
255
+ vd[2] = b;
256
+ vd[3] = a;
257
+}
258
+
259
+static void vsha2c_32(uint32_t *vs2, uint32_t *vd, uint32_t *vs1)
260
+{
261
+ uint32_t a = vs2[H4(3)], b = vs2[H4(2)], e = vs2[H4(1)], f = vs2[H4(0)];
262
+ uint32_t c = vd[H4(3)], d = vd[H4(2)], g = vd[H4(1)], h = vd[H4(0)];
263
+ uint32_t W0 = vs1[H4(0)], W1 = vs1[H4(1)];
264
+ uint32_t T1 = h + sum1_32(e) + ch(e, f, g) + W0;
265
+ uint32_t T2 = sum0_32(a) + maj(a, b, c);
266
+
267
+ h = g;
268
+ g = f;
269
+ f = e;
270
+ e = d + T1;
271
+ d = c;
272
+ c = b;
273
+ b = a;
274
+ a = T1 + T2;
275
+
276
+ T1 = h + sum1_32(e) + ch(e, f, g) + W1;
277
+ T2 = sum0_32(a) + maj(a, b, c);
278
+ h = g;
279
+ g = f;
280
+ f = e;
281
+ e = d + T1;
282
+ d = c;
283
+ c = b;
284
+ b = a;
285
+ a = T1 + T2;
286
+
287
+ vd[H4(0)] = f;
288
+ vd[H4(1)] = e;
289
+ vd[H4(2)] = b;
290
+ vd[H4(3)] = a;
291
+}
292
+
293
+void HELPER(vsha2ch32_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
294
+ uint32_t desc)
295
+{
296
+ const uint32_t esz = 4;
297
+ uint32_t total_elems;
298
+ uint32_t vta = vext_vta(desc);
299
+
300
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
301
+ vsha2c_32(((uint32_t *)vs2) + 4 * i, ((uint32_t *)vd) + 4 * i,
302
+ ((uint32_t *)vs1) + 4 * i + 2);
303
+ }
304
+
305
+ /* set tail elements to 1s */
306
+ total_elems = vext_get_total_elems(env, desc, esz);
307
+ vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
308
+ env->vstart = 0;
309
+}
310
+
311
+void HELPER(vsha2ch64_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
312
+ uint32_t desc)
313
+{
314
+ const uint32_t esz = 8;
315
+ uint32_t total_elems;
316
+ uint32_t vta = vext_vta(desc);
317
+
318
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
319
+ vsha2c_64(((uint64_t *)vs2) + 4 * i, ((uint64_t *)vd) + 4 * i,
320
+ ((uint64_t *)vs1) + 4 * i + 2);
321
+ }
322
+
323
+ /* set tail elements to 1s */
324
+ total_elems = vext_get_total_elems(env, desc, esz);
325
+ vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
326
+ env->vstart = 0;
327
+}
328
+
329
+void HELPER(vsha2cl32_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
330
+ uint32_t desc)
331
+{
332
+ const uint32_t esz = 4;
333
+ uint32_t total_elems;
334
+ uint32_t vta = vext_vta(desc);
335
+
336
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
337
+ vsha2c_32(((uint32_t *)vs2) + 4 * i, ((uint32_t *)vd) + 4 * i,
338
+ (((uint32_t *)vs1) + 4 * i));
339
+ }
340
+
341
+ /* set tail elements to 1s */
342
+ total_elems = vext_get_total_elems(env, desc, esz);
343
+ vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
344
+ env->vstart = 0;
345
+}
346
+
347
+void HELPER(vsha2cl64_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
348
+ uint32_t desc)
349
+{
350
+ uint32_t esz = 8;
351
+ uint32_t total_elems;
352
+ uint32_t vta = vext_vta(desc);
353
+
354
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
355
+ vsha2c_64(((uint64_t *)vs2) + 4 * i, ((uint64_t *)vd) + 4 * i,
356
+ (((uint64_t *)vs1) + 4 * i));
357
+ }
358
+
359
+ /* set tail elements to 1s */
360
+ total_elems = vext_get_total_elems(env, desc, esz);
361
+ vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
362
+ env->vstart = 0;
363
+}
364
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
365
index XXXXXXX..XXXXXXX 100644
366
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
367
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
368
@@ -XXX,XX +XXX,XX @@ static bool vaeskf2_check(DisasContext *s, arg_vaeskf2_vi *a)
369
370
GEN_VI_UNMASKED_TRANS(vaeskf1_vi, vaeskf1_check, ZVKNED_EGS)
371
GEN_VI_UNMASKED_TRANS(vaeskf2_vi, vaeskf2_check, ZVKNED_EGS)
372
+
373
+/*
374
+ * Zvknh
375
+ */
376
+
377
+#define ZVKNH_EGS 4
378
+
379
+#define GEN_VV_UNMASKED_TRANS(NAME, CHECK, EGS) \
380
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
381
+ { \
382
+ if (CHECK(s, a)) { \
383
+ uint32_t data = 0; \
384
+ TCGLabel *over = gen_new_label(); \
385
+ TCGv_i32 egs; \
386
+ \
387
+ if (!s->vstart_eq_zero || !s->vl_eq_vlmax) { \
388
+ /* save opcode for unwinding in case we throw an exception */ \
389
+ decode_save_opc(s); \
390
+ egs = tcg_constant_i32(EGS); \
391
+ gen_helper_egs_check(egs, cpu_env); \
392
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
393
+ } \
394
+ \
395
+ data = FIELD_DP32(data, VDATA, VM, a->vm); \
396
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
397
+ data = FIELD_DP32(data, VDATA, VTA, s->vta); \
398
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s); \
399
+ data = FIELD_DP32(data, VDATA, VMA, s->vma); \
400
+ \
401
+ tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1), \
402
+ vreg_ofs(s, a->rs2), cpu_env, \
403
+ s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8, \
404
+ data, gen_helper_##NAME); \
405
+ \
406
+ mark_vs_dirty(s); \
407
+ gen_set_label(over); \
408
+ return true; \
409
+ } \
410
+ return false; \
411
+ }
412
+
413
+static bool vsha_check_sew(DisasContext *s)
414
+{
415
+ return (s->cfg_ptr->ext_zvknha == true && s->sew == MO_32) ||
416
+ (s->cfg_ptr->ext_zvknhb == true &&
417
+ (s->sew == MO_32 || s->sew == MO_64));
418
+}
419
+
420
+static bool vsha_check(DisasContext *s, arg_rmrr *a)
421
+{
422
+ int egw_bytes = ZVKNH_EGS << s->sew;
423
+ int mult = 1 << MAX(s->lmul, 0);
424
+ return opivv_check(s, a) &&
425
+ vsha_check_sew(s) &&
426
+ MAXSZ(s) >= egw_bytes &&
427
+ !is_overlapped(a->rd, mult, a->rs1, mult) &&
428
+ !is_overlapped(a->rd, mult, a->rs2, mult) &&
429
+ s->lmul >= 0;
430
+}
431
+
432
+GEN_VV_UNMASKED_TRANS(vsha2ms_vv, vsha_check, ZVKNH_EGS)
433
+
434
+static bool trans_vsha2cl_vv(DisasContext *s, arg_rmrr *a)
435
+{
436
+ if (vsha_check(s, a)) {
437
+ uint32_t data = 0;
438
+ TCGLabel *over = gen_new_label();
439
+ TCGv_i32 egs;
440
+
441
+ if (!s->vstart_eq_zero || !s->vl_eq_vlmax) {
442
+ /* save opcode for unwinding in case we throw an exception */
443
+ decode_save_opc(s);
444
+ egs = tcg_constant_i32(ZVKNH_EGS);
445
+ gen_helper_egs_check(egs, cpu_env);
446
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
447
+ }
448
+
449
+ data = FIELD_DP32(data, VDATA, VM, a->vm);
450
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
451
+ data = FIELD_DP32(data, VDATA, VTA, s->vta);
452
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);
453
+ data = FIELD_DP32(data, VDATA, VMA, s->vma);
454
+
455
+ tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),
456
+ vreg_ofs(s, a->rs2), cpu_env, s->cfg_ptr->vlen / 8,
457
+ s->cfg_ptr->vlen / 8, data,
458
+ s->sew == MO_32 ?
459
+ gen_helper_vsha2cl32_vv : gen_helper_vsha2cl64_vv);
460
+
461
+ mark_vs_dirty(s);
462
+ gen_set_label(over);
463
+ return true;
464
+ }
465
+ return false;
466
+}
467
+
468
+static bool trans_vsha2ch_vv(DisasContext *s, arg_rmrr *a)
469
+{
470
+ if (vsha_check(s, a)) {
471
+ uint32_t data = 0;
472
+ TCGLabel *over = gen_new_label();
473
+ TCGv_i32 egs;
474
+
475
+ if (!s->vstart_eq_zero || !s->vl_eq_vlmax) {
476
+ /* save opcode for unwinding in case we throw an exception */
477
+ decode_save_opc(s);
478
+ egs = tcg_constant_i32(ZVKNH_EGS);
479
+ gen_helper_egs_check(egs, cpu_env);
480
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
481
+ }
482
+
483
+ data = FIELD_DP32(data, VDATA, VM, a->vm);
484
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
485
+ data = FIELD_DP32(data, VDATA, VTA, s->vta);
486
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);
487
+ data = FIELD_DP32(data, VDATA, VMA, s->vma);
488
+
489
+ tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),
490
+ vreg_ofs(s, a->rs2), cpu_env, s->cfg_ptr->vlen / 8,
491
+ s->cfg_ptr->vlen / 8, data,
492
+ s->sew == MO_32 ?
493
+ gen_helper_vsha2ch32_vv : gen_helper_vsha2ch64_vv);
494
+
495
+ mark_vs_dirty(s);
496
+ gen_set_label(over);
497
+ return true;
498
+ }
499
+ return false;
500
+}
60
--
501
--
61
2.35.1
502
2.41.0
1
From: Bin Meng <bin.meng@windriver.com>
1
From: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
2
2
3
Implement .debug_excp_handler, .debug_check_{breakpoint, watchpoint}
3
This commit adds support for the Zvksh vector-crypto extension, which
4
TCGCPUOps and hook them into riscv_tcg_ops.
4
consists of the following instructions:
5
5
6
Signed-off-by: Bin Meng <bin.meng@windriver.com>
6
* vsm3me.vv
7
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
7
* vsm3c.vi
8
Message-Id: <20220421003324.1134983-2-bmeng.cn@gmail.com>
8
9
Translation functions are defined in
10
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
11
`target/riscv/vcrypto_helper.c`.
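For background, vsm3me.vv evaluates the SM3 message-expansion recurrence eight words at a time; zvksh_w in the helper is the same formula with its inputs passed explicitly. A scalar sketch of that recurrence, taken from the SM3 specification; the names below are illustrative and not part of this patch.

#include <stdint.h>

static inline uint32_t rol32_c(uint32_t x, unsigned n)
{
    return (x << n) | (x >> (32 - n));
}

/* SM3 permutation P1 (p1() in the helper). */
static inline uint32_t sm3_p1(uint32_t x)
{
    return x ^ rol32_c(x, 15) ^ rol32_c(x, 23);
}

/* Extend the 16 block words to the full 68-entry schedule. */
static void sm3_message_expand(uint32_t W[68])
{
    for (int j = 16; j < 68; j++) {
        W[j] = sm3_p1(W[j - 16] ^ W[j - 9] ^ rol32_c(W[j - 3], 15))
               ^ rol32_c(W[j - 13], 7) ^ W[j - 6];
    }
}

vsm3c.vi then performs two SM3 compression rounds per element group, which is what the ff_j/gg_j/p_0 helpers in the diff below implement.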
12
13
Co-authored-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
14
[max.chou@sifive.com: Replaced vstart checking by TCG op]
15
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
16
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
17
Signed-off-by: Max Chou <max.chou@sifive.com>
18
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
19
[max.chou@sifive.com: Exposed x-zvksh property]
20
Message-ID: <20230711165917.2629866-12-max.chou@sifive.com>
9
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
21
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
10
---
22
---
11
target/riscv/debug.h | 4 +++
23
target/riscv/cpu_cfg.h | 1 +
12
target/riscv/cpu.c | 3 ++
24
target/riscv/helper.h | 3 +
13
target/riscv/debug.c | 75 ++++++++++++++++++++++++++++++++++++++++++++
25
target/riscv/insn32.decode | 4 +
14
3 files changed, 82 insertions(+)
26
target/riscv/cpu.c | 6 +-
15
27
target/riscv/vcrypto_helper.c | 134 +++++++++++++++++++++++
16
diff --git a/target/riscv/debug.h b/target/riscv/debug.h
28
target/riscv/insn_trans/trans_rvvk.c.inc | 31 ++++++
17
index XXXXXXX..XXXXXXX 100644
29
6 files changed, 177 insertions(+), 2 deletions(-)
18
--- a/target/riscv/debug.h
30
19
+++ b/target/riscv/debug.h
31
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
20
@@ -XXX,XX +XXX,XX @@ void tselect_csr_write(CPURISCVState *env, target_ulong val);
32
index XXXXXXX..XXXXXXX 100644
21
target_ulong tdata_csr_read(CPURISCVState *env, int tdata_index);
33
--- a/target/riscv/cpu_cfg.h
22
void tdata_csr_write(CPURISCVState *env, int tdata_index, target_ulong val);
34
+++ b/target/riscv/cpu_cfg.h
23
35
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
24
+void riscv_cpu_debug_excp_handler(CPUState *cs);
36
bool ext_zvkned;
25
+bool riscv_cpu_debug_check_breakpoint(CPUState *cs);
37
bool ext_zvknha;
26
+bool riscv_cpu_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp);
38
bool ext_zvknhb;
27
+
39
+ bool ext_zvksh;
28
#endif /* RISCV_DEBUG_H */
40
bool ext_zmmul;
41
bool ext_zvfbfmin;
42
bool ext_zvfbfwma;
43
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/riscv/helper.h
46
+++ b/target/riscv/helper.h
47
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_5(vsha2ch32_vv, void, ptr, ptr, ptr, env, i32)
48
DEF_HELPER_5(vsha2ch64_vv, void, ptr, ptr, ptr, env, i32)
49
DEF_HELPER_5(vsha2cl32_vv, void, ptr, ptr, ptr, env, i32)
50
DEF_HELPER_5(vsha2cl64_vv, void, ptr, ptr, ptr, env, i32)
51
+
52
+DEF_HELPER_5(vsm3me_vv, void, ptr, ptr, ptr, env, i32)
53
+DEF_HELPER_5(vsm3c_vi, void, ptr, ptr, i32, env, i32)
54
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
55
index XXXXXXX..XXXXXXX 100644
56
--- a/target/riscv/insn32.decode
57
+++ b/target/riscv/insn32.decode
58
@@ -XXX,XX +XXX,XX @@ vaeskf2_vi 101010 1 ..... ..... 010 ..... 1110111 @r_vm_1
59
vsha2ms_vv 101101 1 ..... ..... 010 ..... 1110111 @r_vm_1
60
vsha2ch_vv 101110 1 ..... ..... 010 ..... 1110111 @r_vm_1
61
vsha2cl_vv 101111 1 ..... ..... 010 ..... 1110111 @r_vm_1
62
+
63
+# *** Zvksh vector crypto extension ***
64
+vsm3me_vv 100000 1 ..... ..... 010 ..... 1110111 @r_vm_1
65
+vsm3c_vi 101011 1 ..... ..... 010 ..... 1110111 @r_vm_1
29
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
66
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
30
index XXXXXXX..XXXXXXX 100644
67
index XXXXXXX..XXXXXXX 100644
31
--- a/target/riscv/cpu.c
68
--- a/target/riscv/cpu.c
32
+++ b/target/riscv/cpu.c
69
+++ b/target/riscv/cpu.c
33
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps riscv_tcg_ops = {
70
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
34
.do_interrupt = riscv_cpu_do_interrupt,
71
ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
35
.do_transaction_failed = riscv_cpu_do_transaction_failed,
72
ISA_EXT_DATA_ENTRY(zvknha, PRIV_VERSION_1_12_0, ext_zvknha),
36
.do_unaligned_access = riscv_cpu_do_unaligned_access,
73
ISA_EXT_DATA_ENTRY(zvknhb, PRIV_VERSION_1_12_0, ext_zvknhb),
37
+ .debug_excp_handler = riscv_cpu_debug_excp_handler,
74
+ ISA_EXT_DATA_ENTRY(zvksh, PRIV_VERSION_1_12_0, ext_zvksh),
38
+ .debug_check_breakpoint = riscv_cpu_debug_check_breakpoint,
75
ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
39
+ .debug_check_watchpoint = riscv_cpu_debug_check_watchpoint,
76
ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
40
#endif /* !CONFIG_USER_ONLY */
77
ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
78
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
79
* In principle Zve*x would also suffice here, were they supported
80
* in qemu
81
*/
82
- if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha) &&
83
- !cpu->cfg.ext_zve32f) {
84
+ if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha ||
85
+ cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
86
error_setg(errp,
87
"Vector crypto extensions require V or Zve* extensions");
88
return;
89
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
90
DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
91
DEFINE_PROP_BOOL("x-zvknha", RISCVCPU, cfg.ext_zvknha, false),
92
DEFINE_PROP_BOOL("x-zvknhb", RISCVCPU, cfg.ext_zvknhb, false),
93
+ DEFINE_PROP_BOOL("x-zvksh", RISCVCPU, cfg.ext_zvksh, false),
94
95
DEFINE_PROP_END_OF_LIST(),
41
};
96
};
42
97
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
43
diff --git a/target/riscv/debug.c b/target/riscv/debug.c
98
index XXXXXXX..XXXXXXX 100644
44
index XXXXXXX..XXXXXXX 100644
99
--- a/target/riscv/vcrypto_helper.c
45
--- a/target/riscv/debug.c
100
+++ b/target/riscv/vcrypto_helper.c
46
+++ b/target/riscv/debug.c
101
@@ -XXX,XX +XXX,XX @@ void HELPER(vsha2cl64_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
47
@@ -XXX,XX +XXX,XX @@ void tdata_csr_write(CPURISCVState *env, int tdata_index, target_ulong val)
102
vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
48
103
env->vstart = 0;
49
return write_func(env, env->trigger_cur, tdata_index, val);
50
}
104
}
51
+
105
+
52
+void riscv_cpu_debug_excp_handler(CPUState *cs)
106
+static inline uint32_t p1(uint32_t x)
53
+{
107
+{
54
+ RISCVCPU *cpu = RISCV_CPU(cs);
108
+ return x ^ rol32(x, 15) ^ rol32(x, 23);
55
+ CPURISCVState *env = &cpu->env;
109
+}
56
+
110
+
57
+ if (cs->watchpoint_hit) {
111
+static inline uint32_t zvksh_w(uint32_t m16, uint32_t m9, uint32_t m3,
58
+ if (cs->watchpoint_hit->flags & BP_CPU) {
112
+ uint32_t m13, uint32_t m6)
59
+ cs->watchpoint_hit = NULL;
113
+{
60
+ riscv_raise_exception(env, RISCV_EXCP_BREAKPOINT, 0);
114
+ return p1(m16 ^ m9 ^ rol32(m3, 15)) ^ rol32(m13, 7) ^ m6;
61
+ }
115
+}
62
+ } else {
116
+
63
+ if (cpu_breakpoint_test(cs, env->pc, BP_CPU)) {
117
+void HELPER(vsm3me_vv)(void *vd_vptr, void *vs1_vptr, void *vs2_vptr,
64
+ riscv_raise_exception(env, RISCV_EXCP_BREAKPOINT, 0);
118
+ CPURISCVState *env, uint32_t desc)
119
+{
120
+ uint32_t esz = memop_size(FIELD_EX64(env->vtype, VTYPE, VSEW));
121
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
122
+ uint32_t vta = vext_vta(desc);
123
+ uint32_t *vd = vd_vptr;
124
+ uint32_t *vs1 = vs1_vptr;
125
+ uint32_t *vs2 = vs2_vptr;
126
+
127
+ for (int i = env->vstart / 8; i < env->vl / 8; i++) {
128
+ uint32_t w[24];
129
+ for (int j = 0; j < 8; j++) {
130
+ w[j] = bswap32(vs1[H4((i * 8) + j)]);
131
+ w[j + 8] = bswap32(vs2[H4((i * 8) + j)]);
132
+ }
133
+ for (int j = 0; j < 8; j++) {
134
+ w[j + 16] =
135
+ zvksh_w(w[j], w[j + 7], w[j + 13], w[j + 3], w[j + 10]);
136
+ }
137
+ for (int j = 0; j < 8; j++) {
138
+ vd[(i * 8) + j] = bswap32(w[H4(j + 16)]);
65
+ }
139
+ }
66
+ }
140
+ }
67
+}
141
+ vext_set_elems_1s(vd_vptr, vta, env->vl * esz, total_elems * esz);
68
+
142
+ env->vstart = 0;
69
+bool riscv_cpu_debug_check_breakpoint(CPUState *cs)
143
+}
70
+{
144
+
71
+ RISCVCPU *cpu = RISCV_CPU(cs);
145
+static inline uint32_t ff1(uint32_t x, uint32_t y, uint32_t z)
72
+ CPURISCVState *env = &cpu->env;
146
+{
73
+ CPUBreakpoint *bp;
147
+ return x ^ y ^ z;
74
+ target_ulong ctrl;
148
+}
75
+ target_ulong pc;
149
+
76
+ int i;
150
+static inline uint32_t ff2(uint32_t x, uint32_t y, uint32_t z)
77
+
151
+{
78
+ QTAILQ_FOREACH(bp, &cs->breakpoints, entry) {
152
+ return (x & y) | (x & z) | (y & z);
79
+ for (i = 0; i < TRIGGER_TYPE2_NUM; i++) {
153
+}
80
+ ctrl = env->type2_trig[i].mcontrol;
154
+
81
+ pc = env->type2_trig[i].maddress;
155
+static inline uint32_t ff_j(uint32_t x, uint32_t y, uint32_t z, uint32_t j)
82
+
156
+{
83
+ if ((ctrl & TYPE2_EXEC) && (bp->pc == pc)) {
157
+ return (j <= 15) ? ff1(x, y, z) : ff2(x, y, z);
84
+ /* check U/S/M bit against current privilege level */
158
+}
85
+ if ((ctrl >> 3) & BIT(env->priv)) {
159
+
86
+ return true;
160
+static inline uint32_t gg1(uint32_t x, uint32_t y, uint32_t z)
87
+ }
161
+{
88
+ }
162
+ return x ^ y ^ z;
163
+}
164
+
165
+static inline uint32_t gg2(uint32_t x, uint32_t y, uint32_t z)
166
+{
167
+ return (x & y) | (~x & z);
168
+}
169
+
170
+static inline uint32_t gg_j(uint32_t x, uint32_t y, uint32_t z, uint32_t j)
171
+{
172
+ return (j <= 15) ? gg1(x, y, z) : gg2(x, y, z);
173
+}
174
+
175
+static inline uint32_t t_j(uint32_t j)
176
+{
177
+ return (j <= 15) ? 0x79cc4519 : 0x7a879d8a;
178
+}
179
+
180
+static inline uint32_t p_0(uint32_t x)
181
+{
182
+ return x ^ rol32(x, 9) ^ rol32(x, 17);
183
+}
184
+
185
+static void sm3c(uint32_t *vd, uint32_t *vs1, uint32_t *vs2, uint32_t uimm)
186
+{
187
+ uint32_t x0, x1;
188
+ uint32_t j;
189
+ uint32_t ss1, ss2, tt1, tt2;
190
+ x0 = vs2[0] ^ vs2[4];
191
+ x1 = vs2[1] ^ vs2[5];
192
+ j = 2 * uimm;
193
+ ss1 = rol32(rol32(vs1[0], 12) + vs1[4] + rol32(t_j(j), j % 32), 7);
194
+ ss2 = ss1 ^ rol32(vs1[0], 12);
195
+ tt1 = ff_j(vs1[0], vs1[1], vs1[2], j) + vs1[3] + ss2 + x0;
196
+ tt2 = gg_j(vs1[4], vs1[5], vs1[6], j) + vs1[7] + ss1 + vs2[0];
197
+ vs1[3] = vs1[2];
198
+ vd[3] = rol32(vs1[1], 9);
199
+ vs1[1] = vs1[0];
200
+ vd[1] = tt1;
201
+ vs1[7] = vs1[6];
202
+ vd[7] = rol32(vs1[5], 19);
203
+ vs1[5] = vs1[4];
204
+ vd[5] = p_0(tt2);
205
+ j = 2 * uimm + 1;
206
+ ss1 = rol32(rol32(vd[1], 12) + vd[5] + rol32(t_j(j), j % 32), 7);
207
+ ss2 = ss1 ^ rol32(vd[1], 12);
208
+ tt1 = ff_j(vd[1], vs1[1], vd[3], j) + vs1[3] + ss2 + x1;
209
+ tt2 = gg_j(vd[5], vs1[5], vd[7], j) + vs1[7] + ss1 + vs2[1];
210
+ vd[2] = rol32(vs1[1], 9);
211
+ vd[0] = tt1;
212
+ vd[6] = rol32(vs1[5], 19);
213
+ vd[4] = p_0(tt2);
214
+}
215
+
216
+void HELPER(vsm3c_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
217
+ CPURISCVState *env, uint32_t desc)
218
+{
219
+ uint32_t esz = memop_size(FIELD_EX64(env->vtype, VTYPE, VSEW));
220
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
221
+ uint32_t vta = vext_vta(desc);
222
+ uint32_t *vd = vd_vptr;
223
+ uint32_t *vs2 = vs2_vptr;
224
+ uint32_t v1[8], v2[8], v3[8];
225
+
226
+ for (int i = env->vstart / 8; i < env->vl / 8; i++) {
227
+ for (int k = 0; k < 8; k++) {
228
+ v2[k] = bswap32(vd[H4(i * 8 + k)]);
229
+ v3[k] = bswap32(vs2[H4(i * 8 + k)]);
230
+ }
231
+ sm3c(v1, v2, v3, uimm);
232
+ for (int k = 0; k < 8; k++) {
233
+ vd[i * 8 + k] = bswap32(v1[H4(k)]);
89
+ }
234
+ }
90
+ }
235
+ }
91
+
236
+ vext_set_elems_1s(vd_vptr, vta, env->vl * esz, total_elems * esz);
92
+ return false;
237
+ env->vstart = 0;
93
+}
238
+}
94
+
239
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
95
+bool riscv_cpu_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp)
240
index XXXXXXX..XXXXXXX 100644
96
+{
241
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
97
+ RISCVCPU *cpu = RISCV_CPU(cs);
242
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
98
+ CPURISCVState *env = &cpu->env;
243
@@ -XXX,XX +XXX,XX @@ static bool trans_vsha2ch_vv(DisasContext *s, arg_rmrr *a)
99
+ target_ulong ctrl;
244
}
100
+ target_ulong addr;
245
return false;
101
+ int flags;
246
}
102
+ int i;
247
+
103
+
248
+/*
104
+ for (i = 0; i < TRIGGER_TYPE2_NUM; i++) {
249
+ * Zvksh
105
+ ctrl = env->type2_trig[i].mcontrol;
250
+ */
106
+ addr = env->type2_trig[i].maddress;
251
+
107
+ flags = 0;
252
+#define ZVKSH_EGS 8
108
+
253
+
109
+ if (ctrl & TYPE2_LOAD) {
254
+static inline bool vsm3_check(DisasContext *s, arg_rmrr *a)
110
+ flags |= BP_MEM_READ;
255
+{
111
+ }
256
+ int egw_bytes = ZVKSH_EGS << s->sew;
112
+ if (ctrl & TYPE2_STORE) {
257
+ int mult = 1 << MAX(s->lmul, 0);
113
+ flags |= BP_MEM_WRITE;
258
+ return s->cfg_ptr->ext_zvksh == true &&
114
+ }
259
+ require_rvv(s) &&
115
+
260
+ vext_check_isa_ill(s) &&
116
+ if ((wp->flags & flags) && (wp->vaddr == addr)) {
261
+ !is_overlapped(a->rd, mult, a->rs2, mult) &&
117
+ /* check U/S/M bit against current privilege level */
262
+ MAXSZ(s) >= egw_bytes &&
118
+ if ((ctrl >> 3) & BIT(env->priv)) {
263
+ s->sew == MO_32;
119
+ return true;
264
+}
120
+ }
265
+
121
+ }
266
+static inline bool vsm3me_check(DisasContext *s, arg_rmrr *a)
122
+ }
267
+{
123
+
268
+ return vsm3_check(s, a) && vext_check_sss(s, a->rd, a->rs1, a->rs2, a->vm);
124
+ return false;
269
+}
125
+}
270
+
271
+static inline bool vsm3c_check(DisasContext *s, arg_rmrr *a)
272
+{
273
+ return vsm3_check(s, a) && vext_check_ss(s, a->rd, a->rs2, a->vm);
274
+}
275
+
276
+GEN_VV_UNMASKED_TRANS(vsm3me_vv, vsm3me_check, ZVKSH_EGS)
277
+GEN_VI_UNMASKED_TRANS(vsm3c_vi, vsm3c_check, ZVKSH_EGS)
126
--
278
--
127
2.35.1
279
2.41.0
New patch
1
1
From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
2
3
This commit adds support for the Zvkg vector-crypto extension, which
4
consists of the following instructions:
5
6
* vgmul.vv
7
* vghsh.vv
8
9
Translation functions are defined in
10
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
11
`target/riscv/vcrypto_helper.c`.
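For background, both instructions reduce to a carry-less multiplication in GF(2^128). The helpers brev8() their operands so the bit order matches the GCM convention, which turns the reduction polynomial into the reflected constant 0x87. A scalar sketch of that multiply, assuming operands already in the reflected layout; the function name is illustrative and not part of this patch.

#include <stdint.h>
#include <stdbool.h>

/* Bit-serial multiply in GF(2^128), reflected (GCM) representation. */
static void gf128_mul_reflected(uint64_t z[2], const uint64_t x[2],
                                const uint64_t h[2])
{
    uint64_t H[2] = { h[0], h[1] };

    z[0] = z[1] = 0;
    for (int i = 0; i < 128; i++) {
        if ((x[i / 64] >> (i % 64)) & 1) {
            z[0] ^= H[0];
            z[1] ^= H[1];
        }
        bool carry = (H[1] >> 63) & 1;    /* bit about to leave the field */
        H[1] = (H[1] << 1) | (H[0] >> 63);
        H[0] <<= 1;
        if (carry) {
            H[0] ^= 0x87;                 /* x^128 = x^7 + x^2 + x + 1 */
        }
    }
}

Per element group, vghsh.vv computes Y' = (Y ^ X) * H and vgmul.vv computes Y' = Y * H, with the brev8 conversions applied on the way in and out.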
12
13
Co-authored-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
14
[max.chou@sifive.com: Replaced vstart checking by TCG op]
15
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
16
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
17
Signed-off-by: Max Chou <max.chou@sifive.com>
18
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
19
[max.chou@sifive.com: Exposed x-zvkg property]
20
[max.chou@sifive.com: Replaced uint with int to fix the win32 cross build]
21
Message-ID: <20230711165917.2629866-13-max.chou@sifive.com>
22
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
23
---
24
target/riscv/cpu_cfg.h | 1 +
25
target/riscv/helper.h | 3 +
26
target/riscv/insn32.decode | 4 ++
27
target/riscv/cpu.c | 6 +-
28
target/riscv/vcrypto_helper.c | 72 ++++++++++++++++++++++++
29
target/riscv/insn_trans/trans_rvvk.c.inc | 30 ++++++++++
30
6 files changed, 114 insertions(+), 2 deletions(-)
31
32
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/riscv/cpu_cfg.h
35
+++ b/target/riscv/cpu_cfg.h
36
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
37
bool ext_zve64d;
38
bool ext_zvbb;
39
bool ext_zvbc;
40
+ bool ext_zvkg;
41
bool ext_zvkned;
42
bool ext_zvknha;
43
bool ext_zvknhb;
44
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/riscv/helper.h
47
+++ b/target/riscv/helper.h
48
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_5(vsha2cl64_vv, void, ptr, ptr, ptr, env, i32)
49
50
DEF_HELPER_5(vsm3me_vv, void, ptr, ptr, ptr, env, i32)
51
DEF_HELPER_5(vsm3c_vi, void, ptr, ptr, i32, env, i32)
52
+
53
+DEF_HELPER_5(vghsh_vv, void, ptr, ptr, ptr, env, i32)
54
+DEF_HELPER_4(vgmul_vv, void, ptr, ptr, env, i32)
55
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
56
index XXXXXXX..XXXXXXX 100644
57
--- a/target/riscv/insn32.decode
58
+++ b/target/riscv/insn32.decode
59
@@ -XXX,XX +XXX,XX @@ vsha2cl_vv 101111 1 ..... ..... 010 ..... 1110111 @r_vm_1
60
# *** Zvksh vector crypto extension ***
61
vsm3me_vv 100000 1 ..... ..... 010 ..... 1110111 @r_vm_1
62
vsm3c_vi 101011 1 ..... ..... 010 ..... 1110111 @r_vm_1
63
+
64
+# *** Zvkg vector crypto extension ***
65
+vghsh_vv 101100 1 ..... ..... 010 ..... 1110111 @r_vm_1
66
+vgmul_vv 101000 1 ..... 10001 010 ..... 1110111 @r2_vm_1
67
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
68
index XXXXXXX..XXXXXXX 100644
69
--- a/target/riscv/cpu.c
70
+++ b/target/riscv/cpu.c
71
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
72
ISA_EXT_DATA_ENTRY(zvfbfwma, PRIV_VERSION_1_12_0, ext_zvfbfwma),
73
ISA_EXT_DATA_ENTRY(zvfh, PRIV_VERSION_1_12_0, ext_zvfh),
74
ISA_EXT_DATA_ENTRY(zvfhmin, PRIV_VERSION_1_12_0, ext_zvfhmin),
75
+ ISA_EXT_DATA_ENTRY(zvkg, PRIV_VERSION_1_12_0, ext_zvkg),
76
ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
77
ISA_EXT_DATA_ENTRY(zvknha, PRIV_VERSION_1_12_0, ext_zvknha),
78
ISA_EXT_DATA_ENTRY(zvknhb, PRIV_VERSION_1_12_0, ext_zvknhb),
79
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
80
* In principle Zve*x would also suffice here, were they supported
81
* in qemu
82
*/
83
- if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha ||
84
- cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
85
+ if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkg || cpu->cfg.ext_zvkned ||
86
+ cpu->cfg.ext_zvknha || cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
87
error_setg(errp,
88
"Vector crypto extensions require V or Zve* extensions");
89
return;
90
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
91
/* Vector cryptography extensions */
92
DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
93
DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
94
+ DEFINE_PROP_BOOL("x-zvkg", RISCVCPU, cfg.ext_zvkg, false),
95
DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
96
DEFINE_PROP_BOOL("x-zvknha", RISCVCPU, cfg.ext_zvknha, false),
97
DEFINE_PROP_BOOL("x-zvknhb", RISCVCPU, cfg.ext_zvknhb, false),
98
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
99
index XXXXXXX..XXXXXXX 100644
100
--- a/target/riscv/vcrypto_helper.c
101
+++ b/target/riscv/vcrypto_helper.c
102
@@ -XXX,XX +XXX,XX @@ void HELPER(vsm3c_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
103
vext_set_elems_1s(vd_vptr, vta, env->vl * esz, total_elems * esz);
104
env->vstart = 0;
105
}
106
+
107
+void HELPER(vghsh_vv)(void *vd_vptr, void *vs1_vptr, void *vs2_vptr,
108
+ CPURISCVState *env, uint32_t desc)
109
+{
110
+ uint64_t *vd = vd_vptr;
111
+ uint64_t *vs1 = vs1_vptr;
112
+ uint64_t *vs2 = vs2_vptr;
113
+ uint32_t vta = vext_vta(desc);
114
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4);
115
+
116
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
117
+ uint64_t Y[2] = {vd[i * 2 + 0], vd[i * 2 + 1]};
118
+ uint64_t H[2] = {brev8(vs2[i * 2 + 0]), brev8(vs2[i * 2 + 1])};
119
+ uint64_t X[2] = {vs1[i * 2 + 0], vs1[i * 2 + 1]};
120
+ uint64_t Z[2] = {0, 0};
121
+
122
+ uint64_t S[2] = {brev8(Y[0] ^ X[0]), brev8(Y[1] ^ X[1])};
123
+
124
+ for (int j = 0; j < 128; j++) {
125
+ if ((S[j / 64] >> (j % 64)) & 1) {
126
+ Z[0] ^= H[0];
127
+ Z[1] ^= H[1];
128
+ }
129
+ bool reduce = ((H[1] >> 63) & 1);
130
+ H[1] = H[1] << 1 | H[0] >> 63;
131
+ H[0] = H[0] << 1;
132
+ if (reduce) {
133
+ H[0] ^= 0x87;
134
+ }
135
+ }
136
+
137
+ vd[i * 2 + 0] = brev8(Z[0]);
138
+ vd[i * 2 + 1] = brev8(Z[1]);
139
+ }
140
+ /* set tail elements to 1s */
141
+ vext_set_elems_1s(vd, vta, env->vl * 4, total_elems * 4);
142
+ env->vstart = 0;
143
+}
144
+
145
+void HELPER(vgmul_vv)(void *vd_vptr, void *vs2_vptr, CPURISCVState *env,
146
+ uint32_t desc)
147
+{
148
+ uint64_t *vd = vd_vptr;
149
+ uint64_t *vs2 = vs2_vptr;
150
+ uint32_t vta = vext_vta(desc);
151
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4);
152
+
153
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
154
+ uint64_t Y[2] = {brev8(vd[i * 2 + 0]), brev8(vd[i * 2 + 1])};
155
+ uint64_t H[2] = {brev8(vs2[i * 2 + 0]), brev8(vs2[i * 2 + 1])};
156
+ uint64_t Z[2] = {0, 0};
157
+
158
+ for (int j = 0; j < 128; j++) {
159
+ if ((Y[j / 64] >> (j % 64)) & 1) {
160
+ Z[0] ^= H[0];
161
+ Z[1] ^= H[1];
162
+ }
163
+ bool reduce = ((H[1] >> 63) & 1);
164
+ H[1] = H[1] << 1 | H[0] >> 63;
165
+ H[0] = H[0] << 1;
166
+ if (reduce) {
167
+ H[0] ^= 0x87;
168
+ }
169
+ }
170
+
171
+ vd[i * 2 + 0] = brev8(Z[0]);
172
+ vd[i * 2 + 1] = brev8(Z[1]);
173
+ }
174
+ /* set tail elements to 1s */
175
+ vext_set_elems_1s(vd, vta, env->vl * 4, total_elems * 4);
176
+ env->vstart = 0;
177
+}
178
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
179
index XXXXXXX..XXXXXXX 100644
180
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
181
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
182
@@ -XXX,XX +XXX,XX @@ static inline bool vsm3c_check(DisasContext *s, arg_rmrr *a)
183
184
GEN_VV_UNMASKED_TRANS(vsm3me_vv, vsm3me_check, ZVKSH_EGS)
185
GEN_VI_UNMASKED_TRANS(vsm3c_vi, vsm3c_check, ZVKSH_EGS)
186
+
187
+/*
188
+ * Zvkg
189
+ */
190
+
191
+#define ZVKG_EGS 4
192
+
193
+static bool vgmul_check(DisasContext *s, arg_rmr *a)
194
+{
195
+ int egw_bytes = ZVKG_EGS << s->sew;
196
+ return s->cfg_ptr->ext_zvkg == true &&
197
+ vext_check_isa_ill(s) &&
198
+ require_rvv(s) &&
199
+ MAXSZ(s) >= egw_bytes &&
200
+ vext_check_ss(s, a->rd, a->rs2, a->vm) &&
201
+ s->sew == MO_32;
202
+}
203
+
204
+GEN_V_UNMASKED_TRANS(vgmul_vv, vgmul_check, ZVKG_EGS)
205
+
206
+static bool vghsh_check(DisasContext *s, arg_rmrr *a)
207
+{
208
+ int egw_bytes = ZVKG_EGS << s->sew;
209
+ return s->cfg_ptr->ext_zvkg == true &&
210
+ opivv_check(s, a) &&
211
+ MAXSZ(s) >= egw_bytes &&
212
+ s->sew == MO_32;
213
+}
214
+
215
+GEN_VV_UNMASKED_TRANS(vghsh_vv, vghsh_check, ZVKG_EGS)
216
--
217
2.41.0
New patch
1
From: Max Chou <max.chou@sifive.com>
1
2
3
Allows sharing of sm4_subword between different targets.
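For context, sm4_subword applies the SM4 S-box to each byte of a 32-bit word, the non-linear step shared by the SM4 data rounds and key schedule. A sketch of a typical caller, the SM4 encryption round, follows; the wrapper is illustrative and assumes it is built inside QEMU where include/crypto/sm4.h from this patch is available.

#include <stdint.h>
#include "crypto/sm4.h"   /* sm4_sbox, sm4_subword */

static inline uint32_t rol32_c(uint32_t x, unsigned n)
{
    return (x << n) | (x >> (32 - n));
}

/* One SM4 round: X[i+4] = X[i] ^ L(tau(X[i+1] ^ X[i+2] ^ X[i+3] ^ rk)). */
static uint32_t sm4_round(const uint32_t x[4], uint32_t rk)
{
    uint32_t t = sm4_subword(x[1] ^ x[2] ^ x[3] ^ rk);  /* tau: per-byte S-box */

    /* Linear transform L for the data path. */
    return x[0] ^ t ^ rol32_c(t, 2) ^ rol32_c(t, 10) ^
           rol32_c(t, 18) ^ rol32_c(t, 24);
}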
4
5
Signed-off-by: Max Chou <max.chou@sifive.com>
6
Reviewed-by: Frank Chang <frank.chang@sifive.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Max Chou <max.chou@sifive.com>
9
Message-ID: <20230711165917.2629866-14-max.chou@sifive.com>
10
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
11
---
12
include/crypto/sm4.h | 8 ++++++++
13
target/arm/tcg/crypto_helper.c | 10 ++--------
14
2 files changed, 10 insertions(+), 8 deletions(-)
15
16
diff --git a/include/crypto/sm4.h b/include/crypto/sm4.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/include/crypto/sm4.h
19
+++ b/include/crypto/sm4.h
20
@@ -XXX,XX +XXX,XX @@
21
22
extern const uint8_t sm4_sbox[256];
23
24
+static inline uint32_t sm4_subword(uint32_t word)
25
+{
26
+ return sm4_sbox[word & 0xff] |
27
+ sm4_sbox[(word >> 8) & 0xff] << 8 |
28
+ sm4_sbox[(word >> 16) & 0xff] << 16 |
29
+ sm4_sbox[(word >> 24) & 0xff] << 24;
30
+}
31
+
32
#endif
33
diff --git a/target/arm/tcg/crypto_helper.c b/target/arm/tcg/crypto_helper.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/tcg/crypto_helper.c
36
+++ b/target/arm/tcg/crypto_helper.c
37
@@ -XXX,XX +XXX,XX @@ static void do_crypto_sm4e(uint64_t *rd, uint64_t *rn, uint64_t *rm)
38
CR_ST_WORD(d, (i + 3) % 4) ^
39
CR_ST_WORD(n, i);
40
41
- t = sm4_sbox[t & 0xff] |
42
- sm4_sbox[(t >> 8) & 0xff] << 8 |
43
- sm4_sbox[(t >> 16) & 0xff] << 16 |
44
- sm4_sbox[(t >> 24) & 0xff] << 24;
45
+ t = sm4_subword(t);
46
47
CR_ST_WORD(d, i) ^= t ^ rol32(t, 2) ^ rol32(t, 10) ^ rol32(t, 18) ^
48
rol32(t, 24);
49
@@ -XXX,XX +XXX,XX @@ static void do_crypto_sm4ekey(uint64_t *rd, uint64_t *rn, uint64_t *rm)
50
CR_ST_WORD(d, (i + 3) % 4) ^
51
CR_ST_WORD(m, i);
52
53
- t = sm4_sbox[t & 0xff] |
54
- sm4_sbox[(t >> 8) & 0xff] << 8 |
55
- sm4_sbox[(t >> 16) & 0xff] << 16 |
56
- sm4_sbox[(t >> 24) & 0xff] << 24;
57
+ t = sm4_subword(t);
58
59
CR_ST_WORD(d, i) ^= t ^ rol32(t, 13) ^ rol32(t, 23);
60
}
61
--
62
2.41.0
1
From: Atish Patra <atishp@rivosinc.com>
1
From: Max Chou <max.chou@sifive.com>
2
2
3
Add the definition for ratified privileged specification version v1.12
3
Adds the sm4_ck constant table for use in SM4 cryptography across different targets.
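For context, the CK constants feed the SM4 key schedule. A scalar sketch of how a caller derives the 32 round keys follows; the function name and the FK system-parameter table come from the SM4 specification and are illustrative, not part of this patch.

#include <stdint.h>
#include "crypto/sm4.h"   /* sm4_subword, sm4_ck */

static inline uint32_t rol32_c(uint32_t x, unsigned n)
{
    return (x << n) | (x >> (32 - n));
}

/* SM4 system parameter FK. */
static const uint32_t sm4_fk[4] = {
    0xa3b1bac6, 0x56aa3350, 0x677d9197, 0xb27022dc
};

/* Expand a 128-bit key (four big-endian words) into 32 round keys. */
static void sm4_key_schedule(uint32_t rk[32], const uint32_t mk[4])
{
    uint32_t k[36];

    for (int i = 0; i < 4; i++) {
        k[i] = mk[i] ^ sm4_fk[i];
    }
    for (int i = 0; i < 32; i++) {
        uint32_t t = sm4_subword(k[i + 1] ^ k[i + 2] ^ k[i + 3] ^ sm4_ck[i]);

        /* L' linear transform for the key schedule. */
        k[i + 4] = k[i] ^ t ^ rol32_c(t, 13) ^ rol32_c(t, 23);
        rk[i] = k[i + 4];
    }
}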
4
4
5
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
5
Signed-off-by: Max Chou <max.chou@sifive.com>
6
Signed-off-by: Atish Patra <atishp@rivosinc.com>
6
Reviewed-by: Frank Chang <frank.chang@sifive.com>
7
Message-Id: <20220303185440.512391-3-atishp@rivosinc.com>
7
Signed-off-by: Max Chou <max.chou@sifive.com>
8
Message-ID: <20230711165917.2629866-15-max.chou@sifive.com>
8
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
9
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
9
---
10
---
10
target/riscv/cpu.h | 1 +
11
include/crypto/sm4.h | 1 +
11
1 file changed, 1 insertion(+)
12
crypto/sm4.c | 10 ++++++++++
13
2 files changed, 11 insertions(+)
12
14
13
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
15
diff --git a/include/crypto/sm4.h b/include/crypto/sm4.h
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/target/riscv/cpu.h
17
--- a/include/crypto/sm4.h
16
+++ b/target/riscv/cpu.h
18
+++ b/include/crypto/sm4.h
17
@@ -XXX,XX +XXX,XX @@ enum {
19
@@ -XXX,XX +XXX,XX @@
18
enum {
20
#define QEMU_SM4_H
19
PRIV_VERSION_1_10_0 = 0,
21
20
PRIV_VERSION_1_11_0,
22
extern const uint8_t sm4_sbox[256];
21
+ PRIV_VERSION_1_12_0,
23
+extern const uint32_t sm4_ck[32];
24
25
static inline uint32_t sm4_subword(uint32_t word)
26
{
27
diff --git a/crypto/sm4.c b/crypto/sm4.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/crypto/sm4.c
30
+++ b/crypto/sm4.c
31
@@ -XXX,XX +XXX,XX @@ uint8_t const sm4_sbox[] = {
32
0x79, 0xee, 0x5f, 0x3e, 0xd7, 0xcb, 0x39, 0x48,
22
};
33
};
23
34
24
#define VEXT_VERSION_1_00_0 0x00010000
35
+uint32_t const sm4_ck[] = {
36
+ 0x00070e15, 0x1c232a31, 0x383f464d, 0x545b6269,
37
+ 0x70777e85, 0x8c939aa1, 0xa8afb6bd, 0xc4cbd2d9,
38
+ 0xe0e7eef5, 0xfc030a11, 0x181f262d, 0x343b4249,
39
+ 0x50575e65, 0x6c737a81, 0x888f969d, 0xa4abb2b9,
40
+ 0xc0c7ced5, 0xdce3eaf1, 0xf8ff060d, 0x141b2229,
41
+ 0x30373e45, 0x4c535a61, 0x686f767d, 0x848b9299,
42
+ 0xa0a7aeb5, 0xbcc3cad1, 0xd8dfe6ed, 0xf4fb0209,
43
+ 0x10171e25, 0x2c333a41, 0x484f565d, 0x646b7279
44
+};
25
--
45
--
26
2.35.1
46
2.41.0
1
From: Weiwei Li <liweiwei@iscas.ac.cn>
1
From: Max Chou <max.chou@sifive.com>

This commit adds support for the Zvksed vector-crypto extension, which
consists of the following instructions:

* vsm4k.vi
* vsm4r.[vv,vs]

Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.

Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
[lawrence.hunter@codethink.co.uk: Moved SM4 functions from
crypto_helper.c to vcrypto_helper.c]
[nazar.kazakov@codethink.co.uk: Added alignment checks, refactored code to
use macros, and minor style changes]
Signed-off-by: Max Chou <max.chou@sifive.com>
Message-ID: <20230711165917.2629866-16-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_cfg.h                   |   1 +
 target/riscv/helper.h                    |   4 +
 target/riscv/insn32.decode               |   5 +
 target/riscv/cpu.c                       |   5 +-
 target/riscv/vcrypto_helper.c            | 127 +++++++++++++++++++++++
 target/riscv/insn_trans/trans_rvvk.c.inc |  43 ++++++++
 6 files changed, 184 insertions(+), 1 deletion(-)

diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
     bool ext_zvkned;
     bool ext_zvknha;
     bool ext_zvknhb;
+    bool ext_zvksed;
     bool ext_zvksh;
     bool ext_zmmul;
     bool ext_zvfbfmin;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_5(vsm3c_vi, void, ptr, ptr, i32, env, i32)

 DEF_HELPER_5(vghsh_vv, void, ptr, ptr, ptr, env, i32)
 DEF_HELPER_4(vgmul_vv, void, ptr, ptr, env, i32)
+
+DEF_HELPER_5(vsm4k_vi, void, ptr, ptr, i32, env, i32)
+DEF_HELPER_4(vsm4r_vv, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vsm4r_vs, void, ptr, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -XXX,XX +XXX,XX @@ vsm3c_vi    101011 1 ..... ..... 010 ..... 1110111 @r_vm_1
 # *** Zvkg vector crypto extension ***
 vghsh_vv    101100 1 ..... ..... 010 ..... 1110111 @r_vm_1
 vgmul_vv    101000 1 ..... 10001 010 ..... 1110111 @r2_vm_1
+
+# *** Zvksed vector crypto extension ***
+vsm4k_vi    100001 1 ..... ..... 010 ..... 1110111 @r_vm_1
+vsm4r_vv    101000 1 ..... 10000 010 ..... 1110111 @r2_vm_1
+vsm4r_vs    101001 1 ..... 10000 010 ..... 1110111 @r2_vm_1
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
     ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
     ISA_EXT_DATA_ENTRY(zvknha, PRIV_VERSION_1_12_0, ext_zvknha),
     ISA_EXT_DATA_ENTRY(zvknhb, PRIV_VERSION_1_12_0, ext_zvknhb),
+    ISA_EXT_DATA_ENTRY(zvksed, PRIV_VERSION_1_12_0, ext_zvksed),
     ISA_EXT_DATA_ENTRY(zvksh, PRIV_VERSION_1_12_0, ext_zvksh),
     ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
     ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
      * in qemu
      */
     if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkg || cpu->cfg.ext_zvkned ||
-         cpu->cfg.ext_zvknha || cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
+         cpu->cfg.ext_zvknha || cpu->cfg.ext_zvksed || cpu->cfg.ext_zvksh) &&
+        !cpu->cfg.ext_zve32f) {
         error_setg(errp,
                    "Vector crypto extensions require V or Zve* extensions");
         return;
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
     DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
     DEFINE_PROP_BOOL("x-zvknha", RISCVCPU, cfg.ext_zvknha, false),
     DEFINE_PROP_BOOL("x-zvknhb", RISCVCPU, cfg.ext_zvknhb, false),
+    DEFINE_PROP_BOOL("x-zvksed", RISCVCPU, cfg.ext_zvksed, false),
     DEFINE_PROP_BOOL("x-zvksh", RISCVCPU, cfg.ext_zvksh, false),

     DEFINE_PROP_END_OF_LIST(),
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/vcrypto_helper.c
+++ b/target/riscv/vcrypto_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "cpu.h"
 #include "crypto/aes.h"
 #include "crypto/aes-round.h"
+#include "crypto/sm4.h"
 #include "exec/memop.h"
 #include "exec/exec-all.h"
 #include "exec/helper-proto.h"
@@ -XXX,XX +XXX,XX @@ void HELPER(vgmul_vv)(void *vd_vptr, void *vs2_vptr, CPURISCVState *env,
     vext_set_elems_1s(vd, vta, env->vl * 4, total_elems * 4);
     env->vstart = 0;
 }
+
+void HELPER(vsm4k_vi)(void *vd, void *vs2, uint32_t uimm5, CPURISCVState *env,
+                      uint32_t desc)
+{
+    const uint32_t egs = 4;
+    uint32_t rnd = uimm5 & 0x7;
+    uint32_t group_start = env->vstart / egs;
+    uint32_t group_end = env->vl / egs;
+    uint32_t esz = sizeof(uint32_t);
+    uint32_t total_elems = vext_get_total_elems(env, desc, esz);
+
+    for (uint32_t i = group_start; i < group_end; ++i) {
+        uint32_t vstart = i * egs;
+        uint32_t vend = (i + 1) * egs;
+        uint32_t rk[4] = {0};
+        uint32_t tmp[8] = {0};
+
+        for (uint32_t j = vstart; j < vend; ++j) {
+            rk[j - vstart] = *((uint32_t *)vs2 + H4(j));
+        }
+
+        for (uint32_t j = 0; j < egs; ++j) {
+            tmp[j] = rk[j];
+        }
+
+        for (uint32_t j = 0; j < egs; ++j) {
+            uint32_t b, s;
+            b = tmp[j + 1] ^ tmp[j + 2] ^ tmp[j + 3] ^ sm4_ck[rnd * 4 + j];
+
+            s = sm4_subword(b);
+
+            tmp[j + 4] = tmp[j] ^ (s ^ rol32(s, 13) ^ rol32(s, 23));
+        }
+
+        for (uint32_t j = vstart; j < vend; ++j) {
+            *((uint32_t *)vd + H4(j)) = tmp[egs + (j - vstart)];
+        }
+    }
+
+    env->vstart = 0;
+    /* set tail elements to 1s */
+    vext_set_elems_1s(vd, vext_vta(desc), env->vl * esz, total_elems * esz);
+}
+
+static void do_sm4_round(uint32_t *rk, uint32_t *buf)
+{
+    const uint32_t egs = 4;
+    uint32_t s, b;
+
+    for (uint32_t j = egs; j < egs * 2; ++j) {
+        b = buf[j - 3] ^ buf[j - 2] ^ buf[j - 1] ^ rk[j - 4];
+
+        s = sm4_subword(b);
+
+        buf[j] = buf[j - 4] ^ (s ^ rol32(s, 2) ^ rol32(s, 10) ^ rol32(s, 18) ^
+                               rol32(s, 24));
+    }
+}
+
+void HELPER(vsm4r_vv)(void *vd, void *vs2, CPURISCVState *env, uint32_t desc)
+{
+    const uint32_t egs = 4;
+    uint32_t group_start = env->vstart / egs;
+    uint32_t group_end = env->vl / egs;
+    uint32_t esz = sizeof(uint32_t);
+    uint32_t total_elems = vext_get_total_elems(env, desc, esz);
+
+    for (uint32_t i = group_start; i < group_end; ++i) {
+        uint32_t vstart = i * egs;
+        uint32_t vend = (i + 1) * egs;
+        uint32_t rk[4] = {0};
+        uint32_t tmp[8] = {0};
+
+        for (uint32_t j = vstart; j < vend; ++j) {
+            rk[j - vstart] = *((uint32_t *)vs2 + H4(j));
+        }
+
+        for (uint32_t j = vstart; j < vend; ++j) {
+            tmp[j - vstart] = *((uint32_t *)vd + H4(j));
+        }
+
+        do_sm4_round(rk, tmp);
+
+        for (uint32_t j = vstart; j < vend; ++j) {
+            *((uint32_t *)vd + H4(j)) = tmp[egs + (j - vstart)];
+        }
+    }
+
+    env->vstart = 0;
+    /* set tail elements to 1s */
+    vext_set_elems_1s(vd, vext_vta(desc), env->vl * esz, total_elems * esz);
+}
+
+void HELPER(vsm4r_vs)(void *vd, void *vs2, CPURISCVState *env, uint32_t desc)
+{
+    const uint32_t egs = 4;
+    uint32_t group_start = env->vstart / egs;
+    uint32_t group_end = env->vl / egs;
+    uint32_t esz = sizeof(uint32_t);
+    uint32_t total_elems = vext_get_total_elems(env, desc, esz);
+
+    for (uint32_t i = group_start; i < group_end; ++i) {
+        uint32_t vstart = i * egs;
+        uint32_t vend = (i + 1) * egs;
+        uint32_t rk[4] = {0};
+        uint32_t tmp[8] = {0};
+
+        for (uint32_t j = 0; j < egs; ++j) {
+            rk[j] = *((uint32_t *)vs2 + H4(j));
+        }
+
+        for (uint32_t j = vstart; j < vend; ++j) {
+            tmp[j - vstart] = *((uint32_t *)vd + H4(j));
+        }
+
+        do_sm4_round(rk, tmp);
+
+        for (uint32_t j = vstart; j < vend; ++j) {
+            *((uint32_t *)vd + H4(j)) = tmp[egs + (j - vstart)];
+        }
+    }
+
+    env->vstart = 0;
+    /* set tail elements to 1s */
+    vext_set_elems_1s(vd, vext_vta(desc), env->vl * esz, total_elems * esz);
+}
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -XXX,XX +XXX,XX @@ static bool vghsh_check(DisasContext *s, arg_rmrr *a)
 }

 GEN_VV_UNMASKED_TRANS(vghsh_vv, vghsh_check, ZVKG_EGS)
+
+/*
+ * Zvksed
+ */
+
+#define ZVKSED_EGS 4
+
+static bool zvksed_check(DisasContext *s)
+{
+    int egw_bytes = ZVKSED_EGS << s->sew;
+    return s->cfg_ptr->ext_zvksed == true &&
+           require_rvv(s) &&
+           vext_check_isa_ill(s) &&
+           MAXSZ(s) >= egw_bytes &&
+           s->sew == MO_32;
+}
+
+static bool vsm4k_vi_check(DisasContext *s, arg_rmrr *a)
+{
+    return zvksed_check(s) &&
+           require_align(a->rd, s->lmul) &&
+           require_align(a->rs2, s->lmul);
+}
+
+GEN_VI_UNMASKED_TRANS(vsm4k_vi, vsm4k_vi_check, ZVKSED_EGS)
+
+static bool vsm4r_vv_check(DisasContext *s, arg_rmr *a)
+{
+    return zvksed_check(s) &&
+           require_align(a->rd, s->lmul) &&
+           require_align(a->rs2, s->lmul);
+}
+
+GEN_V_UNMASKED_TRANS(vsm4r_vv, vsm4r_vv_check, ZVKSED_EGS)
+
+static bool vsm4r_vs_check(DisasContext *s, arg_rmr *a)
+{
+    return zvksed_check(s) &&
+           !is_overlapped(a->rd, 1 << MAX(s->lmul, 0), a->rs2, 1) &&
+           require_align(a->rd, s->lmul);
+}
+
+GEN_V_UNMASKED_TRANS(vsm4r_vs, vsm4r_vs_check, ZVKSED_EGS)
--
2.41.0
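A quick way to exercise the new extension once the series is applied is to turn
on the experimental property that the patch adds. The exact flag spelling below
is only an assumption based on the x-zvksed property and the existing v/vlen CPU
properties; the guest image name is a placeholder:

    qemu-system-riscv64 -machine virt -nographic \
        -cpu rv64,v=true,vlen=256,x-zvksed=true \
        -kernel test-image.elf
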
From: Rob Bradford <rbradford@rivosinc.com>

These are WARL fields - zero out the bits for unavailable counters and
special case the TM bit in mcountinhibit which is hardwired to zero.
This patch achieves this by modifying the value written so that any use
of the field will see the correctly masked bits.

Tested by modifying OpenSBI to write max value to these CSRs and upon
subsequent read the appropriate number of bits for number of PMUs is
enabled and the TM bit is zero in mcountinhibit.

Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20230802124906.24197-1-rbradford@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/csr.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static RISCVException write_mcountinhibit(CPURISCVState *env, int csrno,
 {
     int cidx;
     PMUCTRState *counter;
+    RISCVCPU *cpu = env_archcpu(env);

-    env->mcountinhibit = val;
+    /* WARL register - disable unavailable counters; TM bit is always 0 */
+    env->mcountinhibit =
+        val & (cpu->pmu_avail_ctrs | COUNTEREN_CY | COUNTEREN_IR);

     /* Check if any other counter is also monitoring cycles/instructions */
     for (cidx = 0; cidx < RV_MAX_MHPMCOUNTERS; cidx++) {
@@ -XXX,XX +XXX,XX @@ static RISCVException read_mcounteren(CPURISCVState *env, int csrno,
 static RISCVException write_mcounteren(CPURISCVState *env, int csrno,
                                        target_ulong val)
 {
-    env->mcounteren = val;
+    RISCVCPU *cpu = env_archcpu(env);
+
+    /* WARL register - disable unavailable counters */
+    env->mcounteren = val & (cpu->pmu_avail_ctrs | COUNTEREN_CY | COUNTEREN_TM |
+                             COUNTEREN_IR);
     return RISCV_EXCP_NONE;
 }

--
2.41.0
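The masking is easy to check in isolation. A minimal standalone C sketch of the
idea, not QEMU code: the bit positions (CY=0, TM=1, IR=2) follow the privileged
spec, and the pmu_avail_ctrs value is invented for the example:

    #include <stdint.h>
    #include <stdio.h>

    #define COUNTEREN_CY (1u << 0)
    #define COUNTEREN_TM (1u << 1)
    #define COUNTEREN_IR (1u << 2)

    int main(void)
    {
        uint32_t pmu_avail_ctrs = 0xf8;   /* pretend only hpmcounter3..7 exist */
        uint32_t written = 0xffffffff;    /* guest writes all ones             */
        /* only implemented counters plus CY/IR survive; TM is dropped */
        uint32_t mcountinhibit =
            written & (pmu_avail_ctrs | COUNTEREN_CY | COUNTEREN_IR);
        printf("mcountinhibit reads back as 0x%x\n", (unsigned)mcountinhibit);
        return 0;                         /* prints 0xfd: TM (bit 1) is zero   */
    }
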
From: Jason Chien <jason.chien@sifive.com>

RVA23 Profiles states:
The RVA23 profiles are intended to be used for 64-bit application
processors that will run rich OS stacks from standard binary OS
distributions and with a substantial number of third-party binary user
applications that will be supported over a considerable length of time
in the field.

The chapter 4 of the unprivileged spec introduces the Zihintntl extension
and Zihintntl is a mandatory extension presented in RVA23 Profiles, whose
purpose is to enable application and operating system portability across
different implementations. Thus the DTS should contain the Zihintntl ISA
string in order to pass to software.

The unprivileged spec states:
Like any HINTs, these instructions may be freely ignored. Hence, although
they are described in terms of cache-based memory hierarchies, they do not
mandate the provision of caches.

These instructions are encoded with non-used opcode, e.g. ADD x0, x0, x2,
which QEMU already supports, and QEMU does not emulate cache. Therefore
these instructions can be considered as a no-op, and we only need to add
a new property for the Zihintntl extension.

Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Jason Chien <jason.chien@sifive.com>
Message-ID: <20230726074049.19505-2-jason.chien@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_cfg.h | 1 +
 target/riscv/cpu.c     | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
     bool ext_icbom;
     bool ext_icboz;
     bool ext_zicond;
+    bool ext_zihintntl;
     bool ext_zihintpause;
     bool ext_smstateen;
     bool ext_sstc;
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct isa_ext_data isa_edata_arr[] = {
     ISA_EXT_DATA_ENTRY(zicond, PRIV_VERSION_1_12_0, ext_zicond),
     ISA_EXT_DATA_ENTRY(zicsr, PRIV_VERSION_1_10_0, ext_icsr),
     ISA_EXT_DATA_ENTRY(zifencei, PRIV_VERSION_1_10_0, ext_ifencei),
+    ISA_EXT_DATA_ENTRY(zihintntl, PRIV_VERSION_1_10_0, ext_zihintntl),
     ISA_EXT_DATA_ENTRY(zihintpause, PRIV_VERSION_1_10_0, ext_zihintpause),
     ISA_EXT_DATA_ENTRY(zmmul, PRIV_VERSION_1_12_0, ext_zmmul),
     ISA_EXT_DATA_ENTRY(zawrs, PRIV_VERSION_1_12_0, ext_zawrs),
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
     DEFINE_PROP_BOOL("sscofpmf", RISCVCPU, cfg.ext_sscofpmf, false),
     DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
     DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
+    DEFINE_PROP_BOOL("Zihintntl", RISCVCPU, cfg.ext_zihintntl, true),
     DEFINE_PROP_BOOL("Zihintpause", RISCVCPU, cfg.ext_zihintpause, true),
     DEFINE_PROP_BOOL("Zawrs", RISCVCPU, cfg.ext_zawrs, true),
     DEFINE_PROP_BOOL("Zfa", RISCVCPU, cfg.ext_zfa, true),
--
2.41.0
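Because the property defaults to true, the main command-line use is turning it
off so the ISA string no longer advertises it. The spelling below is an
assumption based on the DEFINE_PROP_BOOL name added above, not documented
behaviour:

    qemu-system-riscv64 -machine virt -cpu rv64,Zihintntl=false ...
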
From: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>

Commit a47842d ("riscv: Add support for the Zfa extension") implemented the
zfa extension. However, it has some typos for fleq.d and fltq.d. Both of them
misused the fltq.s helper function.

Fixes: a47842d ("riscv: Add support for the Zfa extension")
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Message-ID: <20230728003906.768-1-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvzfa.c.inc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvzfa.c.inc b/target/riscv/insn_trans/trans_rvzfa.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvzfa.c.inc
+++ b/target/riscv/insn_trans/trans_rvzfa.c.inc
@@ -XXX,XX +XXX,XX @@ bool trans_fleq_d(DisasContext *ctx, arg_fleq_d *a)
     TCGv_i64 src1 = get_fpr_hs(ctx, a->rs1);
     TCGv_i64 src2 = get_fpr_hs(ctx, a->rs2);

-    gen_helper_fltq_s(dest, cpu_env, src1, src2);
+    gen_helper_fleq_d(dest, cpu_env, src1, src2);
     gen_set_gpr(ctx, a->rd, dest);
     return true;
 }
@@ -XXX,XX +XXX,XX @@ bool trans_fltq_d(DisasContext *ctx, arg_fltq_d *a)
     TCGv_i64 src1 = get_fpr_hs(ctx, a->rs1);
     TCGv_i64 src2 = get_fpr_hs(ctx, a->rs2);

-    gen_helper_fltq_s(dest, cpu_env, src1, src2);
+    gen_helper_fltq_d(dest, cpu_env, src1, src2);
     gen_set_gpr(ctx, a->rd, dest);
     return true;
 }
--
2.41.0
From: Jason Chien <jason.chien@sifive.com>

When writing the upper mtime, we should keep the original lower mtime
whose value is given by cpu_riscv_read_rtc() instead of
cpu_riscv_read_rtc_raw(). The same logic applies to writes to lower mtime.

Signed-off-by: Jason Chien <jason.chien@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20230728082502.26439-1-jason.chien@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/intc/riscv_aclint.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/intc/riscv_aclint.c b/hw/intc/riscv_aclint.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/riscv_aclint.c
+++ b/hw/intc/riscv_aclint.c
@@ -XXX,XX +XXX,XX @@ static void riscv_aclint_mtimer_write(void *opaque, hwaddr addr,
         return;
     } else if (addr == mtimer->time_base || addr == mtimer->time_base + 4) {
         uint64_t rtc_r = cpu_riscv_read_rtc_raw(mtimer->timebase_freq);
+        uint64_t rtc = cpu_riscv_read_rtc(mtimer);

         if (addr == mtimer->time_base) {
             if (size == 4) {
                 /* time_lo for RV32/RV64 */
-                mtimer->time_delta = ((rtc_r & ~0xFFFFFFFFULL) | value) - rtc_r;
+                mtimer->time_delta = ((rtc & ~0xFFFFFFFFULL) | value) - rtc_r;
             } else {
                 /* time for RV64 */
                 mtimer->time_delta = value - rtc_r;
@@ -XXX,XX +XXX,XX @@ static void riscv_aclint_mtimer_write(void *opaque, hwaddr addr,
         } else {
             if (size == 4) {
                 /* time_hi for RV32/RV64 */
-                mtimer->time_delta = (value << 32 | (rtc_r & 0xFFFFFFFF)) - rtc_r;
+                mtimer->time_delta = (value << 32 | (rtc & 0xFFFFFFFF)) - rtc_r;
             } else {
                 qemu_log_mask(LOG_GUEST_ERROR,
                               "aclint-mtimer: invalid time_hi write: %08x",
--
2.41.0
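A small standalone sketch (not QEMU code, all values invented) of why the
preserved half of a 32-bit mtime write must come from the guest-visible time
rather than the raw host counter:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t time_delta = 0x100000000ULL;       /* set by an earlier mtime write */
        uint64_t rtc_raw    = 0xFFFFFFFFULL;        /* raw host-side counter         */
        uint64_t rtc        = rtc_raw + time_delta; /* guest-visible mtime           */
        uint64_t value      = 0x12345678;           /* guest writes time_lo          */

        /* fixed: upper half taken from the guest-visible time */
        uint64_t delta_fixed = ((rtc & ~0xFFFFFFFFULL) | value) - rtc_raw;
        /* buggy: upper half silently reverts to the raw counter's */
        uint64_t delta_buggy = ((rtc_raw & ~0xFFFFFFFFULL) | value) - rtc_raw;

        printf("guest mtime after write (fixed): 0x%llx\n",
               (unsigned long long)(rtc_raw + delta_fixed)); /* 0x112345678 */
        printf("guest mtime after write (buggy): 0x%llx\n",
               (unsigned long long)(rtc_raw + delta_buggy)); /* 0x12345678  */
        return 0;
    }
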
From: Jason Chien <jason.chien@sifive.com>

The variables whose values are given by cpu_riscv_read_rtc() should be named
"rtc". The variables whose values are given by cpu_riscv_read_rtc_raw()
should be named "rtc_r".

Signed-off-by: Jason Chien <jason.chien@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20230728082502.26439-2-jason.chien@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/intc/riscv_aclint.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/intc/riscv_aclint.c b/hw/intc/riscv_aclint.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/riscv_aclint.c
+++ b/hw/intc/riscv_aclint.c
@@ -XXX,XX +XXX,XX @@ static void riscv_aclint_mtimer_write_timecmp(RISCVAclintMTimerState *mtimer,
     uint64_t next;
     uint64_t diff;

-    uint64_t rtc_r = cpu_riscv_read_rtc(mtimer);
+    uint64_t rtc = cpu_riscv_read_rtc(mtimer);

     /* Compute the relative hartid w.r.t the socket */
     hartid = hartid - mtimer->hartid_base;

     mtimer->timecmp[hartid] = value;
-    if (mtimer->timecmp[hartid] <= rtc_r) {
+    if (mtimer->timecmp[hartid] <= rtc) {
         /*
          * If we're setting an MTIMECMP value in the "past",
          * immediately raise the timer interrupt
@@ -XXX,XX +XXX,XX @@ static void riscv_aclint_mtimer_write_timecmp(RISCVAclintMTimerState *mtimer,

     /* otherwise, set up the future timer interrupt */
     qemu_irq_lower(mtimer->timer_irqs[hartid]);
-    diff = mtimer->timecmp[hartid] - rtc_r;
+    diff = mtimer->timecmp[hartid] - rtc;
     /* back to ns (note args switched in muldiv64) */
     uint64_t ns_diff = muldiv64(diff, NANOSECONDS_PER_SECOND, timebase_freq);

--
2.41.0
From: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>

We should not use types that depend on the host architecture for
target_ucontext. This bug was found when running rv32 applications.

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20230811055438.1945-1-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 linux-user/riscv/signal.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/linux-user/riscv/signal.c b/linux-user/riscv/signal.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/riscv/signal.c
+++ b/linux-user/riscv/signal.c
@@ -XXX,XX +XXX,XX @@ struct target_sigcontext {
 }; /* cf. riscv-linux:arch/riscv/include/uapi/asm/ptrace.h */

 struct target_ucontext {
-    unsigned long uc_flags;
-    struct target_ucontext *uc_link;
+    abi_ulong uc_flags;
+    abi_ptr uc_link;
     target_stack_t uc_stack;
     target_sigset_t uc_sigmask;
     uint8_t __unused[1024 / 8 - sizeof(target_sigset_t)];
--
2.41.0
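The layout problem behind this fix can be shown with a tiny C sketch, not QEMU
code; abi_ulong/abi_ptr are stand-ins for QEMU's rv32 definitions, and the
printed sizes are what a typical LP64 host would report:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t abi_ulong;  /* stand-in for QEMU's rv32 abi_ulong */
    typedef uint32_t abi_ptr;    /* stand-in for QEMU's rv32 abi_ptr   */

    struct host_sized { unsigned long uc_flags; void *uc_link; };
    struct abi_sized  { abi_ulong uc_flags; abi_ptr uc_link; };

    int main(void)
    {
        /* host-sized fields inflate the guest-visible header to 16 bytes,
         * while the fixed-width abi types keep the rv32 layout at 8 bytes */
        printf("host-sized header: %zu bytes\n", sizeof(struct host_sized));
        printf("abi-sized header:  %zu bytes\n", sizeof(struct abi_sized));
        return 0;
    }
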
From: Yong-Xuan Wang <yongxuan.wang@sifive.com>

In this patch, we create the APLIC and IMSIC FDT helper functions and
remove M mode AIA devices when using KVM acceleration.

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20230727102439.22554-2-yongxuan.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/riscv/virt.c | 290 +++++++++++++++++++++++-------------------------
 1 file changed, 137 insertions(+), 153 deletions(-)

diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -XXX,XX +XXX,XX @@ static uint32_t imsic_num_bits(uint32_t count)
     return ret;
 }

-static void create_fdt_imsic(RISCVVirtState *s, const MemMapEntry *memmap,
-                             uint32_t *phandle, uint32_t *intc_phandles,
-                             uint32_t *msi_m_phandle, uint32_t *msi_s_phandle)
+static void create_fdt_one_imsic(RISCVVirtState *s, hwaddr base_addr,
+                                 uint32_t *intc_phandles, uint32_t msi_phandle,
+                                 bool m_mode, uint32_t imsic_guest_bits)
 {
     int cpu, socket;
     char *imsic_name;
     MachineState *ms = MACHINE(s);
     int socket_count = riscv_socket_count(ms);
-    uint32_t imsic_max_hart_per_socket, imsic_guest_bits;
+    uint32_t imsic_max_hart_per_socket;
     uint32_t *imsic_cells, *imsic_regs, imsic_addr, imsic_size;

-    *msi_m_phandle = (*phandle)++;
-    *msi_s_phandle = (*phandle)++;
     imsic_cells = g_new0(uint32_t, ms->smp.cpus * 2);
     imsic_regs = g_new0(uint32_t, socket_count * 4);

-    /* M-level IMSIC node */
     for (cpu = 0; cpu < ms->smp.cpus; cpu++) {
         imsic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
-        imsic_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_M_EXT);
+        imsic_cells[cpu * 2 + 1] = cpu_to_be32(m_mode ? IRQ_M_EXT : IRQ_S_EXT);
     }
-    imsic_max_hart_per_socket = 0;
-    for (socket = 0; socket < socket_count; socket++) {
-        imsic_addr = memmap[VIRT_IMSIC_M].base +
-                     socket * VIRT_IMSIC_GROUP_MAX_SIZE;
-        imsic_size = IMSIC_HART_SIZE(0) * s->soc[socket].num_harts;
-        imsic_regs[socket * 4 + 0] = 0;
-        imsic_regs[socket * 4 + 1] = cpu_to_be32(imsic_addr);
-        imsic_regs[socket * 4 + 2] = 0;
-        imsic_regs[socket * 4 + 3] = cpu_to_be32(imsic_size);
-        if (imsic_max_hart_per_socket < s->soc[socket].num_harts) {
-            imsic_max_hart_per_socket = s->soc[socket].num_harts;
-        }
-    }
-    imsic_name = g_strdup_printf("/soc/imsics@%lx",
-                                 (unsigned long)memmap[VIRT_IMSIC_M].base);
-    qemu_fdt_add_subnode(ms->fdt, imsic_name);
-    qemu_fdt_setprop_string(ms->fdt, imsic_name, "compatible",
-                            "riscv,imsics");
-    qemu_fdt_setprop_cell(ms->fdt, imsic_name, "#interrupt-cells",
-                          FDT_IMSIC_INT_CELLS);
-    qemu_fdt_setprop(ms->fdt, imsic_name, "interrupt-controller",
-                     NULL, 0);
-    qemu_fdt_setprop(ms->fdt, imsic_name, "msi-controller",
-                     NULL, 0);
-    qemu_fdt_setprop(ms->fdt, imsic_name, "interrupts-extended",
-                     imsic_cells, ms->smp.cpus * sizeof(uint32_t) * 2);
-    qemu_fdt_setprop(ms->fdt, imsic_name, "reg", imsic_regs,
-                     socket_count * sizeof(uint32_t) * 4);
-    qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,num-ids",
-                          VIRT_IRQCHIP_NUM_MSIS);
-    if (socket_count > 1) {
-        qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,hart-index-bits",
-                              imsic_num_bits(imsic_max_hart_per_socket));
-        qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,group-index-bits",
-                              imsic_num_bits(socket_count));
-        qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,group-index-shift",
-                              IMSIC_MMIO_GROUP_MIN_SHIFT);
-    }
-    qemu_fdt_setprop_cell(ms->fdt, imsic_name, "phandle", *msi_m_phandle);
-
-    g_free(imsic_name);

-    /* S-level IMSIC node */
-    for (cpu = 0; cpu < ms->smp.cpus; cpu++) {
-        imsic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
-        imsic_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_S_EXT);
-    }
-    imsic_guest_bits = imsic_num_bits(s->aia_guests + 1);
     imsic_max_hart_per_socket = 0;
     for (socket = 0; socket < socket_count; socket++) {
-        imsic_addr = memmap[VIRT_IMSIC_S].base +
-                     socket * VIRT_IMSIC_GROUP_MAX_SIZE;
+        imsic_addr = base_addr + socket * VIRT_IMSIC_GROUP_MAX_SIZE;
         imsic_size = IMSIC_HART_SIZE(imsic_guest_bits) *
                      s->soc[socket].num_harts;
         imsic_regs[socket * 4 + 0] = 0;
@@ -XXX,XX +XXX,XX @@ static void create_fdt_imsic(RISCVVirtState *s, const MemMapEntry *memmap,
             imsic_max_hart_per_socket = s->soc[socket].num_harts;
         }
     }
-    imsic_name = g_strdup_printf("/soc/imsics@%lx",
-                                 (unsigned long)memmap[VIRT_IMSIC_S].base);
+
+    imsic_name = g_strdup_printf("/soc/imsics@%lx", (unsigned long)base_addr);
     qemu_fdt_add_subnode(ms->fdt, imsic_name);
-    qemu_fdt_setprop_string(ms->fdt, imsic_name, "compatible",
-                            "riscv,imsics");
+    qemu_fdt_setprop_string(ms->fdt, imsic_name, "compatible", "riscv,imsics");
     qemu_fdt_setprop_cell(ms->fdt, imsic_name, "#interrupt-cells",
-                          FDT_IMSIC_INT_CELLS);
-    qemu_fdt_setprop(ms->fdt, imsic_name, "interrupt-controller",
-                     NULL, 0);
-    qemu_fdt_setprop(ms->fdt, imsic_name, "msi-controller",
-                     NULL, 0);
+                          FDT_IMSIC_INT_CELLS);
+    qemu_fdt_setprop(ms->fdt, imsic_name, "interrupt-controller", NULL, 0);
+    qemu_fdt_setprop(ms->fdt, imsic_name, "msi-controller", NULL, 0);
     qemu_fdt_setprop(ms->fdt, imsic_name, "interrupts-extended",
-                     imsic_cells, ms->smp.cpus * sizeof(uint32_t) * 2);
+                     imsic_cells, ms->smp.cpus * sizeof(uint32_t) * 2);
     qemu_fdt_setprop(ms->fdt, imsic_name, "reg", imsic_regs,
-                     socket_count * sizeof(uint32_t) * 4);
+                     socket_count * sizeof(uint32_t) * 4);
     qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,num-ids",
-                          VIRT_IRQCHIP_NUM_MSIS);
+                          VIRT_IRQCHIP_NUM_MSIS);
+
     if (imsic_guest_bits) {
         qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,guest-index-bits",
-                              imsic_guest_bits);
+                              imsic_guest_bits);
     }
+
     if (socket_count > 1) {
         qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,hart-index-bits",
-                              imsic_num_bits(imsic_max_hart_per_socket));
+                              imsic_num_bits(imsic_max_hart_per_socket));
         qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,group-index-bits",
-                              imsic_num_bits(socket_count));
+                              imsic_num_bits(socket_count));
         qemu_fdt_setprop_cell(ms->fdt, imsic_name, "riscv,group-index-shift",
-                              IMSIC_MMIO_GROUP_MIN_SHIFT);
+                              IMSIC_MMIO_GROUP_MIN_SHIFT);
     }
-    qemu_fdt_setprop_cell(ms->fdt, imsic_name, "phandle", *msi_s_phandle);
-    g_free(imsic_name);
+    qemu_fdt_setprop_cell(ms->fdt, imsic_name, "phandle", msi_phandle);

+    g_free(imsic_name);
     g_free(imsic_regs);
     g_free(imsic_cells);
 }

-static void create_fdt_socket_aplic(RISCVVirtState *s,
-                                    const MemMapEntry *memmap, int socket,
-                                    uint32_t msi_m_phandle,
-                                    uint32_t msi_s_phandle,
-                                    uint32_t *phandle,
-                                    uint32_t *intc_phandles,
-                                    uint32_t *aplic_phandles)
+static void create_fdt_imsic(RISCVVirtState *s, const MemMapEntry *memmap,
+                             uint32_t *phandle, uint32_t *intc_phandles,
+                             uint32_t *msi_m_phandle, uint32_t *msi_s_phandle)
 {
+    *msi_m_phandle = (*phandle)++;
+    *msi_s_phandle = (*phandle)++;
+
+    if (!kvm_enabled()) {
+        /* M-level IMSIC node */
+        create_fdt_one_imsic(s, memmap[VIRT_IMSIC_M].base, intc_phandles,
+                             *msi_m_phandle, true, 0);
+    }
+
+    /* S-level IMSIC node */
+    create_fdt_one_imsic(s, memmap[VIRT_IMSIC_S].base, intc_phandles,
+                         *msi_s_phandle, false,
+                         imsic_num_bits(s->aia_guests + 1));
+
+}
+
+static void create_fdt_one_aplic(RISCVVirtState *s, int socket,
+                                 unsigned long aplic_addr, uint32_t aplic_size,
+                                 uint32_t msi_phandle,
+                                 uint32_t *intc_phandles,
+                                 uint32_t aplic_phandle,
+                                 uint32_t aplic_child_phandle,
+                                 bool m_mode)
+{
     int cpu;
     char *aplic_name;
     uint32_t *aplic_cells;
-    unsigned long aplic_addr;
     MachineState *ms = MACHINE(s);
-    uint32_t aplic_m_phandle, aplic_s_phandle;

-    aplic_m_phandle = (*phandle)++;
-    aplic_s_phandle = (*phandle)++;
     aplic_cells = g_new0(uint32_t, s->soc[socket].num_harts * 2);

-    /* M-level APLIC node */
     for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
         aplic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
-        aplic_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_M_EXT);
+        aplic_cells[cpu * 2 + 1] = cpu_to_be32(m_mode ? IRQ_M_EXT : IRQ_S_EXT);
     }
-    aplic_addr = memmap[VIRT_APLIC_M].base +
-                 (memmap[VIRT_APLIC_M].size * socket);
+
     aplic_name = g_strdup_printf("/soc/aplic@%lx", aplic_addr);
     qemu_fdt_add_subnode(ms->fdt, aplic_name);
     qemu_fdt_setprop_string(ms->fdt, aplic_name, "compatible", "riscv,aplic");
     qemu_fdt_setprop_cell(ms->fdt, aplic_name,
-                          "#interrupt-cells", FDT_APLIC_INT_CELLS);
+                          "#interrupt-cells", FDT_APLIC_INT_CELLS);
     qemu_fdt_setprop(ms->fdt, aplic_name, "interrupt-controller", NULL, 0);
+
     if (s->aia_type == VIRT_AIA_TYPE_APLIC) {
         qemu_fdt_setprop(ms->fdt, aplic_name, "interrupts-extended",
-                         aplic_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 2);
+                         aplic_cells,
+                         s->soc[socket].num_harts * sizeof(uint32_t) * 2);
     } else {
-        qemu_fdt_setprop_cell(ms->fdt, aplic_name, "msi-parent",
-                              msi_m_phandle);
+        qemu_fdt_setprop_cell(ms->fdt, aplic_name, "msi-parent", msi_phandle);
     }
+
     qemu_fdt_setprop_cells(ms->fdt, aplic_name, "reg",
-                           0x0, aplic_addr, 0x0, memmap[VIRT_APLIC_M].size);
+                           0x0, aplic_addr, 0x0, aplic_size);
     qemu_fdt_setprop_cell(ms->fdt, aplic_name, "riscv,num-sources",
-                          VIRT_IRQCHIP_NUM_SOURCES);
-    qemu_fdt_setprop_cell(ms->fdt, aplic_name, "riscv,children",
-                          aplic_s_phandle);
-    qemu_fdt_setprop_cells(ms->fdt, aplic_name, "riscv,delegate",
-                           aplic_s_phandle, 0x1, VIRT_IRQCHIP_NUM_SOURCES);
+                          VIRT_IRQCHIP_NUM_SOURCES);
+
+    if (aplic_child_phandle) {
+        qemu_fdt_setprop_cell(ms->fdt, aplic_name, "riscv,children",
+                              aplic_child_phandle);
+        qemu_fdt_setprop_cells(ms->fdt, aplic_name, "riscv,delegate",
+                               aplic_child_phandle, 0x1,
+                               VIRT_IRQCHIP_NUM_SOURCES);
+    }
+
     riscv_socket_fdt_write_id(ms, aplic_name, socket);
-    qemu_fdt_setprop_cell(ms->fdt, aplic_name, "phandle", aplic_m_phandle);
+    qemu_fdt_setprop_cell(ms->fdt, aplic_name, "phandle", aplic_phandle);
+
     g_free(aplic_name);
+    g_free(aplic_cells);
+}

-    /* S-level APLIC node */
-    for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
-        aplic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
-        aplic_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_S_EXT);
+static void create_fdt_socket_aplic(RISCVVirtState *s,
+                                    const MemMapEntry *memmap, int socket,
+                                    uint32_t msi_m_phandle,
+                                    uint32_t msi_s_phandle,
+                                    uint32_t *phandle,
+                                    uint32_t *intc_phandles,
+                                    uint32_t *aplic_phandles)
+{
+    char *aplic_name;
+    unsigned long aplic_addr;
+    MachineState *ms = MACHINE(s);
+    uint32_t aplic_m_phandle, aplic_s_phandle;
+
+    aplic_m_phandle = (*phandle)++;
+    aplic_s_phandle = (*phandle)++;
+
+    if (!kvm_enabled()) {
+        /* M-level APLIC node */
+        aplic_addr = memmap[VIRT_APLIC_M].base +
+                     (memmap[VIRT_APLIC_M].size * socket);
+        create_fdt_one_aplic(s, socket, aplic_addr, memmap[VIRT_APLIC_M].size,
+                             msi_m_phandle, intc_phandles,
+                             aplic_m_phandle, aplic_s_phandle,
+                             true);
     }
+
+    /* S-level APLIC node */
     aplic_addr = memmap[VIRT_APLIC_S].base +
                  (memmap[VIRT_APLIC_S].size * socket);
+    create_fdt_one_aplic(s, socket, aplic_addr, memmap[VIRT_APLIC_S].size,
+                         msi_s_phandle, intc_phandles,
+                         aplic_s_phandle, 0,
+                         false);
+
     aplic_name = g_strdup_printf("/soc/aplic@%lx", aplic_addr);
-    qemu_fdt_add_subnode(ms->fdt, aplic_name);
-    qemu_fdt_setprop_string(ms->fdt, aplic_name, "compatible", "riscv,aplic");
-    qemu_fdt_setprop_cell(ms->fdt, aplic_name,
-                          "#interrupt-cells", FDT_APLIC_INT_CELLS);
-    qemu_fdt_setprop(ms->fdt, aplic_name, "interrupt-controller", NULL, 0);
-    if (s->aia_type == VIRT_AIA_TYPE_APLIC) {
-        qemu_fdt_setprop(ms->fdt, aplic_name, "interrupts-extended",
-                         aplic_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 2);
-    } else {
-        qemu_fdt_setprop_cell(ms->fdt, aplic_name, "msi-parent",
-                              msi_s_phandle);
-    }
-    qemu_fdt_setprop_cells(ms->fdt, aplic_name, "reg",
-                           0x0, aplic_addr, 0x0, memmap[VIRT_APLIC_S].size);
-    qemu_fdt_setprop_cell(ms->fdt, aplic_name, "riscv,num-sources",
-                          VIRT_IRQCHIP_NUM_SOURCES);
-    riscv_socket_fdt_write_id(ms, aplic_name, socket);
-    qemu_fdt_setprop_cell(ms->fdt, aplic_name, "phandle", aplic_s_phandle);

     if (!socket) {
         platform_bus_add_all_fdt_nodes(ms->fdt, aplic_name,
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aplic(RISCVVirtState *s,

     g_free(aplic_name);

-    g_free(aplic_cells);
     aplic_phandles[socket] = aplic_s_phandle;
 }

@@ -XXX,XX +XXX,XX @@ static DeviceState *virt_create_aia(RISCVVirtAIAType aia_type, int aia_guests,
     int i;
     hwaddr addr;
     uint32_t guest_bits;
-    DeviceState *aplic_m;
-    bool msimode = (aia_type == VIRT_AIA_TYPE_APLIC_IMSIC) ? true : false;
+    DeviceState *aplic_s = NULL;
+    DeviceState *aplic_m = NULL;
+    bool msimode = aia_type == VIRT_AIA_TYPE_APLIC_IMSIC;

     if (msimode) {
-        /* Per-socket M-level IMSICs */
-        addr = memmap[VIRT_IMSIC_M].base + socket * VIRT_IMSIC_GROUP_MAX_SIZE;
-        for (i = 0; i < hart_count; i++) {
-            riscv_imsic_create(addr + i * IMSIC_HART_SIZE(0),
-                               base_hartid + i, true, 1,
-                               VIRT_IRQCHIP_NUM_MSIS);
+        if (!kvm_enabled()) {
+            /* Per-socket M-level IMSICs */
+            addr = memmap[VIRT_IMSIC_M].base +
+                   socket * VIRT_IMSIC_GROUP_MAX_SIZE;
+            for (i = 0; i < hart_count; i++) {
+                riscv_imsic_create(addr + i * IMSIC_HART_SIZE(0),
+                                   base_hartid + i, true, 1,
+                                   VIRT_IRQCHIP_NUM_MSIS);
+            }
         }

         /* Per-socket S-level IMSICs */
@@ -XXX,XX +XXX,XX @@ static DeviceState *virt_create_aia(RISCVVirtAIAType aia_type, int aia_guests,
         }
     }

-    /* Per-socket M-level APLIC */
-    aplic_m = riscv_aplic_create(
-        memmap[VIRT_APLIC_M].base + socket * memmap[VIRT_APLIC_M].size,
-        memmap[VIRT_APLIC_M].size,
-        (msimode) ? 0 : base_hartid,
-        (msimode) ? 0 : hart_count,
-        VIRT_IRQCHIP_NUM_SOURCES,
-        VIRT_IRQCHIP_NUM_PRIO_BITS,
-        msimode, true, NULL);
-
-    if (aplic_m) {
-        /* Per-socket S-level APLIC */
-        riscv_aplic_create(
-            memmap[VIRT_APLIC_S].base + socket * memmap[VIRT_APLIC_S].size,
-            memmap[VIRT_APLIC_S].size,
-            (msimode) ? 0 : base_hartid,
-            (msimode) ? 0 : hart_count,
-            VIRT_IRQCHIP_NUM_SOURCES,
-            VIRT_IRQCHIP_NUM_PRIO_BITS,
-            msimode, false, aplic_m);
+    if (!kvm_enabled()) {
+        /* Per-socket M-level APLIC */
+        aplic_m = riscv_aplic_create(memmap[VIRT_APLIC_M].base +
+                                     socket * memmap[VIRT_APLIC_M].size,
+                                     memmap[VIRT_APLIC_M].size,
+                                     (msimode) ? 0 : base_hartid,
+                                     (msimode) ? 0 : hart_count,
+                                     VIRT_IRQCHIP_NUM_SOURCES,
+                                     VIRT_IRQCHIP_NUM_PRIO_BITS,
+                                     msimode, true, NULL);
     }

-    return aplic_m;
+    /* Per-socket S-level APLIC */
+    aplic_s = riscv_aplic_create(memmap[VIRT_APLIC_S].base +
+                                 socket * memmap[VIRT_APLIC_S].size,
+                                 memmap[VIRT_APLIC_S].size,
+                                 (msimode) ? 0 : base_hartid,
+                                 (msimode) ? 0 : hart_count,
+                                 VIRT_IRQCHIP_NUM_SOURCES,
+                                 VIRT_IRQCHIP_NUM_PRIO_BITS,
+                                 msimode, false, aplic_m);
+
+    return kvm_enabled() ? aplic_s : aplic_m;
 }

 static void create_platform_bus(RISCVVirtState *s, DeviceState *irqchip)
--
2.41.0
From: Yong-Xuan Wang <yongxuan.wang@sifive.com>

We check the in-kernel irqchip support when using KVM acceleration.

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20230727102439.22554-3-yongxuan.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/kvm.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/target/riscv/kvm.c b/target/riscv/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/kvm.c
+++ b/target/riscv/kvm.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init(MachineState *ms, KVMState *s)

 int kvm_arch_irqchip_create(KVMState *s)
 {
-    return 0;
+    if (kvm_kernel_irqchip_split()) {
+        error_report("-machine kernel_irqchip=split is not supported on RISC-V.");
+        exit(1);
+    }
+
+    /*
+     * We can create the VAIA using the newer device control API.
+     */
+    return kvm_check_extension(s, KVM_CAP_DEVICE_CTRL);
 }

 int kvm_arch_process_async_events(CPUState *cs)
--
2.41.0
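As a rough usage sketch for the KVM AIA patches in this series: the aia=
option is the virt machine's existing AIA selector, and the kernel_irqchip
spelling is taken from the error message above; both lines are assumptions
rather than documented invocations:

    # in-kernel AIA irqchip for a KVM guest on the virt board:
    qemu-system-riscv64 -machine virt,aia=aplic-imsic -accel kvm -smp 4 ...
    # rejected by the new check, since split irqchip is not supported on RISC-V:
    qemu-system-riscv64 -machine virt,kernel_irqchip=split -accel kvm ...
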
1
From: Wilfred Mallawa <wilfred.mallawa@wdc.com>
1
From: Yong-Xuan Wang <yongxuan.wang@sifive.com>
2
2
3
Connect spi host[1/0] to opentitan.
3
We create a vAIA chip by using the KVM_DEV_TYPE_RISCV_AIA and then set up
4
the chip with the KVM_DEV_RISCV_AIA_GRP_* APIs.
5
We also extend KVM accelerator to specify the KVM AIA mode. The "riscv-aia"
6
parameter is passed along with --accel in QEMU command-line.
7
1) "riscv-aia=emul": IMSIC is emulated by hypervisor
8
2) "riscv-aia=hwaccel": use hardware guest IMSIC
9
3) "riscv-aia=auto": use the hardware guest IMSICs whenever available
10
otherwise we fall back to software emulation.
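As a usage sketch (assuming the usual -accel property syntax; the machine options shown are illustrative):

    qemu-system-riscv64 -machine virt,aia=aplic-imsic \
        -accel kvm,riscv-aia=hwaccel ...

"riscv-aia=auto" is the default, so omitting the property uses the hardware guest IMSIC when the host offers one and falls back to emulation otherwise.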
4
11
5
Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
12
Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
6
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
13
Reviewed-by: Jim Shu <jim.shu@sifive.com>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
14
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
8
Message-Id: <20220303045426.511588-2-alistair.francis@opensource.wdc.com>
15
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
16
Message-ID: <20230727102439.22554-4-yongxuan.wang@sifive.com>
9
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
17
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
10
---
18
---
11
include/hw/riscv/opentitan.h | 30 +++++++++++++++++++++---------
19
target/riscv/kvm_riscv.h | 4 +
12
hw/riscv/opentitan.c | 36 ++++++++++++++++++++++++++++++++----
20
target/riscv/kvm.c | 186 +++++++++++++++++++++++++++++++++++++++
13
2 files changed, 53 insertions(+), 13 deletions(-)
21
2 files changed, 190 insertions(+)
14
22
15
diff --git a/include/hw/riscv/opentitan.h b/include/hw/riscv/opentitan.h
23
diff --git a/target/riscv/kvm_riscv.h b/target/riscv/kvm_riscv.h
16
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/riscv/opentitan.h
25
--- a/target/riscv/kvm_riscv.h
18
+++ b/include/hw/riscv/opentitan.h
26
+++ b/target/riscv/kvm_riscv.h
19
@@ -XXX,XX +XXX,XX @@
27
@@ -XXX,XX +XXX,XX @@
20
#include "hw/intc/sifive_plic.h"
28
void kvm_riscv_init_user_properties(Object *cpu_obj);
21
#include "hw/char/ibex_uart.h"
29
void kvm_riscv_reset_vcpu(RISCVCPU *cpu);
22
#include "hw/timer/ibex_timer.h"
30
void kvm_riscv_set_irq(RISCVCPU *cpu, int irq, int level);
23
+#include "hw/ssi/ibex_spi_host.h"
31
+void kvm_riscv_aia_create(MachineState *machine, uint64_t group_shift,
24
#include "qom/object.h"
32
+ uint64_t aia_irq_num, uint64_t aia_msi_num,
25
33
+ uint64_t aplic_base, uint64_t imsic_base,
26
#define TYPE_RISCV_IBEX_SOC "riscv.lowrisc.ibex.soc"
34
+ uint64_t guest_num);
27
OBJECT_DECLARE_SIMPLE_TYPE(LowRISCIbexSoCState, RISCV_IBEX_SOC)
28
29
+enum {
30
+ OPENTITAN_SPI_HOST0,
31
+ OPENTITAN_SPI_HOST1,
32
+ OPENTITAN_NUM_SPI_HOSTS,
33
+};
34
+
35
struct LowRISCIbexSoCState {
36
/*< private >*/
37
SysBusDevice parent_obj;
38
@@ -XXX,XX +XXX,XX @@ struct LowRISCIbexSoCState {
39
SiFivePLICState plic;
40
IbexUartState uart;
41
IbexTimerState timer;
42
+ IbexSPIHostState spi_host[OPENTITAN_NUM_SPI_HOSTS];
43
44
MemoryRegion flash_mem;
45
MemoryRegion rom;
46
@@ -XXX,XX +XXX,XX @@ enum {
47
};
48
49
enum {
50
- IBEX_TIMER_TIMEREXPIRED0_0 = 126,
51
- IBEX_UART0_RX_PARITY_ERR_IRQ = 8,
52
- IBEX_UART0_RX_TIMEOUT_IRQ = 7,
53
- IBEX_UART0_RX_BREAK_ERR_IRQ = 6,
54
- IBEX_UART0_RX_FRAME_ERR_IRQ = 5,
55
- IBEX_UART0_RX_OVERFLOW_IRQ = 4,
56
- IBEX_UART0_TX_EMPTY_IRQ = 3,
57
- IBEX_UART0_RX_WATERMARK_IRQ = 2,
58
- IBEX_UART0_TX_WATERMARK_IRQ = 1,
59
+ IBEX_UART0_TX_WATERMARK_IRQ = 1,
60
+ IBEX_UART0_RX_WATERMARK_IRQ = 2,
61
+ IBEX_UART0_TX_EMPTY_IRQ = 3,
62
+ IBEX_UART0_RX_OVERFLOW_IRQ = 4,
63
+ IBEX_UART0_RX_FRAME_ERR_IRQ = 5,
64
+ IBEX_UART0_RX_BREAK_ERR_IRQ = 6,
65
+ IBEX_UART0_RX_TIMEOUT_IRQ = 7,
66
+ IBEX_UART0_RX_PARITY_ERR_IRQ = 8,
67
+ IBEX_TIMER_TIMEREXPIRED0_0 = 126,
68
+ IBEX_SPI_HOST0_ERR_IRQ = 150,
69
+ IBEX_SPI_HOST0_SPI_EVENT_IRQ = 151,
70
+ IBEX_SPI_HOST1_ERR_IRQ = 152,
71
+ IBEX_SPI_HOST1_SPI_EVENT_IRQ = 153,
72
};
73
35
74
#endif
36
#endif
75
diff --git a/hw/riscv/opentitan.c b/hw/riscv/opentitan.c
37
diff --git a/target/riscv/kvm.c b/target/riscv/kvm.c
76
index XXXXXXX..XXXXXXX 100644
38
index XXXXXXX..XXXXXXX 100644
77
--- a/hw/riscv/opentitan.c
39
--- a/target/riscv/kvm.c
78
+++ b/hw/riscv/opentitan.c
40
+++ b/target/riscv/kvm.c
79
@@ -XXX,XX +XXX,XX @@ static void lowrisc_ibex_soc_init(Object *obj)
41
@@ -XXX,XX +XXX,XX @@
80
object_initialize_child(obj, "uart", &s->uart, TYPE_IBEX_UART);
42
#include "exec/address-spaces.h"
81
43
#include "hw/boards.h"
82
object_initialize_child(obj, "timer", &s->timer, TYPE_IBEX_TIMER);
44
#include "hw/irq.h"
83
+
45
+#include "hw/intc/riscv_imsic.h"
84
+ for (int i = 0; i < OPENTITAN_NUM_SPI_HOSTS; i++) {
46
#include "qemu/log.h"
85
+ object_initialize_child(obj, "spi_host[*]", &s->spi_host[i],
47
#include "hw/loader.h"
86
+ TYPE_IBEX_SPI_HOST);
48
#include "kvm_riscv.h"
87
+ }
49
@@ -XXX,XX +XXX,XX @@
50
#include "chardev/char-fe.h"
51
#include "migration/migration.h"
52
#include "sysemu/runstate.h"
53
+#include "hw/riscv/numa.h"
54
55
static uint64_t kvm_riscv_reg_id(CPURISCVState *env, uint64_t type,
56
uint64_t idx)
57
@@ -XXX,XX +XXX,XX @@ bool kvm_arch_cpu_check_are_resettable(void)
58
return true;
88
}
59
}
89
60
90
static void lowrisc_ibex_soc_realize(DeviceState *dev_soc, Error **errp)
61
+static int aia_mode;
62
+
63
+static const char *kvm_aia_mode_str(uint64_t mode)
64
+{
65
+ switch (mode) {
66
+ case KVM_DEV_RISCV_AIA_MODE_EMUL:
67
+ return "emul";
68
+ case KVM_DEV_RISCV_AIA_MODE_HWACCEL:
69
+ return "hwaccel";
70
+ case KVM_DEV_RISCV_AIA_MODE_AUTO:
71
+ default:
72
+ return "auto";
73
+ };
74
+}
75
+
76
+static char *riscv_get_kvm_aia(Object *obj, Error **errp)
77
+{
78
+ return g_strdup(kvm_aia_mode_str(aia_mode));
79
+}
80
+
81
+static void riscv_set_kvm_aia(Object *obj, const char *val, Error **errp)
82
+{
83
+ if (!strcmp(val, "emul")) {
84
+ aia_mode = KVM_DEV_RISCV_AIA_MODE_EMUL;
85
+ } else if (!strcmp(val, "hwaccel")) {
86
+ aia_mode = KVM_DEV_RISCV_AIA_MODE_HWACCEL;
87
+ } else if (!strcmp(val, "auto")) {
88
+ aia_mode = KVM_DEV_RISCV_AIA_MODE_AUTO;
89
+ } else {
90
+ error_setg(errp, "Invalid KVM AIA mode");
91
+ error_append_hint(errp, "Valid values are emul, hwaccel, and auto.\n");
92
+ }
93
+}
94
+
95
void kvm_arch_accel_class_init(ObjectClass *oc)
91
{
96
{
92
const MemMapEntry *memmap = ibex_memmap;
97
+ object_class_property_add_str(oc, "riscv-aia", riscv_get_kvm_aia,
93
+ DeviceState *dev;
98
+ riscv_set_kvm_aia);
94
+ SysBusDevice *busdev;
99
+ object_class_property_set_description(oc, "riscv-aia",
95
MachineState *ms = MACHINE(qdev_get_machine());
100
+ "Set KVM AIA mode. Valid values are "
96
LowRISCIbexSoCState *s = RISCV_IBEX_SOC(dev_soc);
101
+ "emul, hwaccel, and auto. Default "
97
MemoryRegion *sys_mem = get_system_memory();
102
+ "is auto.");
98
@@ -XXX,XX +XXX,XX @@ static void lowrisc_ibex_soc_realize(DeviceState *dev_soc, Error **errp)
103
+ object_property_set_default_str(object_class_property_find(oc, "riscv-aia"),
99
qdev_get_gpio_in(DEVICE(qemu_get_cpu(0)),
104
+ "auto");
100
IRQ_M_TIMER));
105
+}
101
106
+
102
+ /* SPI-Hosts */
107
+void kvm_riscv_aia_create(MachineState *machine, uint64_t group_shift,
103
+ for (int i = 0; i < OPENTITAN_NUM_SPI_HOSTS; ++i) {
108
+ uint64_t aia_irq_num, uint64_t aia_msi_num,
104
+ dev = DEVICE(&(s->spi_host[i]));
109
+ uint64_t aplic_base, uint64_t imsic_base,
105
+ if (!sysbus_realize(SYS_BUS_DEVICE(&s->spi_host[i]), errp)) {
110
+ uint64_t guest_num)
106
+ return;
111
+{
112
+ int ret, i;
113
+ int aia_fd = -1;
114
+ uint64_t default_aia_mode;
115
+ uint64_t socket_count = riscv_socket_count(machine);
116
+ uint64_t max_hart_per_socket = 0;
117
+ uint64_t socket, base_hart, hart_count, socket_imsic_base, imsic_addr;
118
+ uint64_t socket_bits, hart_bits, guest_bits;
119
+
120
+ aia_fd = kvm_create_device(kvm_state, KVM_DEV_TYPE_RISCV_AIA, false);
121
+
122
+ if (aia_fd < 0) {
123
+ error_report("Unable to create in-kernel irqchip");
124
+ exit(1);
125
+ }
126
+
127
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
128
+ KVM_DEV_RISCV_AIA_CONFIG_MODE,
129
+ &default_aia_mode, false, NULL);
130
+ if (ret < 0) {
131
+ error_report("KVM AIA: failed to get current KVM AIA mode");
132
+ exit(1);
133
+ }
134
+ qemu_log("KVM AIA: default mode is %s\n",
135
+ kvm_aia_mode_str(default_aia_mode));
136
+
137
+ if (default_aia_mode != aia_mode) {
138
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
139
+ KVM_DEV_RISCV_AIA_CONFIG_MODE,
140
+ &aia_mode, true, NULL);
141
+ if (ret < 0)
142
+ warn_report("KVM AIA: failed to set KVM AIA mode");
143
+ else
144
+ qemu_log("KVM AIA: set current mode to %s\n",
145
+ kvm_aia_mode_str(aia_mode));
146
+ }
147
+
148
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
149
+ KVM_DEV_RISCV_AIA_CONFIG_SRCS,
150
+ &aia_irq_num, true, NULL);
151
+ if (ret < 0) {
152
+ error_report("KVM AIA: failed to set number of input irq lines");
153
+ exit(1);
154
+ }
155
+
156
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
157
+ KVM_DEV_RISCV_AIA_CONFIG_IDS,
158
+ &aia_msi_num, true, NULL);
159
+ if (ret < 0) {
160
+ error_report("KVM AIA: failed to set number of msi");
161
+ exit(1);
162
+ }
163
+
164
+ socket_bits = find_last_bit(&socket_count, BITS_PER_LONG) + 1;
165
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
166
+ KVM_DEV_RISCV_AIA_CONFIG_GROUP_BITS,
167
+ &socket_bits, true, NULL);
168
+ if (ret < 0) {
169
+ error_report("KVM AIA: failed to set group_bits");
170
+ exit(1);
171
+ }
172
+
173
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
174
+ KVM_DEV_RISCV_AIA_CONFIG_GROUP_SHIFT,
175
+ &group_shift, true, NULL);
176
+ if (ret < 0) {
177
+ error_report("KVM AIA: failed to set group_shift");
178
+ exit(1);
179
+ }
180
+
181
+ guest_bits = guest_num == 0 ? 0 :
182
+ find_last_bit(&guest_num, BITS_PER_LONG) + 1;
183
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
184
+ KVM_DEV_RISCV_AIA_CONFIG_GUEST_BITS,
185
+ &guest_bits, true, NULL);
186
+ if (ret < 0) {
187
+ error_report("KVM AIA: failed to set guest_bits");
188
+ exit(1);
189
+ }
190
+
191
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_ADDR,
192
+ KVM_DEV_RISCV_AIA_ADDR_APLIC,
193
+ &aplic_base, true, NULL);
194
+ if (ret < 0) {
195
+ error_report("KVM AIA: failed to set the base address of APLIC");
196
+ exit(1);
197
+ }
198
+
199
+ for (socket = 0; socket < socket_count; socket++) {
200
+ socket_imsic_base = imsic_base + socket * (1U << group_shift);
201
+ hart_count = riscv_socket_hart_count(machine, socket);
202
+ base_hart = riscv_socket_first_hartid(machine, socket);
203
+
204
+ if (max_hart_per_socket < hart_count) {
205
+ max_hart_per_socket = hart_count;
107
+ }
206
+ }
108
+ busdev = SYS_BUS_DEVICE(dev);
207
+
109
+ sysbus_mmio_map(busdev, 0, memmap[IBEX_DEV_SPI_HOST0 + i].base);
208
+ for (i = 0; i < hart_count; i++) {
110
+
209
+ imsic_addr = socket_imsic_base + i * IMSIC_HART_SIZE(guest_bits);
111
+ switch (i) {
210
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_ADDR,
112
+ case OPENTITAN_SPI_HOST0:
211
+ KVM_DEV_RISCV_AIA_ADDR_IMSIC(i + base_hart),
113
+ sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(DEVICE(&s->plic),
212
+ &imsic_addr, true, NULL);
114
+ IBEX_SPI_HOST0_ERR_IRQ));
213
+ if (ret < 0) {
115
+ sysbus_connect_irq(busdev, 1, qdev_get_gpio_in(DEVICE(&s->plic),
214
+ error_report("KVM AIA: failed to set the IMSIC address for hart %d", i);
116
+ IBEX_SPI_HOST0_SPI_EVENT_IRQ));
215
+ exit(1);
117
+ break;
216
+ }
118
+ case OPENTITAN_SPI_HOST1:
119
+ sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(DEVICE(&s->plic),
120
+ IBEX_SPI_HOST1_ERR_IRQ));
121
+ sysbus_connect_irq(busdev, 1, qdev_get_gpio_in(DEVICE(&s->plic),
122
+ IBEX_SPI_HOST1_SPI_EVENT_IRQ));
123
+ break;
124
+ }
217
+ }
125
+ }
218
+ }
126
+
219
+
127
create_unimplemented_device("riscv.lowrisc.ibex.gpio",
220
+ hart_bits = find_last_bit(&max_hart_per_socket, BITS_PER_LONG) + 1;
128
memmap[IBEX_DEV_GPIO].base, memmap[IBEX_DEV_GPIO].size);
221
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
129
create_unimplemented_device("riscv.lowrisc.ibex.spi_device",
222
+ KVM_DEV_RISCV_AIA_CONFIG_HART_BITS,
130
memmap[IBEX_DEV_SPI_DEVICE].base, memmap[IBEX_DEV_SPI_DEVICE].size);
223
+ &hart_bits, true, NULL);
131
- create_unimplemented_device("riscv.lowrisc.ibex.spi_host0",
224
+ if (ret < 0) {
132
- memmap[IBEX_DEV_SPI_HOST0].base, memmap[IBEX_DEV_SPI_HOST0].size);
225
+ error_report("KVM AIA: failed to set hart_bits");
133
- create_unimplemented_device("riscv.lowrisc.ibex.spi_host1",
226
+ exit(1);
134
- memmap[IBEX_DEV_SPI_HOST1].base, memmap[IBEX_DEV_SPI_HOST1].size);
227
+ }
135
create_unimplemented_device("riscv.lowrisc.ibex.i2c",
228
+
136
memmap[IBEX_DEV_I2C].base, memmap[IBEX_DEV_I2C].size);
229
+ if (kvm_has_gsi_routing()) {
137
create_unimplemented_device("riscv.lowrisc.ibex.pattgen",
230
+ for (uint64_t idx = 0; idx < aia_irq_num + 1; ++idx) {
231
+ /* KVM AIA only has one APLIC instance */
232
+ kvm_irqchip_add_irq_route(kvm_state, idx, 0, idx);
233
+ }
234
+ kvm_gsi_routing_allowed = true;
235
+ kvm_irqchip_commit_routes(kvm_state);
236
+ }
237
+
238
+ ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CTRL,
239
+ KVM_DEV_RISCV_AIA_CTRL_INIT,
240
+ NULL, true, NULL);
241
+ if (ret < 0) {
242
+ error_report("KVM AIA: initialized fail");
243
+ exit(1);
244
+ }
245
+
246
+ kvm_msi_via_irqfd_allowed = kvm_irqfds_enabled();
247
}
138
--
248
--
139
2.35.1
249
2.41.0
diff view generated by jsdifflib
1
From: Frank Chang <frank.chang@sifive.com>
1
From: Yong-Xuan Wang <yongxuan.wang@sifive.com>
2
2
3
RISC-V privilege spec defines that mtime is exposed as a memory-mapped
3
KVM AIA can't emulate APLIC only. When the "aia=aplic" parameter is passed,
4
machine-mode read-write register. However, as QEMU uses host monotonic
4
the APLIC device is emulated by QEMU. For "aia=aplic-imsic", remove the
5
timer as its timer source, mtime is effectively read-only in the RISC-V
5
MMIO operations of the APLIC when using KVM AIA and send wired interrupt
6
ACLINT.
6
signals via the KVM_IRQ_LINE API.
7
After KVM AIA is enabled, MSI messages are delivered via the KVM_SIGNAL_MSI API
8
when the IMSICs receive mmio write requests.
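A condensed sketch of that MSI path, mirroring the riscv_imsic.c hunk further down (no new API, just the shape of the call):

    #if defined(CONFIG_KVM)
        if (kvm_irqchip_in_kernel()) {
            struct kvm_msi msi = {
                .address_lo = extract64(imsic->mmio.addr + addr, 0, 32),
                .address_hi = extract64(imsic->mmio.addr + addr, 32, 32),
                .data       = le32_to_cpu(value),
            };

            /* forward the guest's IMSIC MMIO write to KVM as an MSI */
            kvm_vm_ioctl(kvm_state, KVM_SIGNAL_MSI, &msi);
            return;
        }
    #endif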
7
9
8
This patch makes mtime writable by recording the time delta
10
Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
9
between the mtime value to be written and the timer value at the time
10
mtime is written. The time delta is then added back whenever the timer
11
value is retrieved.
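A minimal sketch of that idea (illustrative variable names; the real logic lives in the riscv_aclint.c hunk below):

    /* write path: remember how far the requested mtime is from the
     * free-running value derived from the host clock */
    mtimer->time_delta = new_mtime - cpu_riscv_read_rtc_raw(mtimer->timebase_freq);

    /* read path: fold the recorded offset back in */
    uint64_t mtime = cpu_riscv_read_rtc_raw(mtimer->timebase_freq) + mtimer->time_delta;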
12
13
Signed-off-by: Frank Chang <frank.chang@sifive.com>
14
Reviewed-by: Jim Shu <jim.shu@sifive.com>
11
Reviewed-by: Jim Shu <jim.shu@sifive.com>
15
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
12
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
16
Message-Id: <20220420080901.14655-4-frank.chang@sifive.com>
13
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
14
Message-ID: <20230727102439.22554-5-yongxuan.wang@sifive.com>
17
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
15
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
18
---
16
---
19
include/hw/intc/riscv_aclint.h | 1 +
17
hw/intc/riscv_aplic.c | 56 ++++++++++++++++++++++++++++++-------------
20
target/riscv/cpu.h | 8 ++--
18
hw/intc/riscv_imsic.c | 25 +++++++++++++++----
21
hw/intc/riscv_aclint.c | 71 ++++++++++++++++++++++++----------
19
2 files changed, 61 insertions(+), 20 deletions(-)
22
target/riscv/cpu_helper.c | 4 +-
23
4 files changed, 57 insertions(+), 27 deletions(-)
24
20
25
diff --git a/include/hw/intc/riscv_aclint.h b/include/hw/intc/riscv_aclint.h
21
diff --git a/hw/intc/riscv_aplic.c b/hw/intc/riscv_aplic.c
26
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
27
--- a/include/hw/intc/riscv_aclint.h
23
--- a/hw/intc/riscv_aplic.c
28
+++ b/include/hw/intc/riscv_aclint.h
24
+++ b/hw/intc/riscv_aplic.c
29
@@ -XXX,XX +XXX,XX @@
25
@@ -XXX,XX +XXX,XX @@
30
typedef struct RISCVAclintMTimerState {
26
#include "hw/irq.h"
31
/*< private >*/
27
#include "target/riscv/cpu.h"
32
SysBusDevice parent_obj;
28
#include "sysemu/sysemu.h"
33
+ uint64_t time_delta;
29
+#include "sysemu/kvm.h"
34
30
#include "migration/vmstate.h"
35
/*< public >*/
31
36
MemoryRegion mmio;
32
#define APLIC_MAX_IDC (1UL << 14)
37
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
33
@@ -XXX,XX +XXX,XX @@
38
index XXXXXXX..XXXXXXX 100644
34
39
--- a/target/riscv/cpu.h
35
#define APLIC_IDC_CLAIMI 0x1c
40
+++ b/target/riscv/cpu.h
36
41
@@ -XXX,XX +XXX,XX @@ struct CPUArchState {
37
+/*
42
type2_trigger_t type2_trig[TRIGGER_TYPE2_NUM];
38
+ * KVM AIA only supports APLIC MSI, fallback to QEMU emulation if we want to use
43
39
+ * APLIC Wired.
44
/* machine specific rdtime callback */
40
+ */
45
- uint64_t (*rdtime_fn)(uint32_t);
41
+static bool is_kvm_aia(bool msimode)
46
- uint32_t rdtime_fn_arg;
47
+ uint64_t (*rdtime_fn)(void *);
48
+ void *rdtime_fn_arg;
49
50
/* machine specific AIA ireg read-modify-write callback */
51
#define AIA_MAKE_IREG(__isel, __priv, __virt, __vgein, __xlen) \
52
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_swap_hypervisor_regs(CPURISCVState *env);
53
int riscv_cpu_claim_interrupts(RISCVCPU *cpu, uint64_t interrupts);
54
uint64_t riscv_cpu_update_mip(RISCVCPU *cpu, uint64_t mask, uint64_t value);
55
#define BOOL_TO_MASK(x) (-!!(x)) /* helper for riscv_cpu_update_mip value */
56
-void riscv_cpu_set_rdtime_fn(CPURISCVState *env, uint64_t (*fn)(uint32_t),
57
- uint32_t arg);
58
+void riscv_cpu_set_rdtime_fn(CPURISCVState *env, uint64_t (*fn)(void *),
59
+ void *arg);
60
void riscv_cpu_set_aia_ireg_rmw_fn(CPURISCVState *env, uint32_t priv,
61
int (*rmw_fn)(void *arg,
62
target_ulong reg,
63
diff --git a/hw/intc/riscv_aclint.c b/hw/intc/riscv_aclint.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/hw/intc/riscv_aclint.c
66
+++ b/hw/intc/riscv_aclint.c
67
@@ -XXX,XX +XXX,XX @@ typedef struct riscv_aclint_mtimer_callback {
68
int num;
69
} riscv_aclint_mtimer_callback;
70
71
-static uint64_t cpu_riscv_read_rtc(uint32_t timebase_freq)
72
+static uint64_t cpu_riscv_read_rtc_raw(uint32_t timebase_freq)
73
{
74
return muldiv64(qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
75
timebase_freq, NANOSECONDS_PER_SECOND);
76
}
77
78
+static uint64_t cpu_riscv_read_rtc(void *opaque)
79
+{
42
+{
80
+ RISCVAclintMTimerState *mtimer = opaque;
43
+ return kvm_irqchip_in_kernel() && msimode;
81
+ return cpu_riscv_read_rtc_raw(mtimer->timebase_freq) + mtimer->time_delta;
82
+}
44
+}
83
+
45
+
84
/*
46
static uint32_t riscv_aplic_read_input_word(RISCVAPLICState *aplic,
85
* Called when timecmp is written to update the QEMU timer or immediately
47
uint32_t word)
86
* trigger timer interrupt if mtimecmp <= current timer value.
87
@@ -XXX,XX +XXX,XX @@ static uint64_t cpu_riscv_read_rtc(uint32_t timebase_freq)
88
static void riscv_aclint_mtimer_write_timecmp(RISCVAclintMTimerState *mtimer,
89
RISCVCPU *cpu,
90
int hartid,
91
- uint64_t value,
92
- uint32_t timebase_freq)
93
+ uint64_t value)
94
{
48
{
95
+ uint32_t timebase_freq = mtimer->timebase_freq;
49
@@ -XXX,XX +XXX,XX @@ static uint32_t riscv_aplic_idc_claimi(RISCVAPLICState *aplic, uint32_t idc)
96
uint64_t next;
50
return topi;
97
uint64_t diff;
51
}
98
52
99
- uint64_t rtc_r = cpu_riscv_read_rtc(timebase_freq);
53
+static void riscv_kvm_aplic_request(void *opaque, int irq, int level)
100
+ uint64_t rtc_r = cpu_riscv_read_rtc(mtimer);
54
+{
101
55
+ kvm_set_irq(kvm_state, irq, !!level);
102
cpu->env.timecmp = value;
56
+}
103
if (cpu->env.timecmp <= rtc_r) {
57
+
104
@@ -XXX,XX +XXX,XX @@ static uint64_t riscv_aclint_mtimer_read(void *opaque, hwaddr addr,
58
static void riscv_aplic_request(void *opaque, int irq, int level)
59
{
60
bool update = false;
61
@@ -XXX,XX +XXX,XX @@ static void riscv_aplic_realize(DeviceState *dev, Error **errp)
62
uint32_t i;
63
RISCVAPLICState *aplic = RISCV_APLIC(dev);
64
65
- aplic->bitfield_words = (aplic->num_irqs + 31) >> 5;
66
- aplic->sourcecfg = g_new0(uint32_t, aplic->num_irqs);
67
- aplic->state = g_new0(uint32_t, aplic->num_irqs);
68
- aplic->target = g_new0(uint32_t, aplic->num_irqs);
69
- if (!aplic->msimode) {
70
- for (i = 0; i < aplic->num_irqs; i++) {
71
- aplic->target[i] = 1;
72
+ if (!is_kvm_aia(aplic->msimode)) {
73
+ aplic->bitfield_words = (aplic->num_irqs + 31) >> 5;
74
+ aplic->sourcecfg = g_new0(uint32_t, aplic->num_irqs);
75
+ aplic->state = g_new0(uint32_t, aplic->num_irqs);
76
+ aplic->target = g_new0(uint32_t, aplic->num_irqs);
77
+ if (!aplic->msimode) {
78
+ for (i = 0; i < aplic->num_irqs; i++) {
79
+ aplic->target[i] = 1;
80
+ }
105
}
81
}
106
} else if (addr == mtimer->time_base) {
82
- }
107
/* time_lo for RV32/RV64 or timecmp for RV64 */
83
- aplic->idelivery = g_new0(uint32_t, aplic->num_harts);
108
- uint64_t rtc = cpu_riscv_read_rtc(mtimer->timebase_freq);
84
- aplic->iforce = g_new0(uint32_t, aplic->num_harts);
109
+ uint64_t rtc = cpu_riscv_read_rtc(mtimer);
85
- aplic->ithreshold = g_new0(uint32_t, aplic->num_harts);
110
return (size == 4) ? (rtc & 0xFFFFFFFF) : rtc;
86
+ aplic->idelivery = g_new0(uint32_t, aplic->num_harts);
111
} else if (addr == mtimer->time_base + 4) {
87
+ aplic->iforce = g_new0(uint32_t, aplic->num_harts);
112
/* time_hi */
88
+ aplic->ithreshold = g_new0(uint32_t, aplic->num_harts);
113
- return (cpu_riscv_read_rtc(mtimer->timebase_freq) >> 32) & 0xFFFFFFFF;
89
114
+ return (cpu_riscv_read_rtc(mtimer) >> 32) & 0xFFFFFFFF;
90
- memory_region_init_io(&aplic->mmio, OBJECT(dev), &riscv_aplic_ops, aplic,
91
- TYPE_RISCV_APLIC, aplic->aperture_size);
92
- sysbus_init_mmio(SYS_BUS_DEVICE(dev), &aplic->mmio);
93
+ memory_region_init_io(&aplic->mmio, OBJECT(dev), &riscv_aplic_ops,
94
+ aplic, TYPE_RISCV_APLIC, aplic->aperture_size);
95
+ sysbus_init_mmio(SYS_BUS_DEVICE(dev), &aplic->mmio);
96
+ }
97
98
/*
99
* Only root APLICs have hardware IRQ lines. All non-root APLICs
100
* have IRQ lines delegated by their parent APLIC.
101
*/
102
if (!aplic->parent) {
103
- qdev_init_gpio_in(dev, riscv_aplic_request, aplic->num_irqs);
104
+ if (is_kvm_aia(aplic->msimode)) {
105
+ qdev_init_gpio_in(dev, riscv_kvm_aplic_request, aplic->num_irqs);
106
+ } else {
107
+ qdev_init_gpio_in(dev, riscv_aplic_request, aplic->num_irqs);
108
+ }
115
}
109
}
116
110
117
qemu_log_mask(LOG_UNIMP,
111
/* Create output IRQ lines for non-MSI mode */
118
@@ -XXX,XX +XXX,XX @@ static void riscv_aclint_mtimer_write(void *opaque, hwaddr addr,
112
@@ -XXX,XX +XXX,XX @@ DeviceState *riscv_aplic_create(hwaddr addr, hwaddr size,
119
uint64_t value, unsigned size)
113
qdev_prop_set_bit(dev, "mmode", mmode);
120
{
114
121
RISCVAclintMTimerState *mtimer = opaque;
115
sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
122
+ int i;
116
- sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, addr);
123
124
if (addr >= mtimer->timecmp_base &&
125
addr < (mtimer->timecmp_base + (mtimer->num_harts << 3))) {
126
@@ -XXX,XX +XXX,XX @@ static void riscv_aclint_mtimer_write(void *opaque, hwaddr addr,
127
/* timecmp_lo for RV32/RV64 */
128
uint64_t timecmp_hi = env->timecmp >> 32;
129
riscv_aclint_mtimer_write_timecmp(mtimer, RISCV_CPU(cpu), hartid,
130
- timecmp_hi << 32 | (value & 0xFFFFFFFF),
131
- mtimer->timebase_freq);
132
+ timecmp_hi << 32 | (value & 0xFFFFFFFF));
133
} else {
134
/* timecmp for RV64 */
135
riscv_aclint_mtimer_write_timecmp(mtimer, RISCV_CPU(cpu), hartid,
136
- value, mtimer->timebase_freq);
137
+ value);
138
}
139
} else if ((addr & 0x7) == 4) {
140
if (size == 4) {
141
/* timecmp_hi for RV32/RV64 */
142
uint64_t timecmp_lo = env->timecmp;
143
riscv_aclint_mtimer_write_timecmp(mtimer, RISCV_CPU(cpu), hartid,
144
- value << 32 | (timecmp_lo & 0xFFFFFFFF),
145
- mtimer->timebase_freq);
146
+ value << 32 | (timecmp_lo & 0xFFFFFFFF));
147
} else {
148
qemu_log_mask(LOG_GUEST_ERROR,
149
"aclint-mtimer: invalid timecmp_hi write: %08x",
150
@@ -XXX,XX +XXX,XX @@ static void riscv_aclint_mtimer_write(void *opaque, hwaddr addr,
151
(uint32_t)addr);
152
}
153
return;
154
- } else if (addr == mtimer->time_base) {
155
- /* time_lo */
156
- qemu_log_mask(LOG_UNIMP,
157
- "aclint-mtimer: time_lo write not implemented");
158
- return;
159
- } else if (addr == mtimer->time_base + 4) {
160
- /* time_hi */
161
- qemu_log_mask(LOG_UNIMP,
162
- "aclint-mtimer: time_hi write not implemented");
163
+ } else if (addr == mtimer->time_base || addr == mtimer->time_base + 4) {
164
+ uint64_t rtc_r = cpu_riscv_read_rtc_raw(mtimer->timebase_freq);
165
+
117
+
166
+ if (addr == mtimer->time_base) {
118
+ if (!is_kvm_aia(msimode)) {
167
+ if (size == 4) {
119
+ sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, addr);
168
+ /* time_lo for RV32/RV64 */
120
+ }
169
+ mtimer->time_delta = ((rtc_r & ~0xFFFFFFFFULL) | value) - rtc_r;
121
170
+ } else {
122
if (parent) {
171
+ /* time for RV64 */
123
riscv_aplic_add_child(parent, dev);
172
+ mtimer->time_delta = value - rtc_r;
124
diff --git a/hw/intc/riscv_imsic.c b/hw/intc/riscv_imsic.c
173
+ }
125
index XXXXXXX..XXXXXXX 100644
174
+ } else {
126
--- a/hw/intc/riscv_imsic.c
175
+ if (size == 4) {
127
+++ b/hw/intc/riscv_imsic.c
176
+ /* time_hi for RV32/RV64 */
128
@@ -XXX,XX +XXX,XX @@
177
+ mtimer->time_delta = (value << 32 | (rtc_r & 0xFFFFFFFF)) - rtc_r;
129
#include "target/riscv/cpu.h"
178
+ } else {
130
#include "target/riscv/cpu_bits.h"
179
+ qemu_log_mask(LOG_GUEST_ERROR,
131
#include "sysemu/sysemu.h"
180
+ "aclint-mtimer: invalid time_hi write: %08x",
132
+#include "sysemu/kvm.h"
181
+ (uint32_t)addr);
133
#include "migration/vmstate.h"
182
+ return;
134
183
+ }
135
#define IMSIC_MMIO_PAGE_LE 0x00
184
+ }
136
@@ -XXX,XX +XXX,XX @@ static void riscv_imsic_write(void *opaque, hwaddr addr, uint64_t value,
137
goto err;
138
}
139
140
+#if defined(CONFIG_KVM)
141
+ if (kvm_irqchip_in_kernel()) {
142
+ struct kvm_msi msi;
185
+
143
+
186
+ /* Check if timer interrupt is triggered for each hart. */
144
+ msi.address_lo = extract64(imsic->mmio.addr + addr, 0, 32);
187
+ for (i = 0; i < mtimer->num_harts; i++) {
145
+ msi.address_hi = extract64(imsic->mmio.addr + addr, 32, 32);
188
+ CPUState *cpu = qemu_get_cpu(mtimer->hartid_base + i);
146
+ msi.data = le32_to_cpu(value);
189
+ CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
147
+
190
+ if (!env) {
148
+ kvm_vm_ioctl(kvm_state, KVM_SIGNAL_MSI, &msi);
191
+ continue;
149
+
192
+ }
150
+ return;
193
+ riscv_aclint_mtimer_write_timecmp(mtimer, RISCV_CPU(cpu),
151
+ }
194
+ i, env->timecmp);
152
+#endif
195
+ }
153
+
196
return;
154
/* Writes only supported for MSI little-endian registers */
197
}
155
page = addr >> IMSIC_MMIO_PAGE_SHIFT;
198
156
if ((addr & (IMSIC_MMIO_PAGE_SZ - 1)) == IMSIC_MMIO_PAGE_LE) {
199
@@ -XXX,XX +XXX,XX @@ DeviceState *riscv_aclint_mtimer_create(hwaddr addr, hwaddr size,
157
@@ -XXX,XX +XXX,XX @@ static void riscv_imsic_realize(DeviceState *dev, Error **errp)
200
continue;
158
CPUState *cpu = cpu_by_arch_id(imsic->hartid);
201
}
159
CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
202
if (provide_rdtime) {
160
203
- riscv_cpu_set_rdtime_fn(env, cpu_riscv_read_rtc, timebase_freq);
161
- imsic->num_eistate = imsic->num_pages * imsic->num_irqs;
204
+ riscv_cpu_set_rdtime_fn(env, cpu_riscv_read_rtc, dev);
162
- imsic->eidelivery = g_new0(uint32_t, imsic->num_pages);
205
}
163
- imsic->eithreshold = g_new0(uint32_t, imsic->num_pages);
206
164
- imsic->eistate = g_new0(uint32_t, imsic->num_eistate);
207
cb->s = RISCV_ACLINT_MTIMER(dev);
165
+ if (!kvm_irqchip_in_kernel()) {
208
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
166
+ imsic->num_eistate = imsic->num_pages * imsic->num_irqs;
209
index XXXXXXX..XXXXXXX 100644
167
+ imsic->eidelivery = g_new0(uint32_t, imsic->num_pages);
210
--- a/target/riscv/cpu_helper.c
168
+ imsic->eithreshold = g_new0(uint32_t, imsic->num_pages);
211
+++ b/target/riscv/cpu_helper.c
169
+ imsic->eistate = g_new0(uint32_t, imsic->num_eistate);
212
@@ -XXX,XX +XXX,XX @@ uint64_t riscv_cpu_update_mip(RISCVCPU *cpu, uint64_t mask, uint64_t value)
170
+ }
213
return old;
171
214
}
172
memory_region_init_io(&imsic->mmio, OBJECT(dev), &riscv_imsic_ops,
215
173
imsic, TYPE_RISCV_IMSIC,
216
-void riscv_cpu_set_rdtime_fn(CPURISCVState *env, uint64_t (*fn)(uint32_t),
217
- uint32_t arg)
218
+void riscv_cpu_set_rdtime_fn(CPURISCVState *env, uint64_t (*fn)(void *),
219
+ void *arg)
220
{
221
env->rdtime_fn = fn;
222
env->rdtime_fn_arg = arg;
223
--
174
--
224
2.35.1
175
2.41.0
diff view generated by jsdifflib
1
From: Ralf Ramsauer <ralf.ramsauer@oth-regensburg.de>
1
From: Yong-Xuan Wang <yongxuan.wang@sifive.com>
2
2
3
The -bios option is silently ignored if used in combination with -enable-kvm.
3
Select KVM AIA when the host kernel has in-kernel AIA chip support.
4
The reason is that the machine starts in S-Mode, and the bios typically runs in
4
Since KVM AIA only has one APLIC instance, we map the QEMU APLIC
5
M-Mode.
5
devices to the KVM APLIC.
6
6
7
Better to exit in that case rather than confuse the user.
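Concretely (an illustrative invocation, not taken from the patch), something like

    qemu-system-riscv64 -machine virt -enable-kvm -bios fw_jump.bin ...

now fails with "Machine mode firmware is not supported in combination with KVM." instead of silently behaving like '-bios none'; '-bios none' itself keeps working.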
7
Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
8
8
Reviewed-by: Jim Shu <jim.shu@sifive.com>
9
Signed-off-by: Ralf Ramsauer <ralf.ramsauer@oth-regensburg.de>
9
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
10
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
10
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
11
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
11
Message-ID: <20230727102439.22554-6-yongxuan.wang@sifive.com>
12
Reviewed-by: Anup Patel <anup@brainfault.org>
13
Message-Id: <20220401121842.2791796-1-ralf.ramsauer@oth-regensburg.de>
14
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
12
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
15
---
13
---
16
hw/riscv/virt.c | 14 ++++++++++----
14
hw/riscv/virt.c | 94 +++++++++++++++++++++++++++++++++----------------
17
1 file changed, 10 insertions(+), 4 deletions(-)
15
1 file changed, 63 insertions(+), 31 deletions(-)
18
16
19
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
17
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
20
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
21
--- a/hw/riscv/virt.c
19
--- a/hw/riscv/virt.c
22
+++ b/hw/riscv/virt.c
20
+++ b/hw/riscv/virt.c
21
@@ -XXX,XX +XXX,XX @@
22
#include "hw/riscv/virt.h"
23
#include "hw/riscv/boot.h"
24
#include "hw/riscv/numa.h"
25
+#include "kvm_riscv.h"
26
#include "hw/intc/riscv_aclint.h"
27
#include "hw/intc/riscv_aplic.h"
28
#include "hw/intc/riscv_imsic.h"
29
@@ -XXX,XX +XXX,XX @@
30
#error "Can't accommodate all IMSIC groups in address space"
31
#endif
32
33
+/* KVM AIA only supports APLIC MSI. APLIC Wired is always emulated by QEMU. */
34
+static bool virt_use_kvm_aia(RISCVVirtState *s)
35
+{
36
+ return kvm_irqchip_in_kernel() && s->aia_type == VIRT_AIA_TYPE_APLIC_IMSIC;
37
+}
38
+
39
static const MemMapEntry virt_memmap[] = {
40
[VIRT_DEBUG] = { 0x0, 0x100 },
41
[VIRT_MROM] = { 0x1000, 0xf000 },
42
@@ -XXX,XX +XXX,XX @@ static void create_fdt_one_aplic(RISCVVirtState *s, int socket,
43
uint32_t *intc_phandles,
44
uint32_t aplic_phandle,
45
uint32_t aplic_child_phandle,
46
- bool m_mode)
47
+ bool m_mode, int num_harts)
48
{
49
int cpu;
50
char *aplic_name;
51
uint32_t *aplic_cells;
52
MachineState *ms = MACHINE(s);
53
54
- aplic_cells = g_new0(uint32_t, s->soc[socket].num_harts * 2);
55
+ aplic_cells = g_new0(uint32_t, num_harts * 2);
56
57
- for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
58
+ for (cpu = 0; cpu < num_harts; cpu++) {
59
aplic_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
60
aplic_cells[cpu * 2 + 1] = cpu_to_be32(m_mode ? IRQ_M_EXT : IRQ_S_EXT);
61
}
62
@@ -XXX,XX +XXX,XX @@ static void create_fdt_one_aplic(RISCVVirtState *s, int socket,
63
64
if (s->aia_type == VIRT_AIA_TYPE_APLIC) {
65
qemu_fdt_setprop(ms->fdt, aplic_name, "interrupts-extended",
66
- aplic_cells,
67
- s->soc[socket].num_harts * sizeof(uint32_t) * 2);
68
+ aplic_cells, num_harts * sizeof(uint32_t) * 2);
69
} else {
70
qemu_fdt_setprop_cell(ms->fdt, aplic_name, "msi-parent", msi_phandle);
71
}
72
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aplic(RISCVVirtState *s,
73
uint32_t msi_s_phandle,
74
uint32_t *phandle,
75
uint32_t *intc_phandles,
76
- uint32_t *aplic_phandles)
77
+ uint32_t *aplic_phandles,
78
+ int num_harts)
79
{
80
char *aplic_name;
81
unsigned long aplic_addr;
82
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aplic(RISCVVirtState *s,
83
create_fdt_one_aplic(s, socket, aplic_addr, memmap[VIRT_APLIC_M].size,
84
msi_m_phandle, intc_phandles,
85
aplic_m_phandle, aplic_s_phandle,
86
- true);
87
+ true, num_harts);
88
}
89
90
/* S-level APLIC node */
91
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_aplic(RISCVVirtState *s,
92
create_fdt_one_aplic(s, socket, aplic_addr, memmap[VIRT_APLIC_S].size,
93
msi_s_phandle, intc_phandles,
94
aplic_s_phandle, 0,
95
- false);
96
+ false, num_harts);
97
98
aplic_name = g_strdup_printf("/soc/aplic@%lx", aplic_addr);
99
100
@@ -XXX,XX +XXX,XX @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
101
*msi_pcie_phandle = msi_s_phandle;
102
}
103
104
- phandle_pos = ms->smp.cpus;
105
- for (socket = (socket_count - 1); socket >= 0; socket--) {
106
- phandle_pos -= s->soc[socket].num_harts;
107
-
108
- if (s->aia_type == VIRT_AIA_TYPE_NONE) {
109
- create_fdt_socket_plic(s, memmap, socket, phandle,
110
- &intc_phandles[phandle_pos], xplic_phandles);
111
- } else {
112
- create_fdt_socket_aplic(s, memmap, socket,
113
- msi_m_phandle, msi_s_phandle, phandle,
114
- &intc_phandles[phandle_pos], xplic_phandles);
115
+ /* KVM AIA only has one APLIC instance */
116
+ if (virt_use_kvm_aia(s)) {
117
+ create_fdt_socket_aplic(s, memmap, 0,
118
+ msi_m_phandle, msi_s_phandle, phandle,
119
+ &intc_phandles[0], xplic_phandles,
120
+ ms->smp.cpus);
121
+ } else {
122
+ phandle_pos = ms->smp.cpus;
123
+ for (socket = (socket_count - 1); socket >= 0; socket--) {
124
+ phandle_pos -= s->soc[socket].num_harts;
125
+
126
+ if (s->aia_type == VIRT_AIA_TYPE_NONE) {
127
+ create_fdt_socket_plic(s, memmap, socket, phandle,
128
+ &intc_phandles[phandle_pos],
129
+ xplic_phandles);
130
+ } else {
131
+ create_fdt_socket_aplic(s, memmap, socket,
132
+ msi_m_phandle, msi_s_phandle, phandle,
133
+ &intc_phandles[phandle_pos],
134
+ xplic_phandles,
135
+ s->soc[socket].num_harts);
136
+ }
137
}
138
}
139
140
g_free(intc_phandles);
141
142
- for (socket = 0; socket < socket_count; socket++) {
143
- if (socket == 0) {
144
- *irq_mmio_phandle = xplic_phandles[socket];
145
- *irq_virtio_phandle = xplic_phandles[socket];
146
- *irq_pcie_phandle = xplic_phandles[socket];
147
- }
148
- if (socket == 1) {
149
- *irq_virtio_phandle = xplic_phandles[socket];
150
- *irq_pcie_phandle = xplic_phandles[socket];
151
- }
152
- if (socket == 2) {
153
- *irq_pcie_phandle = xplic_phandles[socket];
154
+ if (virt_use_kvm_aia(s)) {
155
+ *irq_mmio_phandle = xplic_phandles[0];
156
+ *irq_virtio_phandle = xplic_phandles[0];
157
+ *irq_pcie_phandle = xplic_phandles[0];
158
+ } else {
159
+ for (socket = 0; socket < socket_count; socket++) {
160
+ if (socket == 0) {
161
+ *irq_mmio_phandle = xplic_phandles[socket];
162
+ *irq_virtio_phandle = xplic_phandles[socket];
163
+ *irq_pcie_phandle = xplic_phandles[socket];
164
+ }
165
+ if (socket == 1) {
166
+ *irq_virtio_phandle = xplic_phandles[socket];
167
+ *irq_pcie_phandle = xplic_phandles[socket];
168
+ }
169
+ if (socket == 2) {
170
+ *irq_pcie_phandle = xplic_phandles[socket];
171
+ }
172
}
173
}
174
23
@@ -XXX,XX +XXX,XX @@ static void virt_machine_init(MachineState *machine)
175
@@ -XXX,XX +XXX,XX @@ static void virt_machine_init(MachineState *machine)
24
176
}
25
/*
26
* Only direct boot kernel is currently supported for KVM VM,
27
- * so the "-bios" parameter is ignored and treated like "-bios none"
28
- * when KVM is enabled.
29
+ * so the "-bios" parameter is not supported when KVM is enabled.
30
*/
31
if (kvm_enabled()) {
32
- g_free(machine->firmware);
33
- machine->firmware = g_strdup("none");
34
+ if (machine->firmware) {
35
+ if (strcmp(machine->firmware, "none")) {
36
+ error_report("Machine mode firmware is not supported in "
37
+ "combination with KVM.");
38
+ exit(1);
39
+ }
40
+ } else {
41
+ machine->firmware = g_strdup("none");
42
+ }
43
}
177
}
44
178
179
+ if (virt_use_kvm_aia(s)) {
180
+ kvm_riscv_aia_create(machine, IMSIC_MMIO_GROUP_MIN_SHIFT,
181
+ VIRT_IRQCHIP_NUM_SOURCES, VIRT_IRQCHIP_NUM_MSIS,
182
+ memmap[VIRT_APLIC_S].base,
183
+ memmap[VIRT_IMSIC_S].base,
184
+ s->aia_guests);
185
+ }
186
+
45
if (riscv_is_32bit(&s->soc[0])) {
187
if (riscv_is_32bit(&s->soc[0])) {
188
#if HOST_LONG_BITS == 64
189
/* limit RAM size in a 32-bit system */
46
--
190
--
47
2.35.1
191
2.41.0
diff view generated by jsdifflib
1
From: Niklas Cassel <niklas.cassel@wdc.com>
1
From: Conor Dooley <conor.dooley@microchip.com>
2
2
3
The device tree property "mmu-type" is currently exported as either
3
On a dtb dumped from the virt machine, dt-validate complains:
4
"riscv,sv32" or "riscv,sv48".
4
soc: pmu: {'riscv,event-to-mhpmcounters': [[1, 1, 524281], [2, 2, 524284], [65561, 65561, 524280], [65563, 65563, 524280], [65569, 65569, 524280]], 'compatible': ['riscv,pmu']} should not be valid under {'type': 'object'}
5
from schema $id: http://devicetree.org/schemas/simple-bus.yaml#
6
That's pretty cryptic, but running the dtb back through dtc produces
7
something a lot more reasonable:
8
Warning (simple_bus_reg): /soc/pmu: missing or empty reg/ranges property
5
9
6
However, the riscv cpu device tree binding [1] has a specific value
10
Moving the riscv,pmu node out of the soc bus solves the problem.
7
"riscv,none" for a HART without a MMU.
8
11
9
Set the device tree property "mmu-type" to "riscv,none" when the CPU mmu
12
Signed-off-by: Conor Dooley <conor.dooley@microchip.com>
10
option is disabled using rv32,mmu=off or rv64,mmu=off.
13
Acked-by: Alistair Francis <alistair.francis@wdc.com>
11
14
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
12
[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/devicetree/bindings/riscv/cpus.yaml?h=v5.17
15
Message-ID: <20230727-groom-decline-2c57ce42841c@spud>
13
14
Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com>
15
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
16
Reviewed-by: Frank Chang <frank.chang@sifive.com>
17
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
18
Message-Id: <20220414155510.1364147-1-niklas.cassel@wdc.com>
19
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
16
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
20
---
17
---
21
hw/riscv/virt.c | 10 ++++++++--
18
hw/riscv/virt.c | 2 +-
22
1 file changed, 8 insertions(+), 2 deletions(-)
19
1 file changed, 1 insertion(+), 1 deletion(-)
23
20
24
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
21
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
25
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/riscv/virt.c
23
--- a/hw/riscv/virt.c
27
+++ b/hw/riscv/virt.c
24
+++ b/hw/riscv/virt.c
28
@@ -XXX,XX +XXX,XX @@ static void create_fdt_socket_cpus(RISCVVirtState *s, int socket,
25
@@ -XXX,XX +XXX,XX @@ static void create_fdt_pmu(RISCVVirtState *s)
29
cpu_name = g_strdup_printf("/cpus/cpu@%d",
26
MachineState *ms = MACHINE(s);
30
s->soc[socket].hartid_base + cpu);
27
RISCVCPU hart = s->soc[0].harts[0];
31
qemu_fdt_add_subnode(mc->fdt, cpu_name);
28
32
- qemu_fdt_setprop_string(mc->fdt, cpu_name, "mmu-type",
29
- pmu_name = g_strdup_printf("/soc/pmu");
33
- (is_32_bit) ? "riscv,sv32" : "riscv,sv48");
30
+ pmu_name = g_strdup_printf("/pmu");
34
+ if (riscv_feature(&s->soc[socket].harts[cpu].env,
31
qemu_fdt_add_subnode(ms->fdt, pmu_name);
35
+ RISCV_FEATURE_MMU)) {
32
qemu_fdt_setprop_string(ms->fdt, pmu_name, "compatible", "riscv,pmu");
36
+ qemu_fdt_setprop_string(mc->fdt, cpu_name, "mmu-type",
33
riscv_pmu_generate_fdt_node(ms->fdt, hart.cfg.pmu_num, pmu_name);
37
+ (is_32_bit) ? "riscv,sv32" : "riscv,sv48");
38
+ } else {
39
+ qemu_fdt_setprop_string(mc->fdt, cpu_name, "mmu-type",
40
+ "riscv,none");
41
+ }
42
name = riscv_isa_string(&s->soc[socket].harts[cpu]);
43
qemu_fdt_setprop_string(mc->fdt, cpu_name, "riscv,isa", name);
44
g_free(name);
45
--
34
--
46
2.35.1
35
2.41.0
diff view generated by jsdifflib
1
From: Atish Patra <atishp@rivosinc.com>
1
From: Weiwei Li <liweiwei@iscas.ac.cn>
2
2
3
The virt machine uses privileged specification version 1.12 now.
3
The Svadu specification updated the name of the *envcfg bit from
4
All other machines continue to use the default one defined for that
4
HADE to ADUE.
5
machine unless changed to 1.12 by the user explicitly.
6
5
7
This commit enforces the privilege version for CSRs introduced in
6
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
8
v1.12 or after.
7
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
9
8
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
10
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
9
Message-ID: <20230816141916.66898-1-liweiwei@iscas.ac.cn>
11
Signed-off-by: Atish Patra <atishp@rivosinc.com>
12
Message-Id: <20220303185440.512391-7-atishp@rivosinc.com>
13
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
10
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
14
---
11
---
15
target/riscv/cpu.c | 8 +++++---
12
target/riscv/cpu_bits.h | 8 ++++----
16
target/riscv/csr.c | 5 +++++
13
target/riscv/cpu.c | 4 ++--
17
2 files changed, 10 insertions(+), 3 deletions(-)
14
target/riscv/cpu_helper.c | 6 +++---
15
target/riscv/csr.c | 12 ++++++------
16
4 files changed, 15 insertions(+), 15 deletions(-)
18
17
18
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/riscv/cpu_bits.h
21
+++ b/target/riscv/cpu_bits.h
22
@@ -XXX,XX +XXX,XX @@ typedef enum RISCVException {
23
#define MENVCFG_CBIE (3UL << 4)
24
#define MENVCFG_CBCFE BIT(6)
25
#define MENVCFG_CBZE BIT(7)
26
-#define MENVCFG_HADE (1ULL << 61)
27
+#define MENVCFG_ADUE (1ULL << 61)
28
#define MENVCFG_PBMTE (1ULL << 62)
29
#define MENVCFG_STCE (1ULL << 63)
30
31
/* For RV32 */
32
-#define MENVCFGH_HADE BIT(29)
33
+#define MENVCFGH_ADUE BIT(29)
34
#define MENVCFGH_PBMTE BIT(30)
35
#define MENVCFGH_STCE BIT(31)
36
37
@@ -XXX,XX +XXX,XX @@ typedef enum RISCVException {
38
#define HENVCFG_CBIE MENVCFG_CBIE
39
#define HENVCFG_CBCFE MENVCFG_CBCFE
40
#define HENVCFG_CBZE MENVCFG_CBZE
41
-#define HENVCFG_HADE MENVCFG_HADE
42
+#define HENVCFG_ADUE MENVCFG_ADUE
43
#define HENVCFG_PBMTE MENVCFG_PBMTE
44
#define HENVCFG_STCE MENVCFG_STCE
45
46
/* For RV32 */
47
-#define HENVCFGH_HADE MENVCFGH_HADE
48
+#define HENVCFGH_ADUE MENVCFGH_ADUE
49
#define HENVCFGH_PBMTE MENVCFGH_PBMTE
50
#define HENVCFGH_STCE MENVCFGH_STCE
51
19
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
52
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
20
index XXXXXXX..XXXXXXX 100644
53
index XXXXXXX..XXXXXXX 100644
21
--- a/target/riscv/cpu.c
54
--- a/target/riscv/cpu.c
22
+++ b/target/riscv/cpu.c
55
+++ b/target/riscv/cpu.c
23
@@ -XXX,XX +XXX,XX @@ static void riscv_any_cpu_init(Object *obj)
56
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset_hold(Object *obj)
24
#elif defined(TARGET_RISCV64)
57
env->two_stage_lookup = false;
25
set_misa(env, MXL_RV64, RVI | RVM | RVA | RVF | RVD | RVC | RVU);
58
26
#endif
59
env->menvcfg = (cpu->cfg.ext_svpbmt ? MENVCFG_PBMTE : 0) |
27
- set_priv_version(env, PRIV_VERSION_1_11_0);
60
- (cpu->cfg.ext_svadu ? MENVCFG_HADE : 0);
28
+ set_priv_version(env, PRIV_VERSION_1_12_0);
61
+ (cpu->cfg.ext_svadu ? MENVCFG_ADUE : 0);
29
}
62
env->henvcfg = (cpu->cfg.ext_svpbmt ? HENVCFG_PBMTE : 0) |
30
63
- (cpu->cfg.ext_svadu ? HENVCFG_HADE : 0);
31
#if defined(TARGET_RISCV64)
64
+ (cpu->cfg.ext_svadu ? HENVCFG_ADUE : 0);
32
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
65
66
/* Initialized default priorities of local interrupts. */
67
for (i = 0; i < ARRAY_SIZE(env->miprio); i++) {
68
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
69
index XXXXXXX..XXXXXXX 100644
70
--- a/target/riscv/cpu_helper.c
71
+++ b/target/riscv/cpu_helper.c
72
@@ -XXX,XX +XXX,XX @@ static int get_physical_address(CPURISCVState *env, hwaddr *physical,
33
}
73
}
34
74
35
if (cpu->cfg.priv_spec) {
75
bool pbmte = env->menvcfg & MENVCFG_PBMTE;
36
- if (!g_strcmp0(cpu->cfg.priv_spec, "v1.11.0")) {
76
- bool hade = env->menvcfg & MENVCFG_HADE;
37
+ if (!g_strcmp0(cpu->cfg.priv_spec, "v1.12.0")) {
77
+ bool adue = env->menvcfg & MENVCFG_ADUE;
38
+ priv_version = PRIV_VERSION_1_12_0;
78
39
+ } else if (!g_strcmp0(cpu->cfg.priv_spec, "v1.11.0")) {
79
if (first_stage && two_stage && env->virt_enabled) {
40
priv_version = PRIV_VERSION_1_11_0;
80
pbmte = pbmte && (env->henvcfg & HENVCFG_PBMTE);
41
} else if (!g_strcmp0(cpu->cfg.priv_spec, "v1.10.0")) {
81
- hade = hade && (env->henvcfg & HENVCFG_HADE);
42
priv_version = PRIV_VERSION_1_10_0;
82
+ adue = adue && (env->henvcfg & HENVCFG_ADUE);
43
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
44
if (priv_version) {
45
set_priv_version(env, priv_version);
46
} else if (!env->priv_ver) {
47
- set_priv_version(env, PRIV_VERSION_1_11_0);
48
+ set_priv_version(env, PRIV_VERSION_1_12_0);
49
}
83
}
50
84
51
if (cpu->cfg.mmu) {
85
int ptshift = (levels - 1) * ptidxbits;
86
@@ -XXX,XX +XXX,XX @@ restart:
87
88
/* Page table updates need to be atomic with MTTCG enabled */
89
if (updated_pte != pte && !is_debug) {
90
- if (!hade) {
91
+ if (!adue) {
92
return TRANSLATE_FAIL;
93
}
94
52
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
95
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
53
index XXXXXXX..XXXXXXX 100644
96
index XXXXXXX..XXXXXXX 100644
54
--- a/target/riscv/csr.c
97
--- a/target/riscv/csr.c
55
+++ b/target/riscv/csr.c
98
+++ b/target/riscv/csr.c
56
@@ -XXX,XX +XXX,XX @@ static inline RISCVException riscv_csrrw_check(CPURISCVState *env,
99
@@ -XXX,XX +XXX,XX @@ static RISCVException write_menvcfg(CPURISCVState *env, int csrno,
100
if (riscv_cpu_mxl(env) == MXL_RV64) {
101
mask |= (cfg->ext_svpbmt ? MENVCFG_PBMTE : 0) |
102
(cfg->ext_sstc ? MENVCFG_STCE : 0) |
103
- (cfg->ext_svadu ? MENVCFG_HADE : 0);
104
+ (cfg->ext_svadu ? MENVCFG_ADUE : 0);
105
}
106
env->menvcfg = (env->menvcfg & ~mask) | (val & mask);
107
108
@@ -XXX,XX +XXX,XX @@ static RISCVException write_menvcfgh(CPURISCVState *env, int csrno,
109
const RISCVCPUConfig *cfg = riscv_cpu_cfg(env);
110
uint64_t mask = (cfg->ext_svpbmt ? MENVCFG_PBMTE : 0) |
111
(cfg->ext_sstc ? MENVCFG_STCE : 0) |
112
- (cfg->ext_svadu ? MENVCFG_HADE : 0);
113
+ (cfg->ext_svadu ? MENVCFG_ADUE : 0);
114
uint64_t valh = (uint64_t)val << 32;
115
116
env->menvcfg = (env->menvcfg & ~mask) | (valh & mask);
117
@@ -XXX,XX +XXX,XX @@ static RISCVException read_henvcfg(CPURISCVState *env, int csrno,
118
* henvcfg.stce is read_only 0 when menvcfg.stce = 0
119
* henvcfg.hade is read_only 0 when menvcfg.hade = 0
120
*/
121
- *val = env->henvcfg & (~(HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_HADE) |
122
+ *val = env->henvcfg & (~(HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_ADUE) |
123
env->menvcfg);
124
return RISCV_EXCP_NONE;
125
}
126
@@ -XXX,XX +XXX,XX @@ static RISCVException write_henvcfg(CPURISCVState *env, int csrno,
127
}
128
129
if (riscv_cpu_mxl(env) == MXL_RV64) {
130
- mask |= env->menvcfg & (HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_HADE);
131
+ mask |= env->menvcfg & (HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_ADUE);
132
}
133
134
env->henvcfg = (env->henvcfg & ~mask) | (val & mask);
135
@@ -XXX,XX +XXX,XX @@ static RISCVException read_henvcfgh(CPURISCVState *env, int csrno,
136
return ret;
137
}
138
139
- *val = (env->henvcfg & (~(HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_HADE) |
140
+ *val = (env->henvcfg & (~(HENVCFG_PBMTE | HENVCFG_STCE | HENVCFG_ADUE) |
141
env->menvcfg)) >> 32;
142
return RISCV_EXCP_NONE;
143
}
144
@@ -XXX,XX +XXX,XX @@ static RISCVException write_henvcfgh(CPURISCVState *env, int csrno,
145
target_ulong val)
57
{
146
{
58
/* check privileges and return RISCV_EXCP_ILLEGAL_INST if check fails */
147
uint64_t mask = env->menvcfg & (HENVCFG_PBMTE | HENVCFG_STCE |
59
int read_only = get_field(csrno, 0xC00) == 3;
148
- HENVCFG_HADE);
60
+ int csr_min_priv = csr_ops[csrno].min_priv_ver;
149
+ HENVCFG_ADUE);
61
#if !defined(CONFIG_USER_ONLY)
150
uint64_t valh = (uint64_t)val << 32;
62
int effective_priv = env->priv;
151
RISCVException ret;
63
64
@@ -XXX,XX +XXX,XX @@ static inline RISCVException riscv_csrrw_check(CPURISCVState *env,
65
return RISCV_EXCP_ILLEGAL_INST;
66
}
67
68
+ if (env->priv_ver < csr_min_priv) {
69
+ return RISCV_EXCP_ILLEGAL_INST;
70
+ }
71
+
72
return csr_ops[csrno].predicate(env, csrno);
73
}
74
152
75
--
153
--
76
2.35.1
154
2.41.0
diff view generated by jsdifflib
1
From: Atish Patra <atishp@rivosinc.com>
1
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
2
2
3
The Linux kernel parses the ISA extensions from "riscv,isa" DT
3
On the same emulated RISC-V host, the 'host' KVM CPU takes 4 times
4
property. It used to parse only the single letter base extensions
4
longer to boot than the 'rv64' KVM CPU.
5
until now. A generic ISA extension parsing framework was proposed[1]
6
recently that can parse multi-letter ISA extensions as well.
7
5
8
Generate the extended ISA string by appending the available ISA extensions
6
The reason is an unintended behavior of riscv_cpu_satp_mode_finalize()
9
to the "riscv,isa" string if it is enabled so that kernel can process it.
7
when satp_mode.supported = 0, i.e. when cpu_init() does not set
8
satp_mode_max_supported(). satp_mode_max_from_map(map) does:
10
9
11
[1] https://lkml.org/lkml/2022/2/15/263
10
31 - __builtin_clz(map)
12
11
13
Reviewed-by: Anup Patel <anup@brainfault.org>
12
This means that, if satp_mode.supported = 0, satp_mode_supported_max
14
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
13
will be '31 - 32'. But this is C, so satp_mode_supported_max will gladly
15
Reviewed-by: Frank Chang <frank.chang@sifive.com>
14
set it to UINT_MAX (4294967295). After that, if the user didn't set a
16
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
15
satp_mode, set_satp_mode_default_map(cpu) will make
17
Tested-by: Bin Meng <bmeng.cn@gmail.com>
16
18
Signed-off-by: Atish Patra <atishp@rivosinc.com>
17
cfg.satp_mode.map = cfg.satp_mode.supported
19
Suggested-by: Heiko Stubner <heiko@sntech.de>
18
20
Signed-off-by: Atish Patra <atishp@rivosinc.com>
19
So satp_mode.map = 0. And then satp_mode_map_max will be set to
21
Message-Id: <20220329195657.1725425-1-atishp@rivosinc.com>
20
satp_mode_max_from_map(cpu->cfg.satp_mode.map), i.e. also UINT_MAX. The
21
guard "satp_mode_map_max > satp_mode_supported_max" doesn't protect us
22
here since both are UINT_MAX.
23
24
And finally we have 2 loops:
25
26
for (int i = satp_mode_map_max - 1; i >= 0; --i) {
27
28
Which are, in fact, 2 loops from UINT_MAX -1 to -1. This is where the
29
extra delay when booting the 'host' CPU is coming from.
30
31
Commit 43d1de32f8 already set a precedence for satp_mode.supported = 0
32
in a different manner. We're doing the same here. If supported == 0,
33
interpret as 'the CPU wants the OS to handle satp mode alone' and skip
34
satp_mode_finalize().
35
36
We'll also put a guard in satp_mode_max_from_map() to assert out if map
37
is 0 since the function is not ready to deal with it.
38
39
Cc: Alexandre Ghiti <alexghiti@rivosinc.com>
40
Fixes: 6f23aaeb9b ("riscv: Allow user to set the satp mode")
41
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
42
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
43
Message-ID: <20230817152903.694926-1-dbarboza@ventanamicro.com>
22
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
44
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
23
---
45
---
24
target/riscv/cpu.c | 60 ++++++++++++++++++++++++++++++++++++++++++++++
46
target/riscv/cpu.c | 23 ++++++++++++++++++++---
25
1 file changed, 60 insertions(+)
47
1 file changed, 20 insertions(+), 3 deletions(-)
26
48
27
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
49
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
28
index XXXXXXX..XXXXXXX 100644
50
index XXXXXXX..XXXXXXX 100644
29
--- a/target/riscv/cpu.c
51
--- a/target/riscv/cpu.c
30
+++ b/target/riscv/cpu.c
52
+++ b/target/riscv/cpu.c
31
@@ -XXX,XX +XXX,XX @@
53
@@ -XXX,XX +XXX,XX @@ static uint8_t satp_mode_from_str(const char *satp_mode_str)
32
54
33
static const char riscv_single_letter_exts[] = "IEMAFDQCPVH";
55
uint8_t satp_mode_max_from_map(uint32_t map)
34
56
{
35
+struct isa_ext_data {
57
+ /*
36
+ const char *name;
58
+ * 'map = 0' will make us return (31 - 32), which C will
37
+ bool enabled;
59
+ * happily overflow to UINT_MAX. There's no good result to
38
+};
60
+ * return if 'map = 0' (e.g. returning 0 will be ambiguous
61
+ * with the result for 'map = 1').
62
+ *
63
+ * Assert out if map = 0. Callers will have to deal with
64
+ * it outside of this function.
65
+ */
66
+ g_assert(map > 0);
39
+
67
+
40
const char * const riscv_int_regnames[] = {
68
/* map here has at least one bit set, so no problem with clz */
41
"x0/zero", "x1/ra", "x2/sp", "x3/gp", "x4/tp", "x5/t0", "x6/t1",
69
return 31 - __builtin_clz(map);
42
"x7/t2", "x8/s0", "x9/s1", "x10/a0", "x11/a1", "x12/a2", "x13/a3",
43
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_class_init(ObjectClass *c, void *data)
44
device_class_set_props(dc, riscv_cpu_properties);
45
}
70
}
46
71
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
47
+#define ISA_EDATA_ENTRY(name, prop) {#name, cpu->cfg.prop}
72
static void riscv_cpu_satp_mode_finalize(RISCVCPU *cpu, Error **errp)
73
{
74
bool rv32 = riscv_cpu_mxl(&cpu->env) == MXL_RV32;
75
- uint8_t satp_mode_map_max;
76
- uint8_t satp_mode_supported_max =
77
- satp_mode_max_from_map(cpu->cfg.satp_mode.supported);
78
+ uint8_t satp_mode_map_max, satp_mode_supported_max;
48
+
79
+
49
+static void riscv_isa_string_ext(RISCVCPU *cpu, char **isa_str, int max_str_len)
80
+ /* The CPU wants the OS to decide which satp mode to use */
50
+{
81
+ if (cpu->cfg.satp_mode.supported == 0) {
51
+ char *old = *isa_str;
82
+ return;
52
+ char *new = *isa_str;
53
+ int i;
54
+
55
+ /**
56
+ * Here are the ordering rules of extension naming defined by RISC-V
57
+ * specification :
58
+ * 1. All extensions should be separated from other multi-letter extensions
59
+ * by an underscore.
60
+ * 2. The first letter following the 'Z' conventionally indicates the most
61
+ * closely related alphabetical extension category, IMAFDQLCBKJTPVH.
62
+ * If multiple 'Z' extensions are named, they should be ordered first
63
+ * by category, then alphabetically within a category.
64
+ * 3. Standard supervisor-level extensions (starts with 'S') should be
65
+ * listed after standard unprivileged extensions. If multiple
66
+ * supervisor-level extensions are listed, they should be ordered
67
+ * alphabetically.
68
+ * 4. Non-standard extensions (starts with 'X') must be listed after all
69
+ * standard extensions. They must be separated from other multi-letter
70
+ * extensions by an underscore.
71
+ */
72
+ struct isa_ext_data isa_edata_arr[] = {
73
+ ISA_EDATA_ENTRY(zfh, ext_zfh),
74
+ ISA_EDATA_ENTRY(zfhmin, ext_zfhmin),
75
+ ISA_EDATA_ENTRY(zfinx, ext_zfinx),
76
+ ISA_EDATA_ENTRY(zhinx, ext_zhinx),
77
+ ISA_EDATA_ENTRY(zhinxmin, ext_zhinxmin),
78
+ ISA_EDATA_ENTRY(zdinx, ext_zdinx),
79
+ ISA_EDATA_ENTRY(zba, ext_zba),
80
+ ISA_EDATA_ENTRY(zbb, ext_zbb),
81
+ ISA_EDATA_ENTRY(zbc, ext_zbc),
82
+ ISA_EDATA_ENTRY(zbs, ext_zbs),
83
+ ISA_EDATA_ENTRY(zve32f, ext_zve32f),
84
+ ISA_EDATA_ENTRY(zve64f, ext_zve64f),
85
+ ISA_EDATA_ENTRY(svinval, ext_svinval),
86
+ ISA_EDATA_ENTRY(svnapot, ext_svnapot),
87
+ ISA_EDATA_ENTRY(svpbmt, ext_svpbmt),
88
+ };
89
+
90
+ for (i = 0; i < ARRAY_SIZE(isa_edata_arr); i++) {
91
+ if (isa_edata_arr[i].enabled) {
92
+ new = g_strconcat(old, "_", isa_edata_arr[i].name, NULL);
93
+ g_free(old);
94
+ old = new;
95
+ }
96
+ }
83
+ }
97
+
84
+
98
+ *isa_str = new;
85
+ satp_mode_supported_max =
99
+}
86
+ satp_mode_max_from_map(cpu->cfg.satp_mode.supported);
100
+
87
101
char *riscv_isa_string(RISCVCPU *cpu)
88
if (cpu->cfg.satp_mode.map == 0) {
102
{
89
if (cpu->cfg.satp_mode.init == 0) {
103
int i;
104
@@ -XXX,XX +XXX,XX @@ char *riscv_isa_string(RISCVCPU *cpu)
105
}
106
}
107
*p = '\0';
108
+ riscv_isa_string_ext(cpu, &isa_str, maxlen);
109
return isa_str;
110
}
111
112
--
90
--
113
2.35.1
91
2.41.0
diff view generated by jsdifflib
1
From: Bin Meng <bin.meng@windriver.com>
1
From: Vineet Gupta <vineetg@rivosinc.com>
2
2
3
Turn on the native debug feature by default for all CPUs.
3
zicond now has codegen support in both llvm and gcc.
4
4
5
Signed-off-by: Bin Meng <bin.meng@windriver.com>
5
This change allows seamless enabling/testing of zicond in downstream
6
projects. e.g. currently riscv-gnu-toolchain parses elf attributes
7
to create a cmdline for qemu but falls short of enabling it because of
8
the "x-" prefix.
9
10
Signed-off-by: Vineet Gupta <vineetg@rivosinc.com>
11
Message-ID: <20230808181715.436395-1-vineetg@rivosinc.com>
6
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
12
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
7
Message-Id: <20220421003324.1134983-6-bmeng.cn@gmail.com>
8
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
13
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
9
---
14
---
10
target/riscv/cpu.c | 2 +-
15
target/riscv/cpu.c | 2 +-
11
1 file changed, 1 insertion(+), 1 deletion(-)
16
1 file changed, 1 insertion(+), 1 deletion(-)
12
17
13
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
18
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
14
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
15
--- a/target/riscv/cpu.c
20
--- a/target/riscv/cpu.c
16
+++ b/target/riscv/cpu.c
21
+++ b/target/riscv/cpu.c
17
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_properties[] = {
22
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
18
DEFINE_PROP_BOOL("Zve64f", RISCVCPU, cfg.ext_zve64f, false),
23
DEFINE_PROP_BOOL("zcf", RISCVCPU, cfg.ext_zcf, false),
19
DEFINE_PROP_BOOL("mmu", RISCVCPU, cfg.mmu, true),
24
DEFINE_PROP_BOOL("zcmp", RISCVCPU, cfg.ext_zcmp, false),
20
DEFINE_PROP_BOOL("pmp", RISCVCPU, cfg.pmp, true),
25
DEFINE_PROP_BOOL("zcmt", RISCVCPU, cfg.ext_zcmt, false),
21
- DEFINE_PROP_BOOL("debug", RISCVCPU, cfg.debug, false),
26
+ DEFINE_PROP_BOOL("zicond", RISCVCPU, cfg.ext_zicond, false),
22
+ DEFINE_PROP_BOOL("debug", RISCVCPU, cfg.debug, true),
27
23
28
/* Vendor-specific custom extensions */
24
DEFINE_PROP_STRING("priv_spec", RISCVCPU, cfg.priv_spec),
29
DEFINE_PROP_BOOL("xtheadba", RISCVCPU, cfg.ext_xtheadba, false),
25
DEFINE_PROP_STRING("vext_spec", RISCVCPU, cfg.vext_spec),
30
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_extensions[] = {
31
DEFINE_PROP_BOOL("xventanacondops", RISCVCPU, cfg.ext_XVentanaCondOps, false),
32
33
/* These are experimental so mark with 'x-' */
34
- DEFINE_PROP_BOOL("x-zicond", RISCVCPU, cfg.ext_zicond, false),
35
36
/* ePMP 0.9.3 */
37
DEFINE_PROP_BOOL("x-epmp", RISCVCPU, cfg.epmp, false),
26
--
38
--
27
2.35.1
39
2.41.0
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
2
2
3
The riscv_raise_exception function stores its argument into
3
A build with --enable-debug and without KVM will fail as follows:
4
exception_index and then exits to the main loop. When we
5
have already set exception_index, we can just exit directly.
6
4
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
/usr/bin/ld: libqemu-riscv64-softmmu.fa.p/hw_riscv_virt.c.o: in function `virt_machine_init':
8
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
6
./qemu/build/../hw/riscv/virt.c:1465: undefined reference to `kvm_riscv_aia_create'
9
Message-Id: <20220401125948.79292-2-richard.henderson@linaro.org>
7
8
This happens because the code block with "if virt_use_kvm_aia(s)" isn't
9
being ignored by the debug build, resulting in an undefined reference to
10
a KVM only function.
11
12
Adding a 'kvm_enabled()' conditional together with virt_use_kvm_aia() will
13
make the compiler crop the kvm_riscv_aia_create() call entirely from a
14
non-KVM build. Note that adding the 'kvm_enabled()' conditional inside
15
virt_use_kvm_aia() won't fix the build because this function would need
16
to be inlined multiple times to make the compiler zero out the entire
17
block.
18
19
While we're at it, use kvm_enabled() in all instances where
20
virt_use_kvm_aia() is checked to allow the compiler to elide these other
21
kvm-only instances as well.
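The idiom relied on here can be reduced to a minimal stand-alone sketch (illustrative names and a simplified macro, not the actual QEMU code): when KVM is compiled out, kvm_enabled() is a compile-time constant 0, so the guarded block and its undefined symbol reference disappear even at -O0.

    /*
     * Hypothetical stand-alone sketch of the pattern described above; the
     * names and the exact macro definition are illustrative only.
     */
    #ifdef CONFIG_KVM
    extern int kvm_allowed;
    #define kvm_enabled() (kvm_allowed)
    #else
    #define kvm_enabled() (0)          /* constant 0 when KVM is compiled out */
    #endif

    extern void kvm_only_create(void); /* defined only in KVM-enabled builds */
    extern int use_kvm_aia(void);

    void machine_init(void)
    {
        /*
         * With kvm_enabled() folding to 0, the condition is constant false,
         * so the compiler drops the call (and its undefined symbol
         * reference) even at -O0 and the non-KVM link succeeds.
         */
        if (kvm_enabled() && use_kvm_aia()) {
            kvm_only_create();
        }
    }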
22
23
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
24
Fixes: dbdb99948e ("target/riscv: select KVM AIA in riscv virt machine")
25
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
26
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
27
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
28
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
29
Message-ID: <20230830133503.711138-2-dbarboza@ventanamicro.com>
10
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
30
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
11
---
31
---
12
target/riscv/cpu_helper.c | 6 +++---
32
hw/riscv/virt.c | 6 +++---
13
1 file changed, 3 insertions(+), 3 deletions(-)
33
1 file changed, 3 insertions(+), 3 deletions(-)
14
34
15
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
35
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
16
index XXXXXXX..XXXXXXX 100644
36
index XXXXXXX..XXXXXXX 100644
17
--- a/target/riscv/cpu_helper.c
37
--- a/hw/riscv/virt.c
18
+++ b/target/riscv/cpu_helper.c
38
+++ b/hw/riscv/virt.c
19
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr,
39
@@ -XXX,XX +XXX,XX @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
20
env->badaddr = addr;
21
env->two_stage_lookup = riscv_cpu_virt_enabled(env) ||
22
riscv_cpu_two_stage_lookup(mmu_idx);
23
- riscv_raise_exception(&cpu->env, cs->exception_index, retaddr);
24
+ cpu_loop_exit_restore(cs, retaddr);
25
}
26
27
void riscv_cpu_do_unaligned_access(CPUState *cs, vaddr addr,
28
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_do_unaligned_access(CPUState *cs, vaddr addr,
29
env->badaddr = addr;
30
env->two_stage_lookup = riscv_cpu_virt_enabled(env) ||
31
riscv_cpu_two_stage_lookup(mmu_idx);
32
- riscv_raise_exception(env, cs->exception_index, retaddr);
33
+ cpu_loop_exit_restore(cs, retaddr);
34
}
35
36
bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
37
@@ -XXX,XX +XXX,XX @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
38
first_stage_error,
39
riscv_cpu_virt_enabled(env) ||
40
riscv_cpu_two_stage_lookup(mmu_idx));
41
- riscv_raise_exception(env, cs->exception_index, retaddr);
42
+ cpu_loop_exit_restore(cs, retaddr);
43
}
40
}
44
41
45
return true;
42
/* KVM AIA only has one APLIC instance */
43
- if (virt_use_kvm_aia(s)) {
44
+ if (kvm_enabled() && virt_use_kvm_aia(s)) {
45
create_fdt_socket_aplic(s, memmap, 0,
46
msi_m_phandle, msi_s_phandle, phandle,
47
&intc_phandles[0], xplic_phandles,
48
@@ -XXX,XX +XXX,XX @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
49
50
g_free(intc_phandles);
51
52
- if (virt_use_kvm_aia(s)) {
53
+ if (kvm_enabled() && virt_use_kvm_aia(s)) {
54
*irq_mmio_phandle = xplic_phandles[0];
55
*irq_virtio_phandle = xplic_phandles[0];
56
*irq_pcie_phandle = xplic_phandles[0];
57
@@ -XXX,XX +XXX,XX @@ static void virt_machine_init(MachineState *machine)
58
}
59
}
60
61
- if (virt_use_kvm_aia(s)) {
62
+ if (kvm_enabled() && virt_use_kvm_aia(s)) {
63
kvm_riscv_aia_create(machine, IMSIC_MMIO_GROUP_MIN_SHIFT,
64
VIRT_IRQCHIP_NUM_SOURCES, VIRT_IRQCHIP_NUM_MSIS,
65
memmap[VIRT_APLIC_S].base,
46
--
66
--
47
2.35.1
67
2.41.0
68
69
New patch
1
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
1
2
3
Commit 6df0b37e2ab breaks a --enable-debug build in a non-KVM
4
environment with the following error:
5
6
/usr/bin/ld: libqemu-riscv64-softmmu.fa.p/hw_intc_riscv_aplic.c.o: in function `riscv_kvm_aplic_request':
7
./qemu/build/../hw/intc/riscv_aplic.c:486: undefined reference to `kvm_set_irq'
8
collect2: error: ld returned 1 exit status
9
10
This happens because the debug build will poke into the
11
'if (is_kvm_aia(aplic->msimode))' block and fail to find a reference to
12
the KVM only function riscv_kvm_aplic_request().
13
14
There are multiple solutions to fix this. We'll go with the same
15
solution from the previous patch, i.e. add a kvm_enabled() conditional
16
to filter out the block. But there's a catch: riscv_kvm_aplic_request()
17
is a local function that would end up being unused if the compiler crops
18
the block, and this won't work. Quoting Richard Henderson's explanation
19
in [1]:
20
21
"(...) the compiler won't eliminate entire unused functions with -O0"
22
23
We'll solve it by moving riscv_kvm_aplic_request() to kvm.c and add its
24
declaration in kvm_riscv.h, where all other KVM specific public
25
functions are already declared. Other archs handle KVM-specific code in
26
this manner and we expect to do the same from now on.
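The catch can likewise be shown with a small stand-alone sketch (illustrative names, not the actual QEMU code): even when the guarded call folds away, -O0 still emits the unused static helper, so its reference to the KVM-only symbol survives to link time unless the helper lives in a KVM-only file.

    /* Hypothetical sketch of the -O0 pitfall; names are illustrative. */
    extern void kvm_set_irq_line(int irq, int level); /* KVM builds only */

    /*
     * A static helper whose only caller sits behind a constant-false
     * condition.  At -O0 the compiler still emits this function, so the
     * undefined reference to kvm_set_irq_line() remains and the non-KVM
     * link fails.  Moving the helper into a file compiled only for KVM
     * builds removes it entirely.
     */
    static void kvm_irq_handler(int irq, int level)
    {
        kvm_set_irq_line(irq, level);
    }

    void aplic_realize(void)
    {
        if (0 /* kvm_enabled() && is_kvm_aia(...) folds to 0 here */) {
            kvm_irq_handler(1, 1);
        }
    }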
27
28
[1] https://lore.kernel.org/qemu-riscv/d2f1ad02-eb03-138f-9d08-db676deeed05@linaro.org/
29
30
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
31
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
32
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
33
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
34
Message-ID: <20230830133503.711138-3-dbarboza@ventanamicro.com>
35
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
36
---
37
target/riscv/kvm_riscv.h | 1 +
38
hw/intc/riscv_aplic.c | 8 ++------
39
target/riscv/kvm.c | 5 +++++
40
3 files changed, 8 insertions(+), 6 deletions(-)
41
42
diff --git a/target/riscv/kvm_riscv.h b/target/riscv/kvm_riscv.h
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/riscv/kvm_riscv.h
45
+++ b/target/riscv/kvm_riscv.h
46
@@ -XXX,XX +XXX,XX @@ void kvm_riscv_aia_create(MachineState *machine, uint64_t group_shift,
47
uint64_t aia_irq_num, uint64_t aia_msi_num,
48
uint64_t aplic_base, uint64_t imsic_base,
49
uint64_t guest_num);
50
+void riscv_kvm_aplic_request(void *opaque, int irq, int level);
51
52
#endif
53
diff --git a/hw/intc/riscv_aplic.c b/hw/intc/riscv_aplic.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/hw/intc/riscv_aplic.c
56
+++ b/hw/intc/riscv_aplic.c
57
@@ -XXX,XX +XXX,XX @@
58
#include "target/riscv/cpu.h"
59
#include "sysemu/sysemu.h"
60
#include "sysemu/kvm.h"
61
+#include "kvm_riscv.h"
62
#include "migration/vmstate.h"
63
64
#define APLIC_MAX_IDC (1UL << 14)
65
@@ -XXX,XX +XXX,XX @@ static uint32_t riscv_aplic_idc_claimi(RISCVAPLICState *aplic, uint32_t idc)
66
return topi;
67
}
68
69
-static void riscv_kvm_aplic_request(void *opaque, int irq, int level)
70
-{
71
- kvm_set_irq(kvm_state, irq, !!level);
72
-}
73
-
74
static void riscv_aplic_request(void *opaque, int irq, int level)
75
{
76
bool update = false;
77
@@ -XXX,XX +XXX,XX @@ static void riscv_aplic_realize(DeviceState *dev, Error **errp)
78
* have IRQ lines delegated by their parent APLIC.
79
*/
80
if (!aplic->parent) {
81
- if (is_kvm_aia(aplic->msimode)) {
82
+ if (kvm_enabled() && is_kvm_aia(aplic->msimode)) {
83
qdev_init_gpio_in(dev, riscv_kvm_aplic_request, aplic->num_irqs);
84
} else {
85
qdev_init_gpio_in(dev, riscv_aplic_request, aplic->num_irqs);
86
diff --git a/target/riscv/kvm.c b/target/riscv/kvm.c
87
index XXXXXXX..XXXXXXX 100644
88
--- a/target/riscv/kvm.c
89
+++ b/target/riscv/kvm.c
90
@@ -XXX,XX +XXX,XX @@
91
#include "sysemu/runstate.h"
92
#include "hw/riscv/numa.h"
93
94
+void riscv_kvm_aplic_request(void *opaque, int irq, int level)
95
+{
96
+ kvm_set_irq(kvm_state, irq, !!level);
97
+}
98
+
99
static uint64_t kvm_riscv_reg_id(CPURISCVState *env, uint64_t type,
100
uint64_t idx)
101
{
102
--
103
2.41.0
104
105
1
From: Bin Meng <bin.meng@windriver.com>
1
From: Robbin Ehn <rehn@rivosinc.com>
2
2
3
Add a subsection to machine.c to migrate debug CSR state.
3
This patch adds the new extensions in
4
linux 6.5 to the hwprobe syscall.
4
5
5
Signed-off-by: Bin Meng <bin.meng@windriver.com>
6
It also fixes the RVC check to OR in the correct value.
6
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
7
The previous variable happened to contain 0, so the check
7
Message-Id: <20220421003324.1134983-5-bmeng.cn@gmail.com>
8
still worked despite the bug.
9
10
Signed-off-by: Robbin Ehn <rehn@rivosinc.com>
11
Acked-by: Richard Henderson <richard.henderson@linaro.org>
12
Acked-by: Alistair Francis <alistair.francis@wdc.com>
13
Message-ID: <bc82203b72d7efb30f1b4a8f9eb3d94699799dc8.camel@rivosinc.com>
8
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
14
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
9
---
15
---
10
target/riscv/machine.c | 32 ++++++++++++++++++++++++++++++++
16
linux-user/syscall.c | 14 +++++++++++++-
11
1 file changed, 32 insertions(+)
17
1 file changed, 13 insertions(+), 1 deletion(-)
12
18
13
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
19
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
14
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
15
--- a/target/riscv/machine.c
21
--- a/linux-user/syscall.c
16
+++ b/target/riscv/machine.c
22
+++ b/linux-user/syscall.c
17
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_kvmtimer = {
23
@@ -XXX,XX +XXX,XX @@ static int do_getdents64(abi_long dirfd, abi_long arg2, abi_long count)
18
VMSTATE_UINT64(env.kvm_timer_time, RISCVCPU),
24
#define RISCV_HWPROBE_KEY_IMA_EXT_0 4
19
VMSTATE_UINT64(env.kvm_timer_compare, RISCVCPU),
25
#define RISCV_HWPROBE_IMA_FD (1 << 0)
20
VMSTATE_UINT64(env.kvm_timer_state, RISCVCPU),
26
#define RISCV_HWPROBE_IMA_C (1 << 1)
21
+ VMSTATE_END_OF_LIST()
27
+#define RISCV_HWPROBE_IMA_V (1 << 2)
22
+ }
28
+#define RISCV_HWPROBE_EXT_ZBA (1 << 3)
23
+};
29
+#define RISCV_HWPROBE_EXT_ZBB (1 << 4)
24
+
30
+#define RISCV_HWPROBE_EXT_ZBS (1 << 5)
25
+static bool debug_needed(void *opaque)
31
26
+{
32
#define RISCV_HWPROBE_KEY_CPUPERF_0 5
27
+ RISCVCPU *cpu = opaque;
33
#define RISCV_HWPROBE_MISALIGNED_UNKNOWN (0 << 0)
28
+ CPURISCVState *env = &cpu->env;
34
@@ -XXX,XX +XXX,XX @@ static void risc_hwprobe_fill_pairs(CPURISCVState *env,
29
+
35
riscv_has_ext(env, RVD) ?
30
+ return riscv_feature(env, RISCV_FEATURE_DEBUG);
36
RISCV_HWPROBE_IMA_FD : 0;
31
+}
37
value |= riscv_has_ext(env, RVC) ?
32
38
- RISCV_HWPROBE_IMA_C : pair->value;
33
+static const VMStateDescription vmstate_debug_type2 = {
39
+ RISCV_HWPROBE_IMA_C : 0;
34
+ .name = "cpu/debug/type2",
40
+ value |= riscv_has_ext(env, RVV) ?
35
+ .version_id = 1,
41
+ RISCV_HWPROBE_IMA_V : 0;
36
+ .minimum_version_id = 1,
42
+ value |= cfg->ext_zba ?
37
+ .fields = (VMStateField[]) {
43
+ RISCV_HWPROBE_EXT_ZBA : 0;
38
+ VMSTATE_UINTTL(mcontrol, type2_trigger_t),
44
+ value |= cfg->ext_zbb ?
39
+ VMSTATE_UINTTL(maddress, type2_trigger_t),
45
+ RISCV_HWPROBE_EXT_ZBB : 0;
40
+ VMSTATE_END_OF_LIST()
46
+ value |= cfg->ext_zbs ?
41
+ }
47
+ RISCV_HWPROBE_EXT_ZBS : 0;
42
+};
48
__put_user(value, &pair->value);
43
+
49
break;
44
+static const VMStateDescription vmstate_debug = {
50
case RISCV_HWPROBE_KEY_CPUPERF_0:
45
+ .name = "cpu/debug",
46
+ .version_id = 1,
47
+ .minimum_version_id = 1,
48
+ .needed = debug_needed,
49
+ .fields = (VMStateField[]) {
50
+ VMSTATE_UINTTL(env.trigger_cur, RISCVCPU),
51
+ VMSTATE_STRUCT_ARRAY(env.type2_trig, RISCVCPU, TRIGGER_TYPE2_NUM,
52
+ 0, vmstate_debug_type2, type2_trigger_t),
53
VMSTATE_END_OF_LIST()
54
}
55
};
56
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_riscv_cpu = {
57
&vmstate_rv128,
58
&vmstate_kvmtimer,
59
&vmstate_envcfg,
60
+ &vmstate_debug,
61
NULL
62
}
63
};
64
--
51
--
65
2.35.1
52
2.41.0
1
From: Dylan Jhong <dylan@andestech.com>
1
From: Ard Biesheuvel <ardb@kernel.org>
2
2
3
The current riscv_load_fdt() forces fdt_load_addr to be placed at a dram address within 3GB,
3
Use the accelerated SubBytes/ShiftRows/AddRoundKey AES helper to
4
but not all platforms have dram_base within 3GB.
4
implement the first half of the key schedule derivation. This does not
5
actually involve shifting rows, so clone the same value into all four
6
columns of the AES vector to counter that operation.
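A stand-alone sketch (not the QEMU helper) shows why replicating the value neutralises ShiftRows: the operation only rotates bytes within each row, and a state whose four columns are identical has constant rows, so the rotation is the identity.

    /*
     * Stand-alone illustration (not QEMU code): ShiftRows rotates row r
     * left by r positions.  If all four columns of the state are equal,
     * every row is constant, so the rotation changes nothing.
     */
    #include <stdint.h>
    #include <string.h>

    static void shift_rows(uint8_t s[4][4])          /* s[row][col] */
    {
        for (int r = 1; r < 4; r++) {
            uint8_t t[4];
            for (int c = 0; c < 4; c++) {
                t[c] = s[r][(c + r) % 4];            /* rotate row r left by r */
            }
            memcpy(s[r], t, 4);
        }
    }

    int columns_equal_is_fixed_point(uint32_t col)
    {
        uint8_t s[4][4], orig[4][4];

        for (int r = 0; r < 4; r++) {
            for (int c = 0; c < 4; c++) {
                s[r][c] = (uint8_t)(col >> (8 * r)); /* same column replicated */
            }
        }
        memcpy(orig, s, sizeof(s));
        shift_rows(s);
        return memcmp(orig, s, sizeof(s)) == 0;      /* always true */
    }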
5
7
6
This patch adds an exception for dram base not within 3GB,
8
Cc: Richard Henderson <richard.henderson@linaro.org>
7
which will place the fdt at a 16MB-aligned address below dram_end.
9
Cc: Philippe Mathieu-Daudé <philmd@linaro.org>
8
10
Cc: Palmer Dabbelt <palmer@dabbelt.com>
9
riscv_setup_rom_reset_vec() also needs to be modified to carry the 64-bit fdt address.
11
Cc: Alistair Francis <alistair.francis@wdc.com>
10
12
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
11
Signed-off-by: Dylan Jhong <dylan@andestech.com>
13
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
12
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-Id: <20220419115945.37945-1-dylan@andestech.com>
15
Message-ID: <20230831154118.138727-1-ardb@kernel.org>
14
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
16
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
15
---
17
---
16
include/hw/riscv/boot.h | 4 ++--
18
target/riscv/crypto_helper.c | 17 +++++------------
17
hw/riscv/boot.c | 12 +++++++-----
19
1 file changed, 5 insertions(+), 12 deletions(-)
18
2 files changed, 9 insertions(+), 7 deletions(-)
19
20
20
diff --git a/include/hw/riscv/boot.h b/include/hw/riscv/boot.h
21
diff --git a/target/riscv/crypto_helper.c b/target/riscv/crypto_helper.c
21
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
22
--- a/include/hw/riscv/boot.h
23
--- a/target/riscv/crypto_helper.c
23
+++ b/include/hw/riscv/boot.h
24
+++ b/target/riscv/crypto_helper.c
24
@@ -XXX,XX +XXX,XX @@ target_ulong riscv_load_kernel(const char *kernel_filename,
25
@@ -XXX,XX +XXX,XX @@ target_ulong HELPER(aes64ks1i)(target_ulong rs1, target_ulong rnum)
25
symbol_fn_t sym_cb);
26
26
hwaddr riscv_load_initrd(const char *filename, uint64_t mem_size,
27
uint8_t enc_rnum = rnum;
27
uint64_t kernel_entry, hwaddr *start);
28
uint32_t temp = (RS1 >> 32) & 0xFFFFFFFF;
28
-uint32_t riscv_load_fdt(hwaddr dram_start, uint64_t dram_size, void *fdt);
29
- uint8_t rcon_ = 0;
29
+uint64_t riscv_load_fdt(hwaddr dram_start, uint64_t dram_size, void *fdt);
30
- target_ulong result;
30
void riscv_setup_rom_reset_vec(MachineState *machine, RISCVHartArrayState *harts,
31
+ AESState t, rc = {};
31
hwaddr saddr,
32
32
hwaddr rom_base, hwaddr rom_size,
33
if (enc_rnum != 0xA) {
33
uint64_t kernel_entry,
34
temp = ror32(temp, 8); /* Rotate right by 8 */
34
- uint32_t fdt_load_addr, void *fdt);
35
- rcon_ = round_consts[enc_rnum];
35
+ uint64_t fdt_load_addr, void *fdt);
36
+ rc.w[0] = rc.w[1] = round_consts[enc_rnum];
36
void riscv_rom_copy_firmware_info(MachineState *machine, hwaddr rom_base,
37
}
37
hwaddr rom_size,
38
38
uint32_t reset_vec_size,
39
- temp = ((uint32_t)AES_sbox[(temp >> 24) & 0xFF] << 24) |
39
diff --git a/hw/riscv/boot.c b/hw/riscv/boot.c
40
- ((uint32_t)AES_sbox[(temp >> 16) & 0xFF] << 16) |
40
index XXXXXXX..XXXXXXX 100644
41
- ((uint32_t)AES_sbox[(temp >> 8) & 0xFF] << 8) |
41
--- a/hw/riscv/boot.c
42
- ((uint32_t)AES_sbox[(temp >> 0) & 0xFF] << 0);
42
+++ b/hw/riscv/boot.c
43
+ t.w[0] = t.w[1] = t.w[2] = t.w[3] = temp;
43
@@ -XXX,XX +XXX,XX @@ hwaddr riscv_load_initrd(const char *filename, uint64_t mem_size,
44
+ aesenc_SB_SR_AK(&t, &t, &rc, false);
44
return *start + size;
45
46
- temp ^= rcon_;
47
-
48
- result = ((uint64_t)temp << 32) | temp;
49
-
50
- return result;
51
+ return t.d[0];
45
}
52
}
46
53
47
-uint32_t riscv_load_fdt(hwaddr dram_base, uint64_t mem_size, void *fdt)
54
target_ulong HELPER(aes64im)(target_ulong rs1)
48
+uint64_t riscv_load_fdt(hwaddr dram_base, uint64_t mem_size, void *fdt)
49
{
50
- uint32_t temp, fdt_addr;
51
+ uint64_t temp, fdt_addr;
52
hwaddr dram_end = dram_base + mem_size;
53
int ret, fdtsize = fdt_totalsize(fdt);
54
55
@@ -XXX,XX +XXX,XX @@ uint32_t riscv_load_fdt(hwaddr dram_base, uint64_t mem_size, void *fdt)
56
* Thus, put it at an 16MB aligned address that less than fdt size from the
57
* end of dram or 3GB whichever is lesser.
58
*/
59
- temp = MIN(dram_end, 3072 * MiB);
60
+ temp = (dram_base < 3072 * MiB) ? MIN(dram_end, 3072 * MiB) : dram_end;
61
fdt_addr = QEMU_ALIGN_DOWN(temp - fdtsize, 16 * MiB);
62
63
ret = fdt_pack(fdt);
64
@@ -XXX,XX +XXX,XX @@ void riscv_setup_rom_reset_vec(MachineState *machine, RISCVHartArrayState *harts
65
hwaddr start_addr,
66
hwaddr rom_base, hwaddr rom_size,
67
uint64_t kernel_entry,
68
- uint32_t fdt_load_addr, void *fdt)
69
+ uint64_t fdt_load_addr, void *fdt)
70
{
71
int i;
72
uint32_t start_addr_hi32 = 0x00000000;
73
+ uint32_t fdt_load_addr_hi32 = 0x00000000;
74
75
if (!riscv_is_32bit(harts)) {
76
start_addr_hi32 = start_addr >> 32;
77
+ fdt_load_addr_hi32 = fdt_load_addr >> 32;
78
}
79
/* reset vector */
80
uint32_t reset_vec[10] = {
81
@@ -XXX,XX +XXX,XX @@ void riscv_setup_rom_reset_vec(MachineState *machine, RISCVHartArrayState *harts
82
start_addr, /* start: .dword */
83
start_addr_hi32,
84
fdt_load_addr, /* fdt_laddr: .dword */
85
- 0x00000000,
86
+ fdt_load_addr_hi32,
87
/* fw_dyn: */
88
};
89
if (riscv_is_32bit(harts)) {
90
--
55
--
91
2.35.1
56
2.41.0
57
58
1
From: Bin Meng <bin.meng@windriver.com>
1
From: Akihiko Odaki <akihiko.odaki@daynix.com>
2
2
3
This adds debug CSR read/write support to the RISC-V CSR RW table.
3
riscv_trigger_init() had been called on reset events that can happen
4
several times for a CPU and it allocated timers for itrigger. If old
5
timers were present, they were simply overwritten by the new timers,
6
resulting in a memory leak.
4
7
5
Signed-off-by: Bin Meng <bin.meng@windriver.com>
8
Divide riscv_trigger_init() into two functions, namely
9
riscv_trigger_realize() and riscv_trigger_reset_hold(), and call them at
10
the appropriate times. The timer allocation will happen only once for a
11
CPU in riscv_trigger_realize().
12
13
Fixes: 5a4ae64cac ("target/riscv: Add itrigger support when icount is enabled")
14
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
15
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
16
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
6
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
17
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
7
Message-Id: <20220421003324.1134983-4-bmeng.cn@gmail.com>
18
Message-ID: <20230818034059.9146-1-akihiko.odaki@daynix.com>
8
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
19
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
9
---
20
---
10
target/riscv/debug.h | 2 ++
21
target/riscv/debug.h | 3 ++-
11
target/riscv/cpu.c | 4 ++++
22
target/riscv/cpu.c | 8 +++++++-
12
target/riscv/csr.c | 57 ++++++++++++++++++++++++++++++++++++++++++++
23
target/riscv/debug.c | 15 ++++++++++++---
13
target/riscv/debug.c | 27 +++++++++++++++++++++
24
3 files changed, 21 insertions(+), 5 deletions(-)
14
4 files changed, 90 insertions(+)
15
25
16
diff --git a/target/riscv/debug.h b/target/riscv/debug.h
26
diff --git a/target/riscv/debug.h b/target/riscv/debug.h
17
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
18
--- a/target/riscv/debug.h
28
--- a/target/riscv/debug.h
19
+++ b/target/riscv/debug.h
29
+++ b/target/riscv/debug.h
20
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_debug_excp_handler(CPUState *cs);
30
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_debug_excp_handler(CPUState *cs);
21
bool riscv_cpu_debug_check_breakpoint(CPUState *cs);
31
bool riscv_cpu_debug_check_breakpoint(CPUState *cs);
22
bool riscv_cpu_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp);
32
bool riscv_cpu_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp);
23
33
24
+void riscv_trigger_init(CPURISCVState *env);
34
-void riscv_trigger_init(CPURISCVState *env);
25
+
35
+void riscv_trigger_realize(CPURISCVState *env);
26
#endif /* RISCV_DEBUG_H */
36
+void riscv_trigger_reset_hold(CPURISCVState *env);
37
38
bool riscv_itrigger_enabled(CPURISCVState *env);
39
void riscv_itrigger_update_priv(CPURISCVState *env);
27
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
40
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
28
index XXXXXXX..XXXXXXX 100644
41
index XXXXXXX..XXXXXXX 100644
29
--- a/target/riscv/cpu.c
42
--- a/target/riscv/cpu.c
30
+++ b/target/riscv/cpu.c
43
+++ b/target/riscv/cpu.c
31
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset(DeviceState *dev)
44
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset_hold(Object *obj)
32
set_default_nan_mode(1, &env->fp_status);
33
45
34
#ifndef CONFIG_USER_ONLY
46
#ifndef CONFIG_USER_ONLY
35
+ if (riscv_feature(env, RISCV_FEATURE_DEBUG)) {
47
if (cpu->cfg.debug) {
36
+ riscv_trigger_init(env);
48
- riscv_trigger_init(env);
49
+ riscv_trigger_reset_hold(env);
50
}
51
52
if (kvm_enabled()) {
53
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
54
55
riscv_cpu_register_gdb_regs_for_features(cs);
56
57
+#ifndef CONFIG_USER_ONLY
58
+ if (cpu->cfg.debug) {
59
+ riscv_trigger_realize(&cpu->env);
37
+ }
60
+ }
61
+#endif
38
+
62
+
39
if (kvm_enabled()) {
63
qemu_init_vcpu(cs);
40
kvm_riscv_reset_vcpu(cpu);
64
cpu_reset(cs);
41
}
65
42
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/riscv/csr.c
45
+++ b/target/riscv/csr.c
46
@@ -XXX,XX +XXX,XX @@ static RISCVException epmp(CPURISCVState *env, int csrno)
47
48
return RISCV_EXCP_ILLEGAL_INST;
49
}
50
+
51
+static RISCVException debug(CPURISCVState *env, int csrno)
52
+{
53
+ if (riscv_feature(env, RISCV_FEATURE_DEBUG)) {
54
+ return RISCV_EXCP_NONE;
55
+ }
56
+
57
+ return RISCV_EXCP_ILLEGAL_INST;
58
+}
59
#endif
60
61
/* User Floating-Point CSRs */
62
@@ -XXX,XX +XXX,XX @@ static RISCVException write_pmpaddr(CPURISCVState *env, int csrno,
63
return RISCV_EXCP_NONE;
64
}
65
66
+static RISCVException read_tselect(CPURISCVState *env, int csrno,
67
+ target_ulong *val)
68
+{
69
+ *val = tselect_csr_read(env);
70
+ return RISCV_EXCP_NONE;
71
+}
72
+
73
+static RISCVException write_tselect(CPURISCVState *env, int csrno,
74
+ target_ulong val)
75
+{
76
+ tselect_csr_write(env, val);
77
+ return RISCV_EXCP_NONE;
78
+}
79
+
80
+static RISCVException read_tdata(CPURISCVState *env, int csrno,
81
+ target_ulong *val)
82
+{
83
+ /* return 0 in tdata1 to end the trigger enumeration */
84
+ if (env->trigger_cur >= TRIGGER_NUM && csrno == CSR_TDATA1) {
85
+ *val = 0;
86
+ return RISCV_EXCP_NONE;
87
+ }
88
+
89
+ if (!tdata_available(env, csrno - CSR_TDATA1)) {
90
+ return RISCV_EXCP_ILLEGAL_INST;
91
+ }
92
+
93
+ *val = tdata_csr_read(env, csrno - CSR_TDATA1);
94
+ return RISCV_EXCP_NONE;
95
+}
96
+
97
+static RISCVException write_tdata(CPURISCVState *env, int csrno,
98
+ target_ulong val)
99
+{
100
+ if (!tdata_available(env, csrno - CSR_TDATA1)) {
101
+ return RISCV_EXCP_ILLEGAL_INST;
102
+ }
103
+
104
+ tdata_csr_write(env, csrno - CSR_TDATA1, val);
105
+ return RISCV_EXCP_NONE;
106
+}
107
+
108
/*
109
* Functions to access Pointer Masking feature registers
110
* We have to check if current priv lvl could modify
111
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
112
[CSR_PMPADDR14] = { "pmpaddr14", pmp, read_pmpaddr, write_pmpaddr },
113
[CSR_PMPADDR15] = { "pmpaddr15", pmp, read_pmpaddr, write_pmpaddr },
114
115
+ /* Debug CSRs */
116
+ [CSR_TSELECT] = { "tselect", debug, read_tselect, write_tselect },
117
+ [CSR_TDATA1] = { "tdata1", debug, read_tdata, write_tdata },
118
+ [CSR_TDATA2] = { "tdata2", debug, read_tdata, write_tdata },
119
+ [CSR_TDATA3] = { "tdata3", debug, read_tdata, write_tdata },
120
+
121
/* User Pointer Masking */
122
[CSR_UMTE] = { "umte", pointer_masking, read_umte, write_umte },
123
[CSR_UPMMASK] = { "upmmask", pointer_masking, read_upmmask, write_upmmask },
124
diff --git a/target/riscv/debug.c b/target/riscv/debug.c
66
diff --git a/target/riscv/debug.c b/target/riscv/debug.c
125
index XXXXXXX..XXXXXXX 100644
67
index XXXXXXX..XXXXXXX 100644
126
--- a/target/riscv/debug.c
68
--- a/target/riscv/debug.c
127
+++ b/target/riscv/debug.c
69
+++ b/target/riscv/debug.c
128
@@ -XXX,XX +XXX,XX @@ bool riscv_cpu_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp)
70
@@ -XXX,XX +XXX,XX @@ bool riscv_cpu_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp)
129
130
return false;
71
return false;
131
}
72
}
132
+
73
133
+void riscv_trigger_init(CPURISCVState *env)
74
-void riscv_trigger_init(CPURISCVState *env)
75
+void riscv_trigger_realize(CPURISCVState *env)
134
+{
76
+{
135
+ target_ulong type2 = trigger_type(env, TRIGGER_TYPE_AD_MATCH);
136
+ int i;
77
+ int i;
137
+
78
+
138
+ /* type 2 triggers */
79
+ for (i = 0; i < RV_MAX_TRIGGERS; i++) {
139
+ for (i = 0; i < TRIGGER_TYPE2_NUM; i++) {
80
+ env->itrigger_timer[i] = timer_new_ns(QEMU_CLOCK_VIRTUAL,
140
+ /*
81
+ riscv_itrigger_timer_cb, env);
141
+ * type = TRIGGER_TYPE_AD_MATCH
142
+ * dmode = 0 (both debug and M-mode can write tdata)
143
+ * maskmax = 0 (unimplemented, always 0)
144
+ * sizehi = 0 (match against any size, RV64 only)
145
+ * hit = 0 (unimplemented, always 0)
146
+ * select = 0 (always 0, perform match on address)
147
+ * timing = 0 (always 0, trigger before instruction)
148
+ * sizelo = 0 (match against any size)
149
+ * action = 0 (always 0, raise a breakpoint exception)
150
+ * chain = 0 (unimplemented, always 0)
151
+ * match = 0 (always 0, when any compare value equals tdata2)
152
+ */
153
+ env->type2_trig[i].mcontrol = type2;
154
+ env->type2_trig[i].maddress = 0;
155
+ env->type2_trig[i].bp = NULL;
156
+ env->type2_trig[i].wp = NULL;
157
+ }
82
+ }
158
+}
83
+}
84
+
85
+void riscv_trigger_reset_hold(CPURISCVState *env)
86
{
87
target_ulong tdata1 = build_tdata1(env, TRIGGER_TYPE_AD_MATCH, 0, 0);
88
int i;
89
@@ -XXX,XX +XXX,XX @@ void riscv_trigger_init(CPURISCVState *env)
90
env->tdata3[i] = 0;
91
env->cpu_breakpoint[i] = NULL;
92
env->cpu_watchpoint[i] = NULL;
93
- env->itrigger_timer[i] = timer_new_ns(QEMU_CLOCK_VIRTUAL,
94
- riscv_itrigger_timer_cb, env);
95
+ timer_del(env->itrigger_timer[i]);
96
}
97
}
159
--
98
--
160
2.35.1
99
2.41.0
100
101
1
From: Nicolas Pitre <nico@fluxnic.net>
1
From: Leon Schuermann <leons@opentitan.org>
2
2
3
There is an overflow with the current code where a pmpaddr value of
3
When the rule-lock bypass (RLB) bit is set in the mseccfg CSR, the PMP
4
0x1fffffff is decoded as sa=0 and ea=0 whereas it should be sa=0 and
4
configuration lock bits must not apply. While this behavior is
5
ea=0xffffffff.
5
implemented for the pmpcfgX CSRs, this bit is not respected for
6
changes to the pmpaddrX CSRs. This patch ensures that pmpaddrX CSR
7
writes work even on locked regions when the global rule-lock bypass is
8
enabled.
6
9
7
Fix that by simplifying the computation. There is in fact no need for
10
Signed-off-by: Leon Schuermann <leons@opentitan.org>
8
ctz64() nor a special case for -1 to achieve proper results.
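A quick stand-alone check (hypothetical test program, not part of the patch) reproduces the example above with the simplified decode: pmpaddr 0x1fffffff now gives sa=0 and ea=0xffffffff.

    /* Hypothetical stand-alone check of the simplified NAPOT decode. */
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t a = 0x1fffffff;     /* pmpaddr value from the commit message */

        a = (a << 2) | 0x3;          /* 0x7fffffff: address bits plus low 1s  */
        uint64_t sa = a & (a + 1);   /* clear the trailing run of 1s -> 0x0   */
        uint64_t ea = a | (a + 1);   /* extend it by one bit -> 0xffffffff    */

        printf("sa=0x%" PRIx64 " ea=0x%" PRIx64 "\n", sa, ea);
        return 0;
    }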
11
Reviewed-by: Mayuresh Chitale <mchitale@ventanamicro.com>
9
10
Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
11
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
12
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
12
Message-Id: <rq81o86n-17ps-92no-p65o-79o88476266@syhkavp.arg>
13
Message-ID: <20230829215046.1430463-1-leon@is.currently.online>
13
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
14
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
14
---
15
---
15
target/riscv/pmp.c | 14 +++-----------
16
target/riscv/pmp.c | 4 ++++
16
1 file changed, 3 insertions(+), 11 deletions(-)
17
1 file changed, 4 insertions(+)
17
18
18
diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
19
diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
19
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
20
--- a/target/riscv/pmp.c
21
--- a/target/riscv/pmp.c
21
+++ b/target/riscv/pmp.c
22
+++ b/target/riscv/pmp.c
22
@@ -XXX,XX +XXX,XX @@ static void pmp_decode_napot(target_ulong a, target_ulong *sa, target_ulong *ea)
23
@@ -XXX,XX +XXX,XX @@ static inline uint8_t pmp_get_a_field(uint8_t cfg)
23
0111...1111 2^(XLEN+2)-byte NAPOT range
24
*/
24
1111...1111 Reserved
25
static inline int pmp_is_locked(CPURISCVState *env, uint32_t pmp_index)
25
*/
26
{
26
- if (a == -1) {
27
+ /* mseccfg.RLB is set */
27
- *sa = 0u;
28
+ if (MSECCFG_RLB_ISSET(env)) {
28
- *ea = -1;
29
+ return 0;
29
- return;
30
+ }
30
- } else {
31
31
- target_ulong t1 = ctz64(~a);
32
if (env->pmp_state.pmp[pmp_index].cfg_reg & PMP_LOCK) {
32
- target_ulong base = (a & ~(((target_ulong)1 << t1) - 1)) << 2;
33
return 1;
33
- target_ulong range = ((target_ulong)1 << (t1 + 3)) - 1;
34
- *sa = base;
35
- *ea = base + range;
36
- }
37
+ a = (a << 2) | 0x3;
38
+ *sa = a & (a + 1);
39
+ *ea = a | (a + 1);
40
}
41
42
void pmp_update_rule_addr(CPURISCVState *env, uint32_t pmp_index)
43
--
34
--
44
2.35.1
35
2.41.0
1
From: Atish Patra <atishp@rivosinc.com>
1
From: Tommy Wu <tommy.wu@sifive.com>
2
2
3
RISC-V privileged specification v1.12 introduced a mconfigptr
3
According to the new spec, when vsiselect has a reserved value, attempts
4
which will hold the physical address of a configuration data
4
from M-mode or HS-mode to access vsireg, or from VS-mode to access
5
structure. As Qemu doesn't have a configuration data structure,
5
sireg, should preferably raise an illegal instruction exception.
6
is read as zero which is valid as per the priv spec.
7
6
8
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
7
Signed-off-by: Tommy Wu <tommy.wu@sifive.com>
9
Signed-off-by: Atish Patra <atishp@rivosinc.com>
8
Reviewed-by: Frank Chang <frank.chang@sifive.com>
10
Message-Id: <20220303185440.512391-5-atishp@rivosinc.com>
9
Message-ID: <20230816061647.600672-1-tommy.wu@sifive.com>
11
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
10
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
12
---
11
---
13
target/riscv/cpu_bits.h | 1 +
12
target/riscv/csr.c | 7 +++++--
14
target/riscv/csr.c | 2 ++
13
1 file changed, 5 insertions(+), 2 deletions(-)
15
2 files changed, 3 insertions(+)
16
14
17
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/riscv/cpu_bits.h
20
+++ b/target/riscv/cpu_bits.h
21
@@ -XXX,XX +XXX,XX @@
22
#define CSR_MARCHID 0xf12
23
#define CSR_MIMPID 0xf13
24
#define CSR_MHARTID 0xf14
25
+#define CSR_MCONFIGPTR 0xf15
26
27
/* Machine Trap Setup */
28
#define CSR_MSTATUS 0x300
29
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
15
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
30
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
31
--- a/target/riscv/csr.c
17
--- a/target/riscv/csr.c
32
+++ b/target/riscv/csr.c
18
+++ b/target/riscv/csr.c
33
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
19
@@ -XXX,XX +XXX,XX @@ static int rmw_iprio(target_ulong xlen,
34
[CSR_MIMPID] = { "mimpid", any, read_zero },
20
static int rmw_xireg(CPURISCVState *env, int csrno, target_ulong *val,
35
[CSR_MHARTID] = { "mhartid", any, read_mhartid },
21
target_ulong new_val, target_ulong wr_mask)
36
22
{
37
+ [CSR_MCONFIGPTR] = { "mconfigptr", any, read_zero,
23
- bool virt;
38
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
24
+ bool virt, isel_reserved;
39
/* Machine Trap Setup */
25
uint8_t *iprio;
40
[CSR_MSTATUS] = { "mstatus", any, read_mstatus, write_mstatus, NULL,
26
int ret = -EINVAL;
41
read_mstatus_i128 },
27
target_ulong priv, isel, vgein;
28
@@ -XXX,XX +XXX,XX @@ static int rmw_xireg(CPURISCVState *env, int csrno, target_ulong *val,
29
30
/* Decode register details from CSR number */
31
virt = false;
32
+ isel_reserved = false;
33
switch (csrno) {
34
case CSR_MIREG:
35
iprio = env->miprio;
36
@@ -XXX,XX +XXX,XX @@ static int rmw_xireg(CPURISCVState *env, int csrno, target_ulong *val,
37
riscv_cpu_mxl_bits(env)),
38
val, new_val, wr_mask);
39
}
40
+ } else {
41
+ isel_reserved = true;
42
}
43
44
done:
45
if (ret) {
46
- return (env->virt_enabled && virt) ?
47
+ return (env->virt_enabled && virt && !isel_reserved) ?
48
RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
49
}
50
return RISCV_EXCP_NONE;
42
--
51
--
43
2.35.1
52
2.41.0
1
From: Atish Patra <atishp@rivosinc.com>
1
From: Nikita Shubin <n.shubin@yadro.com>
2
2
3
To allow/disallow the CSR access based on the privilege spec, a new field
3
As per ISA:
4
in the csr_ops is introduced. It also adds the privileged specification
5
version (v1.12) for the CSRs introduced in v1.12. This includes the
6
new ratified extensions such as Vector, Hypervisor and secconfig CSR.
7
However, it doesn't enforce the privilege version in this commit.
8
4
5
"For CSRRWI, if rd=x0, then the instruction shall not read the CSR and
6
shall not cause any of the side effects that might occur on a CSR read."
7
8
trans_csrrwi() and trans_csrrw() call do_csrw() if rd=x0, do_csrw() calls
9
riscv_csrrw_do64(), via helper_csrw() passing NULL as *ret_value.
10
11
Signed-off-by: Nikita Shubin <n.shubin@yadro.com>
9
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
12
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
10
Signed-off-by: Atish Patra <atishp@rivosinc.com>
13
Message-ID: <20230808090914.17634-1-nikita.shubin@maquefel.me>
11
Message-Id: <20220303185440.512391-4-atishp@rivosinc.com>
12
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
14
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
13
---
15
---
14
target/riscv/cpu.h | 2 +
16
target/riscv/csr.c | 24 +++++++++++++++---------
15
target/riscv/csr.c | 103 ++++++++++++++++++++++++++++++---------------
17
1 file changed, 15 insertions(+), 9 deletions(-)
16
2 files changed, 70 insertions(+), 35 deletions(-)
17
18
18
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/riscv/cpu.h
21
+++ b/target/riscv/cpu.h
22
@@ -XXX,XX +XXX,XX @@ typedef struct {
23
riscv_csr_op_fn op;
24
riscv_csr_read128_fn read128;
25
riscv_csr_write128_fn write128;
26
+ /* The default priv spec version should be PRIV_VERSION_1_10_0 (i.e 0) */
27
+ uint32_t min_priv_ver;
28
} riscv_csr_operations;
29
30
/* CSR function table constants */
31
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
19
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
32
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
33
--- a/target/riscv/csr.c
21
--- a/target/riscv/csr.c
34
+++ b/target/riscv/csr.c
22
+++ b/target/riscv/csr.c
35
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
23
@@ -XXX,XX +XXX,XX @@ static RISCVException riscv_csrrw_do64(CPURISCVState *env, int csrno,
36
[CSR_FRM] = { "frm", fs, read_frm, write_frm },
24
target_ulong write_mask)
37
[CSR_FCSR] = { "fcsr", fs, read_fcsr, write_fcsr },
25
{
38
/* Vector CSRs */
26
RISCVException ret;
39
- [CSR_VSTART] = { "vstart", vs, read_vstart, write_vstart },
27
- target_ulong old_value;
40
- [CSR_VXSAT] = { "vxsat", vs, read_vxsat, write_vxsat },
28
+ target_ulong old_value = 0;
41
- [CSR_VXRM] = { "vxrm", vs, read_vxrm, write_vxrm },
29
42
- [CSR_VCSR] = { "vcsr", vs, read_vcsr, write_vcsr },
30
/* execute combined read/write operation if it exists */
43
- [CSR_VL] = { "vl", vs, read_vl },
31
if (csr_ops[csrno].op) {
44
- [CSR_VTYPE] = { "vtype", vs, read_vtype },
32
return csr_ops[csrno].op(env, csrno, ret_value, new_value, write_mask);
45
- [CSR_VLENB] = { "vlenb", vs, read_vlenb },
33
}
46
+ [CSR_VSTART] = { "vstart", vs, read_vstart, write_vstart,
34
47
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
35
- /* if no accessor exists then return failure */
48
+ [CSR_VXSAT] = { "vxsat", vs, read_vxsat, write_vxsat,
36
- if (!csr_ops[csrno].read) {
49
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
37
- return RISCV_EXCP_ILLEGAL_INST;
50
+ [CSR_VXRM] = { "vxrm", vs, read_vxrm, write_vxrm,
38
- }
51
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
39
- /* read old value */
52
+ [CSR_VCSR] = { "vcsr", vs, read_vcsr, write_vcsr,
40
- ret = csr_ops[csrno].read(env, csrno, &old_value);
53
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
41
- if (ret != RISCV_EXCP_NONE) {
54
+ [CSR_VL] = { "vl", vs, read_vl,
42
- return ret;
55
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
43
+ /*
56
+ [CSR_VTYPE] = { "vtype", vs, read_vtype,
44
+ * ret_value == NULL means that rd=x0 and we're coming from helper_csrw()
57
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
45
+ * and we can't throw side effects caused by CSR reads.
58
+ [CSR_VLENB] = { "vlenb", vs, read_vlenb,
46
+ */
59
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
47
+ if (ret_value) {
60
/* User Timers and Counters */
48
+ /* if no accessor exists then return failure */
61
[CSR_CYCLE] = { "cycle", ctr, read_instret },
49
+ if (!csr_ops[csrno].read) {
62
[CSR_INSTRET] = { "instret", ctr, read_instret },
50
+ return RISCV_EXCP_ILLEGAL_INST;
63
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
51
+ }
64
[CSR_SIEH] = { "sieh", aia_smode32, NULL, NULL, rmw_sieh },
52
+ /* read old value */
65
[CSR_SIPH] = { "siph", aia_smode32, NULL, NULL, rmw_siph },
53
+ ret = csr_ops[csrno].read(env, csrno, &old_value);
66
54
+ if (ret != RISCV_EXCP_NONE) {
67
- [CSR_HSTATUS] = { "hstatus", hmode, read_hstatus, write_hstatus },
55
+ return ret;
68
- [CSR_HEDELEG] = { "hedeleg", hmode, read_hedeleg, write_hedeleg },
56
+ }
69
- [CSR_HIDELEG] = { "hideleg", hmode, NULL, NULL, rmw_hideleg },
57
}
70
- [CSR_HVIP] = { "hvip", hmode, NULL, NULL, rmw_hvip },
58
71
- [CSR_HIP] = { "hip", hmode, NULL, NULL, rmw_hip },
59
/* write value if writable and write mask set, otherwise drop writes */
72
- [CSR_HIE] = { "hie", hmode, NULL, NULL, rmw_hie },
73
- [CSR_HCOUNTEREN] = { "hcounteren", hmode, read_hcounteren, write_hcounteren },
74
- [CSR_HGEIE] = { "hgeie", hmode, read_hgeie, write_hgeie },
75
- [CSR_HTVAL] = { "htval", hmode, read_htval, write_htval },
76
- [CSR_HTINST] = { "htinst", hmode, read_htinst, write_htinst },
77
- [CSR_HGEIP] = { "hgeip", hmode, read_hgeip, NULL },
78
- [CSR_HGATP] = { "hgatp", hmode, read_hgatp, write_hgatp },
79
- [CSR_HTIMEDELTA] = { "htimedelta", hmode, read_htimedelta, write_htimedelta },
80
- [CSR_HTIMEDELTAH] = { "htimedeltah", hmode32, read_htimedeltah, write_htimedeltah },
81
-
82
- [CSR_VSSTATUS] = { "vsstatus", hmode, read_vsstatus, write_vsstatus },
83
- [CSR_VSIP] = { "vsip", hmode, NULL, NULL, rmw_vsip },
84
- [CSR_VSIE] = { "vsie", hmode, NULL, NULL, rmw_vsie },
85
- [CSR_VSTVEC] = { "vstvec", hmode, read_vstvec, write_vstvec },
86
- [CSR_VSSCRATCH] = { "vsscratch", hmode, read_vsscratch, write_vsscratch },
87
- [CSR_VSEPC] = { "vsepc", hmode, read_vsepc, write_vsepc },
88
- [CSR_VSCAUSE] = { "vscause", hmode, read_vscause, write_vscause },
89
- [CSR_VSTVAL] = { "vstval", hmode, read_vstval, write_vstval },
90
- [CSR_VSATP] = { "vsatp", hmode, read_vsatp, write_vsatp },
91
-
92
- [CSR_MTVAL2] = { "mtval2", hmode, read_mtval2, write_mtval2 },
93
- [CSR_MTINST] = { "mtinst", hmode, read_mtinst, write_mtinst },
94
+ [CSR_HSTATUS] = { "hstatus", hmode, read_hstatus, write_hstatus,
95
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
96
+ [CSR_HEDELEG] = { "hedeleg", hmode, read_hedeleg, write_hedeleg,
97
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
98
+ [CSR_HIDELEG] = { "hideleg", hmode, NULL, NULL, rmw_hideleg,
99
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
100
+ [CSR_HVIP] = { "hvip", hmode, NULL, NULL, rmw_hvip,
101
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
102
+ [CSR_HIP] = { "hip", hmode, NULL, NULL, rmw_hip,
103
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
104
+ [CSR_HIE] = { "hie", hmode, NULL, NULL, rmw_hie,
105
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
106
+ [CSR_HCOUNTEREN] = { "hcounteren", hmode, read_hcounteren, write_hcounteren,
107
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
108
+ [CSR_HGEIE] = { "hgeie", hmode, read_hgeie, write_hgeie,
109
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
110
+ [CSR_HTVAL] = { "htval", hmode, read_htval, write_htval,
111
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
112
+ [CSR_HTINST] = { "htinst", hmode, read_htinst, write_htinst,
113
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
114
+ [CSR_HGEIP] = { "hgeip", hmode, read_hgeip,
115
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
116
+ [CSR_HGATP] = { "hgatp", hmode, read_hgatp, write_hgatp,
117
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
118
+ [CSR_HTIMEDELTA] = { "htimedelta", hmode, read_htimedelta, write_htimedelta,
119
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
120
+ [CSR_HTIMEDELTAH] = { "htimedeltah", hmode32, read_htimedeltah, write_htimedeltah,
121
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
122
+
123
+ [CSR_VSSTATUS] = { "vsstatus", hmode, read_vsstatus, write_vsstatus,
124
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
125
+ [CSR_VSIP] = { "vsip", hmode, NULL, NULL, rmw_vsip,
126
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
127
+ [CSR_VSIE] = { "vsie", hmode, NULL, NULL, rmw_vsie ,
128
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
129
+ [CSR_VSTVEC] = { "vstvec", hmode, read_vstvec, write_vstvec,
130
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
131
+ [CSR_VSSCRATCH] = { "vsscratch", hmode, read_vsscratch, write_vsscratch,
132
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
133
+ [CSR_VSEPC] = { "vsepc", hmode, read_vsepc, write_vsepc,
134
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
135
+ [CSR_VSCAUSE] = { "vscause", hmode, read_vscause, write_vscause,
136
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
137
+ [CSR_VSTVAL] = { "vstval", hmode, read_vstval, write_vstval,
138
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
139
+ [CSR_VSATP] = { "vsatp", hmode, read_vsatp, write_vsatp,
140
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
141
+
142
+ [CSR_MTVAL2] = { "mtval2", hmode, read_mtval2, write_mtval2,
143
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
144
+ [CSR_MTINST] = { "mtinst", hmode, read_mtinst, write_mtinst,
145
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
146
147
/* Virtual Interrupts and Interrupt Priorities (H-extension with AIA) */
148
[CSR_HVIEN] = { "hvien", aia_hmode, read_zero, write_ignore },
149
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
150
[CSR_VSIPH] = { "vsiph", aia_hmode32, NULL, NULL, rmw_vsiph },
151
152
/* Physical Memory Protection */
153
- [CSR_MSECCFG] = { "mseccfg", epmp, read_mseccfg, write_mseccfg },
154
+ [CSR_MSECCFG] = { "mseccfg", epmp, read_mseccfg, write_mseccfg,
155
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
156
[CSR_PMPCFG0] = { "pmpcfg0", pmp, read_pmpcfg, write_pmpcfg },
157
[CSR_PMPCFG1] = { "pmpcfg1", pmp, read_pmpcfg, write_pmpcfg },
158
[CSR_PMPCFG2] = { "pmpcfg2", pmp, read_pmpcfg, write_pmpcfg },
159
--
60
--
160
2.35.1
61
2.41.0