From: Alistair Francis <alistair.francis@wdc.com>

The following changes since commit d70075373af51b6aa1d637962c962120e201fc98:

  Merge tag 'for_upstream' of git://git.kernel.org/pub/scm/virt/kvm/mst/qemu into staging (2022-01-07 17:24:24 -0800)

are available in the Git repository at:

  git@github.com:alistair23/qemu.git tags/pull-riscv-to-apply-20220108

for you to fetch changes up to 48eaeb56debf91817dea00a2cd9c1f6c986eb531:

  target/riscv: Implement the stval/mtval illegal instruction (2022-01-08 15:46:10 +1000)

----------------------------------------------------------------
Second RISC-V PR for QEMU 7.0

 - Fix illegal instruction when PMP is disabled
 - SiFive PDMA 64-bit support
 - SiFive PLIC cleanups
 - Mark Hypervisor extension as non experimental
 - Enable Hypervisor extension by default
 - Support 32 cores on the virt machine
 - Corrections for the Vector extension
 - Experimental support for 128-bit CPUs
 - stval and mtval support for illegal instructions

----------------------------------------------------------------
Alistair Francis (11):
      hw/intc: sifive_plic: Add a reset function
      hw/intc: sifive_plic: Cleanup the write function
      hw/intc: sifive_plic: Cleanup the read function
      hw/intc: sifive_plic: Cleanup remaining functions
      target/riscv: Mark the Hypervisor extension as non experimental
      target/riscv: Enable the Hypervisor extension by default
      hw/riscv: Use error_fatal for SoC realisation
      hw/riscv: virt: Allow support for 32 cores
      target/riscv: Set the opcode in DisasContext
      target/riscv: Fixup setting GVA
      target/riscv: Implement the stval/mtval illegal instruction

Bin Meng (1):
      roms/opensbi: Upgrade from v0.9 to v1.0

Frank Chang (3):
      target/riscv: rvv-1.0: Call the correct RVF/RVD check function for widening fp insns
      target/riscv: rvv-1.0: Call the correct RVF/RVD check function for widening fp/int type-convert insns
      target/riscv: rvv-1.0: Call the correct RVF/RVD check function for narrowing fp/int type-convert insns

Frédéric Pétrot (18):
      exec/memop: Adding signedness to quad definitions
      exec/memop: Adding signed quad and octo defines
      qemu/int128: addition of div/rem 128-bit operations
      target/riscv: additional macros to check instruction support
      target/riscv: separation of bitwise logic and arithmetic helpers
      target/riscv: array for the 64 upper bits of 128-bit registers
      target/riscv: setup everything for rv64 to support rv128 execution
      target/riscv: moving some insns close to similar insns
      target/riscv: accessors to registers upper part and 128-bit load/store
      target/riscv: support for 128-bit bitwise instructions
      target/riscv: support for 128-bit U-type instructions
      target/riscv: support for 128-bit shift instructions
      target/riscv: support for 128-bit arithmetic instructions
      target/riscv: support for 128-bit M extension
      target/riscv: adding high part of some csrs
      target/riscv: helper functions to wrap calls to 128-bit csr insns
      target/riscv: modification of the trans_csrxx for 128-bit support
      target/riscv: actual functions to realize crs 128-bit insns

Jim Shu (2):
      hw/dma: sifive_pdma: support high 32-bit access of 64-bit register
      hw/dma: sifive_pdma: permit 4/8-byte access size of PDMA registers

Nikita Shubin (1):
      target/riscv/pmp: fix no pmp illegal intrs

Philipp Tomsich (1):
      target/riscv: Fix position of 'experimental' comment

 include/disas/dis-asm.h | 1 +
 include/exec/memop.h | 15 +-
 include/hw/riscv/virt.h | 2 +-
 include/qemu/int128.h | 27 +
 include/tcg/tcg-op.h | 4 +-
 target/arm/translate-a32.h | 4 +-
 target/riscv/cpu.h | 24 +
 target/riscv/cpu_bits.h | 3 +
 target/riscv/helper.h | 9 +
 target/riscv/insn16.decode | 27 +-
 target/riscv/insn32.decode | 25 +
 accel/tcg/cputlb.c | 30 +-
 accel/tcg/user-exec.c | 8 +-
 disas/riscv.c | 5 +
 hw/dma/sifive_pdma.c | 181 ++++++-
 hw/intc/sifive_plic.c | 254 +++------
 hw/riscv/microchip_pfsoc.c | 2 +-
 hw/riscv/opentitan.c | 2 +-
 hw/riscv/sifive_e.c | 2 +-
 hw/riscv/sifive_u.c | 2 +-
 target/alpha/translate.c | 32 +-
 target/arm/helper-a64.c | 8 +-
 target/arm/translate-a64.c | 8 +-
 target/arm/translate-neon.c | 6 +-
 target/arm/translate-sve.c | 10 +-
 target/arm/translate-vfp.c | 8 +-
 target/arm/translate.c | 2 +-
 target/cris/translate.c | 2 +-
 target/hppa/translate.c | 4 +-
 target/i386/tcg/mem_helper.c | 2 +-
 target/i386/tcg/translate.c | 36 +-
 target/m68k/op_helper.c | 2 +-
 target/mips/tcg/translate.c | 58 +-
 target/mips/tcg/tx79_translate.c | 8 +-
 target/ppc/translate.c | 32 +-
 target/riscv/cpu.c | 34 +-
 target/riscv/cpu_helper.c | 24 +-
 target/riscv/csr.c | 194 ++++++-
 target/riscv/gdbstub.c | 5 +
 target/riscv/m128_helper.c | 109 ++++
 target/riscv/machine.c | 22 +
 target/riscv/op_helper.c | 47 +-
 target/riscv/translate.c | 257 +++++++--
 target/s390x/tcg/mem_helper.c | 8 +-
 target/s390x/tcg/translate.c | 8 +-
 target/sh4/translate.c | 12 +-
 target/sparc/translate.c | 36 +-
 target/tricore/translate.c | 4 +-
 target/xtensa/translate.c | 4 +-
 tcg/tcg.c | 4 +-
 tcg/tci.c | 16 +-
 util/int128.c | 147 +++++
 accel/tcg/ldst_common.c.inc | 8 +-
 target/mips/tcg/micromips_translate.c.inc | 10 +-
 target/ppc/translate/fixedpoint-impl.c.inc | 22 +-
 target/ppc/translate/fp-impl.c.inc | 4 +-
 target/ppc/translate/vsx-impl.c.inc | 42 +-
 target/riscv/insn_trans/trans_rva.c.inc | 22 +-
 target/riscv/insn_trans/trans_rvb.c.inc | 48 +-
 target/riscv/insn_trans/trans_rvd.c.inc | 4 +-
 target/riscv/insn_trans/trans_rvh.c.inc | 4 +-
 target/riscv/insn_trans/trans_rvi.c.inc | 716 +++++++++++++++++++++----
 target/riscv/insn_trans/trans_rvm.c.inc | 192 ++++++-
 target/riscv/insn_trans/trans_rvv.c.inc | 78 ++-
 target/s390x/tcg/translate_vx.c.inc | 18 +-
 tcg/aarch64/tcg-target.c.inc | 2 +-
 tcg/arm/tcg-target.c.inc | 10 +-
 tcg/i386/tcg-target.c.inc | 12 +-
 tcg/mips/tcg-target.c.inc | 12 +-
 tcg/ppc/tcg-target.c.inc | 16 +-
 tcg/riscv/tcg-target.c.inc | 6 +-
 tcg/s390x/tcg-target.c.inc | 18 +-
 tcg/sparc/tcg-target.c.inc | 16 +-
 pc-bios/opensbi-riscv32-generic-fw_dynamic.bin | Bin 78680 -> 108504 bytes
 pc-bios/opensbi-riscv32-generic-fw_dynamic.elf | Bin 727464 -> 838904 bytes
 pc-bios/opensbi-riscv64-generic-fw_dynamic.bin | Bin 75096 -> 105296 bytes
 pc-bios/opensbi-riscv64-generic-fw_dynamic.elf | Bin 781264 -> 934696 bytes
 roms/opensbi | 2 +-
 target/riscv/meson.build | 1 +
 target/s390x/tcg/insn-data.def | 28 +-
 util/meson.build | 1 +
 81 files changed, 2318 insertions(+), 750 deletions(-)
 create mode 100644 target/riscv/m128_helper.c
 create mode 100644 util/int128.c
From: Nikita Shubin <n.shubin@yadro.com>

As per the privilege specification, any access from S/U mode should fail
if no pmp region is configured and pmp is present, otherwise access
should succeed.

Fixes: d102f19a208 (target/riscv/pmp: Raise exception if no PMP entry is configured)
Signed-off-by: Nikita Shubin <n.shubin@yadro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20211214092659.15709-1-nikita.shubin@maquefel.me
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/op_helper.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/riscv/op_helper.c b/target/riscv/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/op_helper.c
+++ b/target/riscv/op_helper.c
@@ -XXX,XX +XXX,XX @@ target_ulong helper_mret(CPURISCVState *env, target_ulong cpu_pc_deb)
     uint64_t mstatus = env->mstatus;
     target_ulong prev_priv = get_field(mstatus, MSTATUS_MPP);

-    if (!pmp_get_num_rules(env) && (prev_priv != PRV_M)) {
+    if (riscv_feature(env, RISCV_FEATURE_PMP) &&
+        !pmp_get_num_rules(env) && (prev_priv != PRV_M)) {
         riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
     }

--
2.31.1

From: Jim Shu <jim.shu@sifive.com>

Real PDMA supports high 32-bit read/write memory access of its 64-bit
registers.

The following results are from testing PDMA in U-Boot on the Unmatched
board:

1. Real PDMA allows high 32-bit read/write to 64-bit registers.
=> mw.l 0x3000000 0x0        <= Disclaim channel 0
=> mw.l 0x3000000 0x1        <= Claim channel 0
=> mw.l 0x3000010 0x80000000 <= Write low 32-bit NextDest (NextDest = 0x280000000)
=> mw.l 0x3000014 0x2        <= Write high 32-bit NextDest
=> md.l 0x3000010 1          <= Dump low 32-bit NextDest
03000010: 80000000
=> md.l 0x3000014 1          <= Dump high 32-bit NextDest
03000014: 00000002
=> mw.l 0x3000018 0x80001000 <= Write low 32-bit NextSrc (NextSrc = 0x280001000)
=> mw.l 0x300001c 0x2        <= Write high 32-bit NextSrc
=> md.l 0x3000018 1          <= Dump low 32-bit NextSrc
03000010: 80001000
=> md.l 0x300001c 1          <= Dump high 32-bit NextSrc
03000014: 00000002

2. PDMA transfer from 0x280001000 to 0x280000000 is OK.
=> mw.q 0x3000008 0x4        <= NextBytes = 4
=> mw.l 0x3000004 0x22000000 <= wsize = rsize = 2 (2^2 = 4 bytes)
=> mw.l 0x280000000 0x87654321 <= Fill test data to dst
=> mw.l 0x280001000 0x12345678 <= Fill test data to src
=> md.l 0x280000000 1; md.l 0x280001000 1 <= Dump src/dst memory contents
280000000: 87654321    !Ce.
280001000: 12345678    xV4.
=> md.l 0x3000000 8          <= Dump PDMA status
03000000: 00000001 22000000 00000004 00000000    ......."........
03000010: 80000000 00000002 80001000 00000002    ................
=> mw.l 0x3000000 0x3        <= Set channel 0 run and claim bits
=> md.l 0x3000000 8          <= Dump PDMA status
03000000: 40000001 22000000 00000004 00000000    ...@..."........
03000010: 80000000 00000002 80001000 00000002    ................
=> md.l 0x280000000 1; md.l 0x280001000 1 <= Dump src/dst memory contents
280000000: 12345678    xV4.
280001000: 12345678    xV4.

Signed-off-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Tested-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: 20220104063408.658169-2-jim.shu@sifive.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/dma/sifive_pdma.c | 177 +++++++++++++++++++++++++++++++++++++------
 1 file changed, 155 insertions(+), 22 deletions(-)

diff --git a/hw/dma/sifive_pdma.c b/hw/dma/sifive_pdma.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/sifive_pdma.c
+++ b/hw/dma/sifive_pdma.c
@@ -XXX,XX +XXX,XX @@ static inline void sifive_pdma_update_irq(SiFivePDMAState *s, int ch)
    s->chan[ch].state = DMA_CHAN_STATE_IDLE;
 }

-static uint64_t sifive_pdma_read(void *opaque, hwaddr offset, unsigned size)
+static uint64_t sifive_pdma_readq(SiFivePDMAState *s, int ch, hwaddr offset)
 {
-    SiFivePDMAState *s = opaque;
-    int ch = SIFIVE_PDMA_CHAN_NO(offset);
     uint64_t val = 0;

-    if (ch >= SIFIVE_PDMA_CHANS) {
-        qemu_log_mask(LOG_GUEST_ERROR, "%s: Invalid channel no %d\n",
-                      __func__, ch);
-        return 0;
+    offset &= 0xfff;
+    switch (offset) {
+    case DMA_NEXT_BYTES:
+        val = s->chan[ch].next_bytes;
+        break;
+    case DMA_NEXT_DST:
+        val = s->chan[ch].next_dst;
+        break;
+    case DMA_NEXT_SRC:
+        val = s->chan[ch].next_src;
+        break;
+    case DMA_EXEC_BYTES:
+        val = s->chan[ch].exec_bytes;
+        break;
+    case DMA_EXEC_DST:
+        val = s->chan[ch].exec_dst;
+        break;
+    case DMA_EXEC_SRC:
+        val = s->chan[ch].exec_src;
+        break;
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Unexpected 64-bit access to 0x%" HWADDR_PRIX "\n",
+                      __func__, offset);
+        break;
     }

+    return val;
+}
+
+static uint32_t sifive_pdma_readl(SiFivePDMAState *s, int ch, hwaddr offset)
+{
+    uint32_t val = 0;
+
     offset &= 0xfff;
     switch (offset) {
     case DMA_CONTROL:
@@ -XXX,XX +XXX,XX @@ static uint64_t sifive_pdma_read(void *opaque, hwaddr offset, unsigned size)
         val = s->chan[ch].next_config;
         break;
     case DMA_NEXT_BYTES:
-        val = s->chan[ch].next_bytes;
+        val = extract64(s->chan[ch].next_bytes, 0, 32);
+        break;
+    case DMA_NEXT_BYTES + 4:
+        val = extract64(s->chan[ch].next_bytes, 32, 32);
         break;
     case DMA_NEXT_DST:
-        val = s->chan[ch].next_dst;
+        val = extract64(s->chan[ch].next_dst, 0, 32);
+        break;
+    case DMA_NEXT_DST + 4:
+        val = extract64(s->chan[ch].next_dst, 32, 32);
         break;
     case DMA_NEXT_SRC:
-        val = s->chan[ch].next_src;
+        val = extract64(s->chan[ch].next_src, 0, 32);
+        break;
+    case DMA_NEXT_SRC + 4:
+        val = extract64(s->chan[ch].next_src, 32, 32);
         break;
     case DMA_EXEC_CONFIG:
         val = s->chan[ch].exec_config;
         break;
     case DMA_EXEC_BYTES:
-        val = s->chan[ch].exec_bytes;
+        val = extract64(s->chan[ch].exec_bytes, 0, 32);
+        break;
+    case DMA_EXEC_BYTES + 4:
+        val = extract64(s->chan[ch].exec_bytes, 32, 32);
         break;
     case DMA_EXEC_DST:
-        val = s->chan[ch].exec_dst;
+        val = extract64(s->chan[ch].exec_dst, 0, 32);
+        break;
+    case DMA_EXEC_DST + 4:
+        val = extract64(s->chan[ch].exec_dst, 32, 32);
         break;
     case DMA_EXEC_SRC:
-        val = s->chan[ch].exec_src;
+        val = extract64(s->chan[ch].exec_src, 0, 32);
+        break;
+    case DMA_EXEC_SRC + 4:
+        val = extract64(s->chan[ch].exec_src, 32, 32);
         break;
     default:
-        qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset 0x%" HWADDR_PRIX "\n",
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Unexpected 32-bit access to 0x%" HWADDR_PRIX "\n",
                       __func__, offset);
         break;
     }
@@ -XXX,XX +XXX,XX @@ static uint64_t sifive_pdma_read(void *opaque, hwaddr offset, unsigned size)
     return val;
 }

-static void sifive_pdma_write(void *opaque, hwaddr offset,
-                              uint64_t value, unsigned size)
+static uint64_t sifive_pdma_read(void *opaque, hwaddr offset, unsigned size)
 {
     SiFivePDMAState *s = opaque;
     int ch = SIFIVE_PDMA_CHAN_NO(offset);
-    bool claimed, run;
+    uint64_t val = 0;

     if (ch >= SIFIVE_PDMA_CHANS) {
         qemu_log_mask(LOG_GUEST_ERROR, "%s: Invalid channel no %d\n",
                       __func__, ch);
-        return;
+        return 0;
+    }
+
+    switch (size) {
+    case 8:
+        val = sifive_pdma_readq(s, ch, offset);
+        break;
+    case 4:
+        val = sifive_pdma_readl(s, ch, offset);
+        break;
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Invalid read size %u to PDMA\n",
+                      __func__, size);
+        return 0;
     }

+    return val;
+}
+
+static void sifive_pdma_writeq(SiFivePDMAState *s, int ch,
+                               hwaddr offset, uint64_t value)
+{
+    offset &= 0xfff;
+    switch (offset) {
+    case DMA_NEXT_BYTES:
+        s->chan[ch].next_bytes = value;
+        break;
+    case DMA_NEXT_DST:
+        s->chan[ch].next_dst = value;
+        break;
+    case DMA_NEXT_SRC:
+        s->chan[ch].next_src = value;
+        break;
+    case DMA_EXEC_BYTES:
+    case DMA_EXEC_DST:
+    case DMA_EXEC_SRC:
+        /* these are read-only registers */
+        break;
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Unexpected 64-bit access to 0x%" HWADDR_PRIX "\n",
+                      __func__, offset);
+        break;
+    }
+}
+
+static void sifive_pdma_writel(SiFivePDMAState *s, int ch,
+                               hwaddr offset, uint32_t value)
+{
+    bool claimed, run;
+
     offset &= 0xfff;
     switch (offset) {
     case DMA_CONTROL:
@@ -XXX,XX +XXX,XX @@ static void sifive_pdma_write(void *opaque, hwaddr offset,
         s->chan[ch].next_config = value;
         break;
     case DMA_NEXT_BYTES:
-        s->chan[ch].next_bytes = value;
+        s->chan[ch].next_bytes =
+            deposit64(s->chan[ch].next_bytes, 0, 32, value);
+        break;
+    case DMA_NEXT_BYTES + 4:
+        s->chan[ch].next_bytes =
+            deposit64(s->chan[ch].next_bytes, 32, 32, value);
         break;
     case DMA_NEXT_DST:
-        s->chan[ch].next_dst = value;
+        s->chan[ch].next_dst = deposit64(s->chan[ch].next_dst, 0, 32, value);
+        break;
+    case DMA_NEXT_DST + 4:
+        s->chan[ch].next_dst = deposit64(s->chan[ch].next_dst, 32, 32, value);
         break;
     case DMA_NEXT_SRC:
-        s->chan[ch].next_src = value;
+        s->chan[ch].next_src = deposit64(s->chan[ch].next_src, 0, 32, value);
+        break;
+    case DMA_NEXT_SRC + 4:
+        s->chan[ch].next_src = deposit64(s->chan[ch].next_src, 32, 32, value);
         break;
     case DMA_EXEC_CONFIG:
     case DMA_EXEC_BYTES:
+    case DMA_EXEC_BYTES + 4:
     case DMA_EXEC_DST:
+    case DMA_EXEC_DST + 4:
     case DMA_EXEC_SRC:
+    case DMA_EXEC_SRC + 4:
         /* these are read-only registers */
         break;
     default:
-        qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset 0x%" HWADDR_PRIX "\n",
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Unexpected 32-bit access to 0x%" HWADDR_PRIX "\n",
                       __func__, offset);
         break;
     }
 }

+static void sifive_pdma_write(void *opaque, hwaddr offset,
+                              uint64_t value, unsigned size)
+{
+    SiFivePDMAState *s = opaque;
+    int ch = SIFIVE_PDMA_CHAN_NO(offset);
+
+    if (ch >= SIFIVE_PDMA_CHANS) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Invalid channel no %d\n",
+                      __func__, ch);
+        return;
+    }
+
+    switch (size) {
+    case 8:
+        sifive_pdma_writeq(s, ch, offset, value);
+        break;
+    case 4:
+        sifive_pdma_writel(s, ch, offset, (uint32_t) value);
+        break;
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Invalid write size %u to PDMA\n",
+                      __func__, size);
+        break;
+    }
+}
+
 static const MemoryRegionOps sifive_pdma_ops = {
     .read = sifive_pdma_read,
     .write = sifive_pdma_write,
--
2.31.1

From: Jim Shu <jim.shu@sifive.com>

It's obvious that PDMA supports 64-bit access of 64-bit registers, and
in the previous commit we confirmed that PDMA supports 32-bit access of
both 32/64-bit registers. Thus, we configure 32/64-bit memory access
of PDMA registers as valid in general.

Signed-off-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Tested-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: 20220104063408.658169-3-jim.shu@sifive.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/dma/sifive_pdma.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/hw/dma/sifive_pdma.c b/hw/dma/sifive_pdma.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/sifive_pdma.c
+++ b/hw/dma/sifive_pdma.c
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps sifive_pdma_ops = {
     .impl = {
         .min_access_size = 4,
         .max_access_size = 8,
+    },
+    .valid = {
+        .min_access_size = 4,
+        .max_access_size = 8,
     }
 };

--
2.31.1

From: Alistair Francis <alistair.francis@wdc.com>

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220105213937.1113508-2-alistair.francis@opensource.wdc.com>
---
 hw/intc/sifive_plic.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/hw/intc/sifive_plic.c b/hw/intc/sifive_plic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/sifive_plic.c
+++ b/hw/intc/sifive_plic.c
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps sifive_plic_ops = {
     }
 };

+static void sifive_plic_reset(DeviceState *dev)
+{
+    SiFivePLICState *s = SIFIVE_PLIC(dev);
+    int i;
+
+    memset(s->source_priority, 0, sizeof(uint32_t) * s->num_sources);
+    memset(s->target_priority, 0, sizeof(uint32_t) * s->num_addrs);
+    memset(s->pending, 0, sizeof(uint32_t) * s->bitfield_words);
+    memset(s->claimed, 0, sizeof(uint32_t) * s->bitfield_words);
+    memset(s->enable, 0, sizeof(uint32_t) * s->num_enables);
+
+    for (i = 0; i < s->num_harts; i++) {
+        qemu_set_irq(s->m_external_irqs[i], 0);
+        qemu_set_irq(s->s_external_irqs[i], 0);
+    }
+}
+
 /*
  * parse PLIC hart/mode address offset config
  *
@@ -XXX,XX +XXX,XX @@ static void sifive_plic_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);

+    dc->reset = sifive_plic_reset;
     device_class_set_props(dc, sifive_plic_properties);
     dc->realize = sifive_plic_realize;
     dc->vmsd = &vmstate_sifive_plic;
--
2.31.1

From: Alistair Francis <alistair.francis@wdc.com>

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220105213937.1113508-3-alistair.francis@opensource.wdc.com>
---
 hw/intc/sifive_plic.c | 76 +++++++++++++++----------------------------
 1 file changed, 27 insertions(+), 49 deletions(-)

diff --git a/hw/intc/sifive_plic.c b/hw/intc/sifive_plic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/sifive_plic.c
+++ b/hw/intc/sifive_plic.c
@@ -XXX,XX +XXX,XX @@

 #define RISCV_DEBUG_PLIC 0

+static bool addr_between(uint32_t addr, uint32_t base, uint32_t num)
+{
+    return addr >= base && addr - base < num;
+}
+
 static PLICMode char_to_mode(char c)
 {
     switch (c) {
@@ -XXX,XX +XXX,XX @@ static void sifive_plic_write(void *opaque, hwaddr addr, uint64_t value,
 {
     SiFivePLICState *plic = opaque;

-    /* writes must be 4 byte words */
-    if ((addr & 0x3) != 0) {
-        goto err;
-    }
-
-    if (addr >= plic->priority_base && /* 4 bytes per source */
-        addr < plic->priority_base + (plic->num_sources << 2))
-    {
+    if (addr_between(addr, plic->priority_base, plic->num_sources << 2)) {
         uint32_t irq = ((addr - plic->priority_base) >> 2) + 1;
+
         plic->source_priority[irq] = value & 7;
-        if (RISCV_DEBUG_PLIC) {
-            qemu_log("plic: write priority: irq=%d priority=%d\n",
-                     irq, plic->source_priority[irq]);
-        }
         sifive_plic_update(plic);
-        return;
-    } else if (addr >= plic->pending_base && /* 1 bit per source */
-               addr < plic->pending_base + (plic->num_sources >> 3))
-    {
+    } else if (addr_between(addr, plic->pending_base,
+                            plic->num_sources >> 3)) {
         qemu_log_mask(LOG_GUEST_ERROR,
                       "%s: invalid pending write: 0x%" HWADDR_PRIx "",
                       __func__, addr);
-        return;
-    } else if (addr >= plic->enable_base && /* 1 bit per source */
-               addr < plic->enable_base + plic->num_addrs * plic->enable_stride)
-    {
+    } else if (addr_between(addr, plic->enable_base,
+                            plic->num_addrs * plic->enable_stride)) {
         uint32_t addrid = (addr - plic->enable_base) / plic->enable_stride;
         uint32_t wordid = (addr & (plic->enable_stride - 1)) >> 2;
+
         if (wordid < plic->bitfield_words) {
             plic->enable[addrid * plic->bitfield_words + wordid] = value;
-            if (RISCV_DEBUG_PLIC) {
-                qemu_log("plic: write enable: hart%d-%c word=%d value=%x\n",
-                         plic->addr_config[addrid].hartid,
-                         mode_to_char(plic->addr_config[addrid].mode), wordid,
-                         plic->enable[addrid * plic->bitfield_words + wordid]);
-            }
-            return;
+        } else {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: Invalid enable write 0x%" HWADDR_PRIx "\n",
+                          __func__, addr);
         }
-    } else if (addr >= plic->context_base && /* 4 bytes per reg */
-               addr < plic->context_base + plic->num_addrs * plic->context_stride)
-    {
+    } else if (addr_between(addr, plic->context_base,
+                            plic->num_addrs * plic->context_stride)) {
         uint32_t addrid = (addr - plic->context_base) / plic->context_stride;
         uint32_t contextid = (addr & (plic->context_stride - 1));
+
         if (contextid == 0) {
-            if (RISCV_DEBUG_PLIC) {
-                qemu_log("plic: write priority: hart%d-%c priority=%x\n",
-                         plic->addr_config[addrid].hartid,
-                         mode_to_char(plic->addr_config[addrid].mode),
-                         plic->target_priority[addrid]);
-            }
             if (value <= plic->num_priorities) {
                 plic->target_priority[addrid] = value;
                 sifive_plic_update(plic);
             }
-            return;
         } else if (contextid == 4) {
-            if (RISCV_DEBUG_PLIC) {
-                qemu_log("plic: write claim: hart%d-%c irq=%x\n",
-                         plic->addr_config[addrid].hartid,
-                         mode_to_char(plic->addr_config[addrid].mode),
-                         (uint32_t)value);
-            }
             if (value < plic->num_sources) {
                 sifive_plic_set_claimed(plic, value, false);
                 sifive_plic_update(plic);
             }
-            return;
+        } else {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: Invalid context write 0x%" HWADDR_PRIx "\n",
+                          __func__, addr);
         }
+    } else {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Invalid register write 0x%" HWADDR_PRIx "\n",
+                      __func__, addr);
     }
-
-err:
-    qemu_log_mask(LOG_GUEST_ERROR,
-                  "%s: Invalid register write 0x%" HWADDR_PRIx "\n",
-                  __func__, addr);
 }

 static const MemoryRegionOps sifive_plic_ops = {
--
2.31.1

From: Alistair Francis <alistair.francis@wdc.com>

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220105213937.1113508-4-alistair.francis@opensource.wdc.com>
---
 hw/intc/sifive_plic.c | 55 +++++++++----------------------------------
 1 file changed, 11 insertions(+), 44 deletions(-)

diff --git a/hw/intc/sifive_plic.c b/hw/intc/sifive_plic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/sifive_plic.c
+++ b/hw/intc/sifive_plic.c
@@ -XXX,XX +XXX,XX @@ static uint64_t sifive_plic_read(void *opaque, hwaddr addr, unsigned size)
 {
     SiFivePLICState *plic = opaque;

-    /* writes must be 4 byte words */
-    if ((addr & 0x3) != 0) {
-        goto err;
-    }
-
-    if (addr >= plic->priority_base && /* 4 bytes per source */
-        addr < plic->priority_base + (plic->num_sources << 2))
-    {
+    if (addr_between(addr, plic->priority_base, plic->num_sources << 2)) {
         uint32_t irq = ((addr - plic->priority_base) >> 2) + 1;
-        if (RISCV_DEBUG_PLIC) {
-            qemu_log("plic: read priority: irq=%d priority=%d\n",
-                     irq, plic->source_priority[irq]);
-        }
+
         return plic->source_priority[irq];
-    } else if (addr >= plic->pending_base && /* 1 bit per source */
-               addr < plic->pending_base + (plic->num_sources >> 3))
-    {
+    } else if (addr_between(addr, plic->pending_base, plic->num_sources >> 3)) {
         uint32_t word = (addr - plic->pending_base) >> 2;
-        if (RISCV_DEBUG_PLIC) {
-            qemu_log("plic: read pending: word=%d value=%d\n",
-                     word, plic->pending[word]);
-        }
+
         return plic->pending[word];
-    } else if (addr >= plic->enable_base && /* 1 bit per source */
-               addr < plic->enable_base + plic->num_addrs * plic->enable_stride)
-    {
+    } else if (addr_between(addr, plic->enable_base,
+                            plic->num_addrs * plic->enable_stride)) {
         uint32_t addrid = (addr - plic->enable_base) / plic->enable_stride;
         uint32_t wordid = (addr & (plic->enable_stride - 1)) >> 2;
+
         if (wordid < plic->bitfield_words) {
-            if (RISCV_DEBUG_PLIC) {
-                qemu_log("plic: read enable: hart%d-%c word=%d value=%x\n",
-                         plic->addr_config[addrid].hartid,
-                         mode_to_char(plic->addr_config[addrid].mode), wordid,
-                         plic->enable[addrid * plic->bitfield_words + wordid]);
-            }
             return plic->enable[addrid * plic->bitfield_words + wordid];
         }
-    } else if (addr >= plic->context_base && /* 1 bit per source */
-               addr < plic->context_base + plic->num_addrs * plic->context_stride)
-    {
+    } else if (addr_between(addr, plic->context_base,
+                            plic->num_addrs * plic->context_stride)) {
         uint32_t addrid = (addr - plic->context_base) / plic->context_stride;
         uint32_t contextid = (addr & (plic->context_stride - 1));
+
         if (contextid == 0) {
-            if (RISCV_DEBUG_PLIC) {
-                qemu_log("plic: read priority: hart%d-%c priority=%x\n",
-                         plic->addr_config[addrid].hartid,
-                         mode_to_char(plic->addr_config[addrid].mode),
-                         plic->target_priority[addrid]);
-            }
             return plic->target_priority[addrid];
         } else if (contextid == 4) {
             uint32_t value = sifive_plic_claim(plic, addrid);
-            if (RISCV_DEBUG_PLIC) {
-                qemu_log("plic: read claim: hart%d-%c irq=%x\n",
-                         plic->addr_config[addrid].hartid,
-                         mode_to_char(plic->addr_config[addrid].mode),
-                         value);
-            }
+
             sifive_plic_update(plic);
             return value;
         }
     }

-err:
     qemu_log_mask(LOG_GUEST_ERROR,
                   "%s: Invalid register read 0x%" HWADDR_PRIx "\n",
                   __func__, addr);
--
2.31.1

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Once a "One Time Programmable" is programmed, it shouldn't be reset.

Do not re-initialize the OTP content in the DeviceReset handler;
initialize it once in the DeviceRealize one.

Fixes: 9fb45c62ae8 ("riscv: sifive: Implement a model for SiFive FU540 OTP")
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20211119104757.331579-1-f4bug@amsat.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/misc/sifive_u_otp.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/hw/misc/sifive_u_otp.c b/hw/misc/sifive_u_otp.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/sifive_u_otp.c
+++ b/hw/misc/sifive_u_otp.c
@@ -XXX,XX +XXX,XX @@ static void sifive_u_otp_realize(DeviceState *dev, Error **errp)

         if (blk_pread(s->blk, 0, s->fuse, filesize) != filesize) {
             error_setg(errp, "failed to read the initial flash content");
+            return;
         }
     }
 }
-}
-
-static void sifive_u_otp_reset(DeviceState *dev)
-{
-    SiFiveUOTPState *s = SIFIVE_U_OTP(dev);

     /* Initialize all fuses' initial value to 0xFFs */
     memset(s->fuse, 0xff, sizeof(s->fuse));
@@ -XXX,XX +XXX,XX @@ static void sifive_u_otp_reset(DeviceState *dev)
         serial_data = s->serial;
         if (blk_pwrite(s->blk, index * SIFIVE_U_OTP_FUSE_WORD,
                        &serial_data, SIFIVE_U_OTP_FUSE_WORD, 0) < 0) {
-            error_report("write error index<%d>", index);
+            error_setg(errp, "failed to write index<%d>", index);
+            return;
         }

         serial_data = ~(s->serial);
         if (blk_pwrite(s->blk, (index + 1) * SIFIVE_U_OTP_FUSE_WORD,
                        &serial_data, SIFIVE_U_OTP_FUSE_WORD, 0) < 0) {
-            error_report("write error index<%d>", index + 1);
+            error_setg(errp, "failed to write index<%d>", index + 1);
+            return;
         }
     }

@@ -XXX,XX +XXX,XX @@ static void sifive_u_otp_class_init(ObjectClass *klass, void *data)

     device_class_set_props(dc, sifive_u_otp_properties);
     dc->realize = sifive_u_otp_realize;
-    dc->reset = sifive_u_otp_reset;
 }

 static const TypeInfo sifive_u_otp_info = {
--
2.31.1


From: Alistair Francis <alistair.francis@wdc.com>

We can remove the original sifive_plic_irqs_pending() function and
instead just use the sifive_plic_claim() function (renamed to
sifive_plic_claimed()) to determine if any interrupts are pending.

This requires moving the side effects outside of sifive_plic_claimed(),
but as they are only invoked once that isn't a problem.

We have also removed all of the old #ifdef debugging logs, so let's
clean up the last remaining debug function while we are here.

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220105213937.1113508-5-alistair.francis@opensource.wdc.com>
---
 hw/intc/sifive_plic.c | 109 +++++++++---------------------
 1 file changed, 22 insertions(+), 87 deletions(-)

diff --git a/hw/intc/sifive_plic.c b/hw/intc/sifive_plic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/sifive_plic.c
+++ b/hw/intc/sifive_plic.c
@@ -XXX,XX +XXX,XX @@
 #include "migration/vmstate.h"
 #include "hw/irq.h"

-#define RISCV_DEBUG_PLIC 0
-
 static bool addr_between(uint32_t addr, uint32_t base, uint32_t num)
 {
     return addr >= base && addr - base < num;
@@ -XXX,XX +XXX,XX @@ static PLICMode char_to_mode(char c)
     }
 }

-static char mode_to_char(PLICMode m)
-{
-    switch (m) {
-    case PLICMode_U: return 'U';
-    case PLICMode_S: return 'S';
-    case PLICMode_H: return 'H';
-    case PLICMode_M: return 'M';
-    default: return '?';
-    }
-}
-
-static void sifive_plic_print_state(SiFivePLICState *plic)
-{
-    int i;
-    int addrid;
-
-    /* pending */
-    qemu_log("pending : ");
-    for (i = plic->bitfield_words - 1; i >= 0; i--) {
-        qemu_log("%08x", plic->pending[i]);
-    }
-    qemu_log("\n");
-
-    /* pending */
-    qemu_log("claimed : ");
-    for (i = plic->bitfield_words - 1; i >= 0; i--) {
-        qemu_log("%08x", plic->claimed[i]);
-    }
-    qemu_log("\n");
-
-    for (addrid = 0; addrid < plic->num_addrs; addrid++) {
-        qemu_log("hart%d-%c enable: ",
-            plic->addr_config[addrid].hartid,
-            mode_to_char(plic->addr_config[addrid].mode));
-        for (i = plic->bitfield_words - 1; i >= 0; i--) {
-            qemu_log("%08x", plic->enable[addrid * plic->bitfield_words + i]);
-        }
-        qemu_log("\n");
-    }
-}
-
 static uint32_t atomic_set_masked(uint32_t *a, uint32_t mask, uint32_t value)
 {
     uint32_t old, new, cmp = qatomic_read(a);
@@ -XXX,XX +XXX,XX @@ static void sifive_plic_set_claimed(SiFivePLICState *plic, int irq, bool level)
     atomic_set_masked(&plic->claimed[irq >> 5], 1 << (irq & 31), -!!level);
 }

-static int sifive_plic_irqs_pending(SiFivePLICState *plic, uint32_t addrid)
+static uint32_t sifive_plic_claimed(SiFivePLICState *plic, uint32_t addrid)
 {
+    uint32_t max_irq = 0;
+    uint32_t max_prio = plic->target_priority[addrid];
     int i, j;
+
     for (i = 0; i < plic->bitfield_words; i++) {
         uint32_t pending_enabled_not_claimed =
-            (plic->pending[i] & ~plic->claimed[i]) &
-            plic->enable[addrid * plic->bitfield_words + i];
+                        (plic->pending[i] & ~plic->claimed[i]) &
+                        plic->enable[addrid * plic->bitfield_words + i];
+
         if (!pending_enabled_not_claimed) {
             continue;
         }
+
         for (j = 0; j < 32; j++) {
             int irq = (i << 5) + j;
             uint32_t prio = plic->source_priority[irq];
             int enabled = pending_enabled_not_claimed & (1 << j);
-            if (enabled && prio > plic->target_priority[addrid]) {
-                return 1;
+
+            if (enabled && prio > max_prio) {
+                max_irq = irq;
+                max_prio = prio;
             }
         }
     }
-    return 0;
+
+    return max_irq;
 }

 static void sifive_plic_update(SiFivePLICState *plic)
@@ -XXX,XX +XXX,XX @@ static void sifive_plic_update(SiFivePLICState *plic)
     for (addrid = 0; addrid < plic->num_addrs; addrid++) {
         uint32_t hartid = plic->addr_config[addrid].hartid;
         PLICMode mode = plic->addr_config[addrid].mode;
-        int level = sifive_plic_irqs_pending(plic, addrid);
+        bool level = !!sifive_plic_claimed(plic, addrid);

         switch (mode) {
         case PLICMode_M:
@@ -XXX,XX +XXX,XX @@ static void sifive_plic_update(SiFivePLICState *plic)
             break;
         }
     }
-
-    if (RISCV_DEBUG_PLIC) {
-        sifive_plic_print_state(plic);
-    }
-}
-
-static uint32_t sifive_plic_claim(SiFivePLICState *plic, uint32_t addrid)
-{
-    int i, j;
-    uint32_t max_irq = 0;
-    uint32_t max_prio = plic->target_priority[addrid];
-
-    for (i = 0; i < plic->bitfield_words; i++) {
-        uint32_t pending_enabled_not_claimed =
-            (plic->pending[i] & ~plic->claimed[i]) &
-            plic->enable[addrid * plic->bitfield_words + i];
-        if (!pending_enabled_not_claimed) {
-            continue;
-        }
-        for (j = 0; j < 32; j++) {
-            int irq = (i << 5) + j;
-            uint32_t prio = plic->source_priority[irq];
-            int enabled = pending_enabled_not_claimed & (1 << j);
-            if (enabled && prio > max_prio) {
-                max_irq = irq;
-                max_prio = prio;
-            }
-        }
-    }
-
-    if (max_irq) {
-        sifive_plic_set_pending(plic, max_irq, false);
-        sifive_plic_set_claimed(plic, max_irq, true);
-    }
-    return max_irq;
 }

 static uint64_t sifive_plic_read(void *opaque, hwaddr addr, unsigned size)
@@ -XXX,XX +XXX,XX @@ static uint64_t sifive_plic_read(void *opaque, hwaddr addr, unsigned size)
         if (contextid == 0) {
             return plic->target_priority[addrid];
         } else if (contextid == 4) {
-            uint32_t value = sifive_plic_claim(plic, addrid);
+            uint32_t max_irq = sifive_plic_claimed(plic, addrid);
+
+            if (max_irq) {
+                sifive_plic_set_pending(plic, max_irq, false);
+                sifive_plic_set_claimed(plic, max_irq, true);
+            }

             sifive_plic_update(plic);
-            return value;
+            return max_irq;
         }
     }

--
2.31.1
From: Alistair Francis <alistair.francis@wdc.com>

The Hypervisor spec is now frozen, so remove the experimental tag.

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220105213937.1113508-6-alistair.francis@opensource.wdc.com>
---
 target/riscv/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_properties[] = {
     DEFINE_PROP_BOOL("s", RISCVCPU, cfg.ext_s, true),
     DEFINE_PROP_BOOL("u", RISCVCPU, cfg.ext_u, true),
     DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
+    DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, false),
     DEFINE_PROP_BOOL("Counters", RISCVCPU, cfg.ext_counters, true),
     DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
     DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_properties[] = {
     DEFINE_PROP_BOOL("zbb", RISCVCPU, cfg.ext_zbb, true),
     DEFINE_PROP_BOOL("zbc", RISCVCPU, cfg.ext_zbc, true),
     DEFINE_PROP_BOOL("zbs", RISCVCPU, cfg.ext_zbs, true),
-    DEFINE_PROP_BOOL("x-h", RISCVCPU, cfg.ext_h, false),
     DEFINE_PROP_BOOL("x-j", RISCVCPU, cfg.ext_j, false),
     /* ePMP 0.9.3 */
     DEFINE_PROP_BOOL("x-epmp", RISCVCPU, cfg.epmp, false),
--
2.31.1

From: Alistair Francis <alistair.francis@wdc.com>

Let's enable the Hypervisor extension by default. This doesn't affect
named CPUs (such as lowrisc-ibex or sifive-u54) but does enable the
Hypervisor extension by default for the virt machine.

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220105213937.1113508-7-alistair.francis@opensource.wdc.com>
---
 target/riscv/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_properties[] = {
     DEFINE_PROP_BOOL("s", RISCVCPU, cfg.ext_s, true),
     DEFINE_PROP_BOOL("u", RISCVCPU, cfg.ext_u, true),
     DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
-    DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, false),
+    DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
     DEFINE_PROP_BOOL("Counters", RISCVCPU, cfg.ext_counters, true),
     DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
     DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
--
2.31.1

From: Alistair Francis <alistair.francis@wdc.com>

When realising the SoC, use error_fatal instead of error_abort, as the
process can fail and should report useful information to the user.

Currently a user can see this:

    $ ../qemu/bld/qemu-system-riscv64 -M sifive_u -S -monitor stdio -display none -drive if=pflash
    QEMU 6.1.93 monitor - type 'help' for more information
    (qemu) Unexpected error in sifive_u_otp_realize() at ../hw/misc/sifive_u_otp.c:229:
    qemu-system-riscv64: OTP drive size < 16K
    Aborted (core dumped)

Which this patch addresses.

Reported-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Tested-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220105213937.1113508-8-alistair.francis@opensource.wdc.com>
---
 hw/riscv/microchip_pfsoc.c | 2 +-
 hw/riscv/opentitan.c       | 2 +-
 hw/riscv/sifive_e.c        | 2 +-
 hw/riscv/sifive_u.c        | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/riscv/microchip_pfsoc.c b/hw/riscv/microchip_pfsoc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/microchip_pfsoc.c
+++ b/hw/riscv/microchip_pfsoc.c
@@ -XXX,XX +XXX,XX @@ static void microchip_icicle_kit_machine_init(MachineState *machine)
     /* Initialize SoC */
     object_initialize_child(OBJECT(machine), "soc", &s->soc,
                             TYPE_MICROCHIP_PFSOC);
-    qdev_realize(DEVICE(&s->soc), NULL, &error_abort);
+    qdev_realize(DEVICE(&s->soc), NULL, &error_fatal);

     /* Split RAM into low and high regions using aliases to machine->ram */
     mem_low_size = memmap[MICROCHIP_PFSOC_DRAM_LO].size;
diff --git a/hw/riscv/opentitan.c b/hw/riscv/opentitan.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/opentitan.c
+++ b/hw/riscv/opentitan.c
@@ -XXX,XX +XXX,XX @@ static void opentitan_board_init(MachineState *machine)
     /* Initialize SoC */
     object_initialize_child(OBJECT(machine), "soc", &s->soc,
                             TYPE_RISCV_IBEX_SOC);
-    qdev_realize(DEVICE(&s->soc), NULL, &error_abort);
+    qdev_realize(DEVICE(&s->soc), NULL, &error_fatal);

     memory_region_add_subregion(sys_mem,
         memmap[IBEX_DEV_RAM].base, machine->ram);
diff --git a/hw/riscv/sifive_e.c b/hw/riscv/sifive_e.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/sifive_e.c
+++ b/hw/riscv/sifive_e.c
@@ -XXX,XX +XXX,XX @@ static void sifive_e_machine_init(MachineState *machine)

     /* Initialize SoC */
     object_initialize_child(OBJECT(machine), "soc", &s->soc, TYPE_RISCV_E_SOC);
-    qdev_realize(DEVICE(&s->soc), NULL, &error_abort);
+    qdev_realize(DEVICE(&s->soc), NULL, &error_fatal);

     /* Data Tightly Integrated Memory */
     memory_region_add_subregion(sys_mem,
diff --git a/hw/riscv/sifive_u.c b/hw/riscv/sifive_u.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/sifive_u.c
+++ b/hw/riscv/sifive_u.c
@@ -XXX,XX +XXX,XX @@ static void sifive_u_machine_init(MachineState *machine)
                             &error_abort);
     object_property_set_str(OBJECT(&s->soc), "cpu-type", machine->cpu_type,
                             &error_abort);
-    qdev_realize(DEVICE(&s->soc), NULL, &error_abort);
+    qdev_realize(DEVICE(&s->soc), NULL, &error_fatal);

     /* register RAM */
     memory_region_add_subregion(system_memory, memmap[SIFIVE_U_DEV_DRAM].base,
--
2.31.1
From: Alistair Francis <alistair.francis@wdc.com>

Linux supports up to 32 cores for both 32-bit and 64-bit RISC-V, so
let's set that as the maximum for the virt board.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/435
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220105213937.1113508-9-alistair.francis@opensource.wdc.com>
---
 include/hw/riscv/virt.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/hw/riscv/virt.h b/include/hw/riscv/virt.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/riscv/virt.h
+++ b/include/hw/riscv/virt.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/block/flash.h"
 #include "qom/object.h"

-#define VIRT_CPUS_MAX 8
+#define VIRT_CPUS_MAX 32
 #define VIRT_SOCKETS_MAX 8

 #define TYPE_RISCV_VIRT_MACHINE MACHINE_TYPE_NAME("virt")
--
2.31.1

From: Bin Meng <bmeng.cn@gmail.com>

Upgrade OpenSBI from v0.9 to v1.0 and the pre-built bios images.

The v1.0 release includes the following commits:

ec5274b platform: implement K210 system reset
5487cf0 include: sbi: Simplify HSM state define names
8df1f9a lib: sbi: Use SBI_HSM_STATE_xyz defines instead of SBI_STATE_xyz defines
7c867fd lib: sbi: Rename sbi_hsm_hart_started_mask() function
638c948 lib: sbi: Remove redundant sbi_hsm_hart_started() function
ca864a9 lib: sbi: Fix error codes returned by HSM start() and stop() functions
6290a22 include: sbi: Add HSM suspend related defines
4b05df6 lib: sbi: Add sbi_hart_reinit() function
807d71c include: sbi: Add hart_suspend() platform callback
7475689 lib: sbi: Implement SBI HSM suspend function
b9cf617 include: sbi: Upgrade SBI implementation version to v0.3
50d4fde lib: Remove redundant sbi_platform_ipi_clear() calls
ff5bd94 include: sbi: SBI function IDs for RFENCE extension
22d8ee9 firmware: Use lla to access all global symbols
0f20e8a firmware: Support position independent execution
ddad02d lib: sbi: illegal CSR 0x306 access in hpm_allowed()
bfc85c7 include: headers: Replace __ASSEMBLY__ with __ASSEMBLER__
9190ad1 lib/utils: Support the official clint DT bindings
ca3f358 lib/utils: Drop the 'compat' parameter of fdt_plic_fixup()
4edc822 lib/utils: Support fixing up the official DT bindings of PLIC
4ef2f5d firware: optimize the exception exit code
3d8a952 lib: fix csr detect support
e71a7c1 firmware: Remove redundant add instruction from trap restore path
d4a94ea include: types: Add __aligned(x) to define the minimum alignement
d0e406f include: sbi: Allow direct initialization via SPIN_LOCK_INIT()
4d8e2f1 lib: sbi: Replace test-and-set locks by ticket locks
70ffc3e lib: sbi: fix atomic_add_return
27a16b1 docs: fix link to OpenPiton documentation
b1df1ac lib: sbi: Domains can be registered only before finalizing domains
7495bce lib: sbi: Add sbi_domain_memregion_init() API
4dc0001 lib: sbi: Add sbi_domain_root_add_memregion() API
8b56980 lib: utils/sys: Add CLINT memregion in the root domain
fc37c97 lib: sbi: Make the root domain instance global variable
e7e4bcd lib: utils: Copy over restricted root domain memregions to FDT domains
f41196a lib: sbi: Make sbi_domain_memregion_initfw() a local function
c5d0645 lib: utils: Implement "64bit-mmio" property parsing
49e422c lib: utils: reset: Add T-HEAD sample platform reset driver
0d56293 lib: sbi: Fix sbi_domain_root_add_memregion() for merging memregions
bf3ef53 firmware: Enable FW_PIC by default
1db8436 platform: Remove platform/thead
6d1642f docs: generic: Add T-HEAD C9xx series processors
a3689db lib: sbi: Remove domains_root_regions() platform callback
068ca08 lib: sbi: Simplify console platform operations
559a8f1 lib: sbi: Simplify timer platform operations
dc39c7b lib: sbi: Simplify ipi platform operations
043d088 lib: sbi: Simplify system reset platform operations
a84a1dd lib: sbi: Simplify HSM platform operations
e9a27ab lib: sbi: Show devices provided by platform in boot prints
632e27b docs/platform: sifive_fu540: Update U-Boot defconfig name
117fb6d lib: utils/serial: Add support for Gaisler APBUART
552f53f docs: platform: Sort platform names
d4177e7 docs: platform: Describe sifive_fu540 as supported generic platform
26998f3 platform: Remove sifive/fu540 platform
f90c4c2 lib: sbi: Have spinlock checks return bool
e822b75 lib: utils/serial: Support Synopsys DesignWare APB UART
6139ab2 Makefile: unconditionally disable SSP
c9ef2bc lib: utils: Add strncpy macro to libfdt_env.h
ee7c2b2 lib: utils/fdt: Don't use sbi_string functions
fe92347 lib: utils/fdt: Replace strcmp with strncmp
b2dbbc0 lib: Check region base for merging in sbi_domain_root_add_memregion()
54d7def lib: utils: Try other FDT drivers when we see SBI_ENODEV
d9ba653 docs: debugging OpenSBI
66c4fca lib: utils: consider ':' in stdout-path
f30b189 lib: sbi_scratch: remove owner from sbi_scratch_alloc_offset
a03ea2e platform: andes/ae350: Cosmetic fixes in plicsw.c
b32fac4 docs/platform: andes-ae350: Fix missing spaces
de446cc platform: andes/ae350: Drop plicsw_get_pending()
434198e platform: andes/ae350: Drop plicsw_ipi_sync()
1da3d80 lib: sbi_scratch: zero out scratch memory on all harts
360ab88 lib: utils: missing initialization in thead_reset_init
79f9b42 lib: sbi: Fix GET_F64_REG inline assembly
eb90e0a lib: utils/libfdt: Upgrade to v1.6.1 release
cdcf907 lib: sign conflict in sbi_tlb_entry_process()
9901794 lib: sign conflict in wake_coldboot_harts()
11c345f lib: simplify sbi_fifo_inplace_update()
4519e29 lib: utils/timer: Add ACLINT MTIMER library
5a049fe lib: utils/ipi: Add ACLINT MSWI library
bd5d208 lib: utils: Add FDT parsing API common for both ACLINT and CLINT
56fc5f7 lib: utils/ipi: Add FDT based ACLINT MSWI IPI driver
03d6bb5 lib: utils/timer: Add FDT based ACLINT MTIMER driver
a731c7e platform: Replace CLINT library usage with ACLINT library
b7f2cd2 lib: utils: reset: unify naming of 'sifive_test' device
197e089 docs/platform: thead-c9xx: Remove FW_PIC=y
17e23b6 platform: generic: Terminate platform.name with null
3e8b31a docs: Add device tree bindings for SBI PMU extension
fde28fa lib: sbi: Detect mcountinihibit support at runtime
d3a96cc lib: sbi: Remove stray '\' character
0829f2b lib: sbi: Detect number of bits implemented in mhpmcounter
9c9b4ad lib: sbi: Disable m/scounteren & enable mcountinhibit
41ae63c include: Add a list empty check function
fd9116b lib: sbi: Remove redundant boot time print statement
49966db lib: sbi: Use csr_read/write_num to read/update PMU counters
e7cc7a3 lib: sbi: Add PMU specific platform hooks
13d40f2 lib: sbi: Add PMU support
ae72ec0 utils: fdt: Add fdt helper functions to parse PMU DT nodes
37f9b0f lib: sbi: Implement SBI PMU extension
764a17d lib: sbi: Implement firmware counters
ec1b8bb lib: sbi: Improve TLB function naming
0e12aa8 platform: generic: Add PMU support
14c7f71 firmware: Minor optimization in _scratch_init()
dafaa0f docs: Correct a typo in platform_guide.md
abfce9b docs: Make <xyz> visible in the rendered platform guide
dcb756b firmware: Remove the sanity checks in fw_save_info()
b88b366 firmware: Define a macro for version of struct fw_dynamic_info
a76ac44 lib: sbi: Fix sbi_pmu_exit() for systems not having MCOUNTINHIBIT csr
7f1be8a fw_base: Don't mark fw_platform_init as both global and weak
397afe5 fw_base: Put data in .data rather than .text
a3d328a firmware: Explicitly pass -pie to the linker, not just the driver
09ad811 firmware: Only default FW_PIC to y if supported
2942777 Makefile: Support building with Clang and LLVM binutils
17729d4 lib: utils: Drop dependency on libgcc by importing part of FreeBSD's libquad
e931f38 lib: utils/fdt: Add fdt_parse_phandle_with_args() API
36b8eff lib: utils/gpio: Add generic GPIO configuration library
c14f1fe lib: utils/gpio: Add simple FDT based GPIO framework
4c3df2a lib: utils/gpio: Add minimal SiFive GPIO driver
e3d6919 lib: utils/reset: Add generic GPIO reset driver
7210e90 firmware: use __SIZEOF_LONG__ for field offsets in fw_dynamic.h
f3a8f60 include: types: Use __builtin_offsetof when supported
8a1475b firmware: Remove the unhelpful alignment codes before fdt relocation
a4555e5 docs: Document parameters passed to firmware and alignment requirement
2c74dc3 docs: Document FW_PIC compile time option
81eb708 README: Update toolchain information
9890391 Makefile: Manually forward RELAX_FLAG to the assembler when linking with LLD
74db0ac firmware: use _fw_start for load address
217d5e4 generic: fu740: add workaround for CIP-1200 errata
ce03c88 lib: utils: remove unused variable in fdt_reset_init
e928472 lib: utils: support both of gpio-poweroff, gpio-reset
d244f3d lib: sbi: Fix bug in strncmp function when count is 0
47a4765 lib: utils/fdt: Change addr and size to uint64_t
e0d1b9d lib: utils/timer: Allow separate base addresses for MTIME and MTIMECMP
7a3a0cc lib: utils: Extend fdt_get_node_addr_size() for multiple register sets
f3a0eb8 lib: utils/fdt: Extend fdt_parse_aclint_node() function
b35f782 lib: utils/timer: Allow ACLINT MTIMER supporting only 32-bit MMIO
7aa6c9a lib: utils/timer: Simplify MTIMER synchronization
33eac76 lib: sbi: Fix bug in sbi_ecall_rfence that misses checking
ee27437 lib: sbi_trap: Restore redirect for access faults
b1d3e91 payloads/test: Add support for SBI v0.2 ecalls
bd316e2 lib: sbi: Correct typo in faults delegation CSR name
c262306 lib: sbi: protect dprintf output with spinlock
1718b16 lib: sbi: Checking fifo validness in sbi_fifo_is_empty and is_full
bd35521 lib: sbi: Refine the way to construct platform features
0274a96 lib: utils/reset: Sort fdt_reset driver list
395ff7e lib: utils/reset: Add a sunxi watchdog reset driver
3477f08 lib: sbi: fix ctz bug
12753d2 lib: sbi: add some macros to detect BUG at runtime
51113fe lib: sbi: Add BUG() macro for csr_read/write_num() and misa_string()
72154f4 lib: utils/fdt: Add fdt_parse_timebase_frequency() function
12e7af9 lib: sbi: Add timer frequency to struct sbi_timer_device
6355155 lib: sbi: Print timer frequency at boot time
9d0ab35 lib: sbi: Add generic timer delay loop function
fa59dd3 lib: utils/reset: use sbi_timer_mdelay() in gpio reset driver
754d511 lib: utils: identify supported GPIO reset methods
516161c lib: sbi: convert reset to list
9283d50 lib: sbi: add priority for reset handler
c38973e lib: sbi: Save context for all non-retentive suspend types
67cbbcb lib: sbi: system reset with invalid parameters
422eda4 Makefile: Add build time and compiler info string
78c2b19 lib: utils/irqchip: Automatically delegate T-HEAD PLIC access
309e8bd lib: utils/reset: Register separate GPIO system reset devices
723aa88 lib: sbi: Refine addr format in sbi_printf
c891acc include: sbi_utils: Introduce an helper to get fdt base address
013ba4e lib: sbi: Fix GPA passed to __sbi_hfence_gvma_xyz() functions
0979ffd lib: utils/gpio: use list for drivers
2fe2f55 lib: sbi: move sbi_boot_print_general()
57f094e platform: generic: move fdt_reset_init to final_init
be245ac lib: sbi: error handling in fdt_reset_init()
a74daf2 riscv: Add new CSRs introduced by Sscofpmf[1] extension
7084ad9 lib: sbi: Update csr_read/write_num for PMU
867c653 lib: sbi: Detect Sscofpmf extension at run time
9134c36 lib: sbi: Delegate PMU counter overflow interrupt to S mode
730f01b lib: sbi: Support sscofpmf extension in OpenSBI
2363f95 lib: sbi: Always enable access for all counters
0c304b6 lib: sbi: Allow programmable counters to monitor cycle/instret events
1e14732 lib: sbi: Reset the mhpmevent value upon counter reset
b628cfd lib: sbi: Counter info width should be zero indexed
b28f070 lib: sbi: Enable PMU extension for platforms without mcountinhibit
15906a3 lib: utils: Rename the prefix in PMU DT properties
b8845e4 lib: sbi: Fix initial value mask while updating the counters
31fe5a7 lib: sbi: Fix PMP address bits detection
94eba23 lib: utils/reset: add priority to gpio reset
1d462e0 lib: utils/reset: separate driver init func
2c964a2 lib: utils/i2c: Add generic I2C configuration library
6ca6bca lib: utils/i2c: Add simple FDT based I2C framework
13a1158 lib: utils/i2c: Add minimal SiFive I2C driver
f374496 platform: sifive_fu740: add platform reset driver
d335a17 lib: sbi: clear pmpcfg.A before setting in pmp_set()
52af6e4 lib: utils: Add LiteX UART support
22d556d lib: sbi: Fix spelling of "address" in sbi_domain.c
7a22c78 lib: sbi: Fix missing space
7e77706 lib: sbi: Resolve the uninitialized complaint in sbi_pmu
14faee6 lib: sbi: Improve fatal error handling
2428987 lib: pmu: support the event ID encoded by a bitmap.
66fbcc0 docs/platform: spike: Enhance Spike examples
460041c lib: pmu: check SSCOF before masking
69d7e53 Makefile: Fix -msave-restore compile warning with CLANG-10 (or lower)
d249d65 lib: sbi: Fix compile errors using -Os option
f270359 Makefile: Improve the method to disable -m(no-)save-restore option
2082153 lib: sbi: simplify pmp_set(), pmp_get()
d30bde3 firmware: Move memcpy/memset mapping to fw_base.S
48f91ee include: Bump-up version to 1.0

Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 .../opensbi-riscv32-generic-fw_dynamic.bin | Bin 78680 -> 108504 bytes
 .../opensbi-riscv32-generic-fw_dynamic.elf | Bin 727464 -> 838904 bytes
 .../opensbi-riscv64-generic-fw_dynamic.bin | Bin 75096 -> 105296 bytes
 .../opensbi-riscv64-generic-fw_dynamic.elf | Bin 781264 -> 934696 bytes
 roms/opensbi | 2 +-
 5 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/pc-bios/opensbi-riscv32-generic-fw_dynamic.bin b/pc-bios/opensbi-riscv32-generic-fw_dynamic.bin
index XXXXXXX..XXXXXXX 100644
Binary files a/pc-bios/opensbi-riscv32-generic-fw_dynamic.bin and b/pc-bios/opensbi-riscv32-generic-fw_dynamic.bin differ
diff --git a/pc-bios/opensbi-riscv32-generic-fw_dynamic.elf b/pc-bios/opensbi-riscv32-generic-fw_dynamic.elf
index XXXXXXX..XXXXXXX 100644
Binary files a/pc-bios/opensbi-riscv32-generic-fw_dynamic.elf and b/pc-bios/opensbi-riscv32-generic-fw_dynamic.elf differ
diff --git a/pc-bios/opensbi-riscv64-generic-fw_dynamic.bin b/pc-bios/opensbi-riscv64-generic-fw_dynamic.bin
index XXXXXXX..XXXXXXX 100644
Binary files a/pc-bios/opensbi-riscv64-generic-fw_dynamic.bin and b/pc-bios/opensbi-riscv64-generic-fw_dynamic.bin differ
diff --git a/pc-bios/opensbi-riscv64-generic-fw_dynamic.elf b/pc-bios/opensbi-riscv64-generic-fw_dynamic.elf
index XXXXXXX..XXXXXXX 100644
Binary files a/pc-bios/opensbi-riscv64-generic-fw_dynamic.elf and b/pc-bios/opensbi-riscv64-generic-fw_dynamic.elf differ
diff --git a/roms/opensbi b/roms/opensbi
index XXXXXXX..XXXXXXX 160000
--- a/roms/opensbi
+++ b/roms/opensbi
@@ -1 +1 @@
-Subproject commit 234ed8e427f4d92903123199f6590d144e0d9351
+Subproject commit 48f91ee9c960f048c4a7d1da4447d31e04931e38
--
2.31.1

From: Frank Chang <frank.chang@sifive.com>

Vector widening floating-point instructions should use
require_scale_rvf() instead of require_rvf() to check whether RVF/RVD is
enabled.

Signed-off-by: Frank Chang <frank.chang@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220105022247.21131-2-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvv.c.inc | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -XXX,XX +XXX,XX @@ GEN_OPFVF_TRANS(vfrsub_vf, opfvf_check)
 static bool opfvv_widen_check(DisasContext *s, arg_rmrr *a)
 {
     return require_rvv(s) &&
-           require_rvf(s) &&
+           require_scale_rvf(s) &&
+           (s->sew != MO_8) &&
            vext_check_isa_ill(s) &&
            vext_check_dss(s, a->rd, a->rs1, a->rs2, a->vm);
 }
@@ -XXX,XX +XXX,XX @@ GEN_OPFVV_WIDEN_TRANS(vfwadd_vv, opfvv_widen_check)
 static bool opfvf_widen_check(DisasContext *s, arg_rmrr *a)
 {
     return require_rvv(s) &&
-           require_rvf(s) &&
+           require_scale_rvf(s) &&
+           (s->sew != MO_8) &&
            vext_check_isa_ill(s) &&
            vext_check_ds(s, a->rd, a->rs2, a->vm);
 }
@@ -XXX,XX +XXX,XX @@ GEN_OPFVF_WIDEN_TRANS(vfwsub_vf)
 static bool opfwv_widen_check(DisasContext *s, arg_rmrr *a)
 {
     return require_rvv(s) &&
-           require_rvf(s) &&
+           require_scale_rvf(s) &&
+           (s->sew != MO_8) &&
            vext_check_isa_ill(s) &&
            vext_check_dds(s, a->rd, a->rs1, a->rs2, a->vm);
 }
@@ -XXX,XX +XXX,XX @@ GEN_OPFWV_WIDEN_TRANS(vfwsub_wv)
 static bool opfwf_widen_check(DisasContext *s, arg_rmrr *a)
 {
     return require_rvv(s) &&
-           require_rvf(s) &&
+           require_scale_rvf(s) &&
+           (s->sew != MO_8) &&
            vext_check_isa_ill(s) &&
            vext_check_dd(s, a->rd, a->rs2, a->vm);
 }
--
2.31.1

From: Frank Chang <frank.chang@sifive.com>

vfwcvt.xu.f.v, vfwcvt.x.f.v, vfwcvt.rtz.xu.f.v and vfwcvt.rtz.x.f.v
convert single-width floating-point to double-width integer.
Therefore, they should use require_rvf() to check whether RVF/RVD is
enabled.

vfwcvt.f.xu.v and vfwcvt.f.x.v convert single-width integer to
double-width floating-point, and vfwcvt.f.f.v converts single-width
floating-point to double-width floating-point. Therefore, these should
use require_scale_rvf() to check whether RVF/RVD is enabled.

Signed-off-by: Frank Chang <frank.chang@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220105022247.21131-3-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvv.c.inc | 34 ++++++++++++++++++-------
 1 file changed, 25 insertions(+), 9 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -XXX,XX +XXX,XX @@ GEN_OPFV_CVT_TRANS(vfcvt_rtz_x_f_v, vfcvt_x_f_v, RISCV_FRM_RTZ)
 static bool opfv_widen_check(DisasContext *s, arg_rmr *a)
 {
     return require_rvv(s) &&
-           require_scale_rvf(s) &&
-           (s->sew != MO_8) &&
            vext_check_isa_ill(s) &&
            vext_check_ds(s, a->rd, a->rs2, a->vm);
 }

-#define GEN_OPFV_WIDEN_TRANS(NAME, HELPER, FRM) \
+static bool opxfv_widen_check(DisasContext *s, arg_rmr *a)
+{
+    return opfv_widen_check(s, a) &&
+           require_rvf(s);
+}
+
+static bool opffv_widen_check(DisasContext *s, arg_rmr *a)
+{
+    return opfv_widen_check(s, a) &&
+           require_scale_rvf(s) &&
+           (s->sew != MO_8);
+}
+
+#define GEN_OPFV_WIDEN_TRANS(NAME, CHECK, HELPER, FRM) \
 static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
 { \
-    if (opfv_widen_check(s, a)) { \
+    if (CHECK(s, a)) { \
         if (FRM != RISCV_FRM_DYN) { \
             gen_set_rm(s, RISCV_FRM_DYN); \
         } \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
     return false; \
 }

-GEN_OPFV_WIDEN_TRANS(vfwcvt_xu_f_v, vfwcvt_xu_f_v, RISCV_FRM_DYN)
-GEN_OPFV_WIDEN_TRANS(vfwcvt_x_f_v, vfwcvt_x_f_v, RISCV_FRM_DYN)
-GEN_OPFV_WIDEN_TRANS(vfwcvt_f_f_v, vfwcvt_f_f_v, RISCV_FRM_DYN)
+GEN_OPFV_WIDEN_TRANS(vfwcvt_xu_f_v, opxfv_widen_check, vfwcvt_xu_f_v,
+                     RISCV_FRM_DYN)
+GEN_OPFV_WIDEN_TRANS(vfwcvt_x_f_v, opxfv_widen_check, vfwcvt_x_f_v,
+                     RISCV_FRM_DYN)
+GEN_OPFV_WIDEN_TRANS(vfwcvt_f_f_v, opffv_widen_check, vfwcvt_f_f_v,
+                     RISCV_FRM_DYN)
 /* Reuse the helper functions from vfwcvt.xu.f.v and vfwcvt.x.f.v */
-GEN_OPFV_WIDEN_TRANS(vfwcvt_rtz_xu_f_v, vfwcvt_xu_f_v, RISCV_FRM_RTZ)
-GEN_OPFV_WIDEN_TRANS(vfwcvt_rtz_x_f_v, vfwcvt_x_f_v, RISCV_FRM_RTZ)
+GEN_OPFV_WIDEN_TRANS(vfwcvt_rtz_xu_f_v, opxfv_widen_check, vfwcvt_xu_f_v,
+                     RISCV_FRM_RTZ)
+GEN_OPFV_WIDEN_TRANS(vfwcvt_rtz_x_f_v, opxfv_widen_check, vfwcvt_x_f_v,
+                     RISCV_FRM_RTZ)

 static bool opfxv_widen_check(DisasContext *s, arg_rmr *a)
 {
--
2.31.1

From: Frank Chang <frank.chang@sifive.com>

vfncvt.f.xu.w, vfncvt.f.x.w convert double-width integer to single-width
floating-point. Therefore, they should use require_rvf() to check whether
RVF/RVD is enabled.

vfncvt.f.f.w, vfncvt.rod.f.f.w convert double-width floating-point to
single-width floating-point. Therefore, they should use require_scale_rvf()
to check whether RVF/RVD is enabled.
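As a rough illustration of the width reasoning above (my own sketch of the rationale, not QEMU code): each narrowing conversion touches a single-width (SEW-bit) and a double-width (2*SEW-bit) operand, and the SEW bounds in the checks follow from which of those sides must be a real RVF/RVD float format.

```python
# Illustrative sketch (an interpretation, not QEMU code) of the element
# widths behind the narrowing-conversion checks. SEW is encoded as in
# QEMU's MemOp sizes: MO_8=0, MO_16=1, MO_32=2, MO_64=3 (log2 bytes).
MO_8, MO_16, MO_32, MO_64 = 0, 1, 2, 3

def conversion_widths(sew: int) -> tuple:
    """(single-width, double-width) element sizes in bits for a given SEW."""
    single = 8 << sew
    return single, 2 * single

# vfncvt.f.xu.w / vfncvt.f.x.w: the float result is single-width, so an FP
# format must exist at SEW; the integer source is 2*SEW wide, which is why
# sew == MO_64 is rejected (the source would need 128-bit elements).
assert conversion_widths(MO_64) == (64, 128)
# vfncvt.f.f.w / vfncvt.rod.f.f.w: the float source is double-width, so an
# FP format must exist at 2*SEW; sew == MO_8 is rejected because the
# single-width side would be an 8-bit float, which RVF/RVD do not provide.
assert conversion_widths(MO_8) == (8, 16)
```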

Signed-off-by: Frank Chang <frank.chang@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220105022247.21131-4-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/insn_trans/trans_rvv.c.inc | 32 ++++++++++++++++++-------
1 file changed, 24 insertions(+), 8 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -XXX,XX +XXX,XX @@ GEN_OPFXV_WIDEN_TRANS(vfwcvt_f_x_v)
static bool opfv_narrow_check(DisasContext *s, arg_rmr *a)
{
return require_rvv(s) &&
- require_rvf(s) &&
- (s->sew != MO_64) &&
vext_check_isa_ill(s) &&
/* OPFV narrowing instructions ignore vs1 check */
vext_check_sd(s, a->rd, a->rs2, a->vm);
}

-#define GEN_OPFV_NARROW_TRANS(NAME, HELPER, FRM) \
+static bool opfxv_narrow_check(DisasContext *s, arg_rmr *a)
+{
+ return opfv_narrow_check(s, a) &&
+ require_rvf(s) &&
+ (s->sew != MO_64);
+}
+
+static bool opffv_narrow_check(DisasContext *s, arg_rmr *a)
+{
+ return opfv_narrow_check(s, a) &&
+ require_scale_rvf(s) &&
+ (s->sew != MO_8);
+}
+
+#define GEN_OPFV_NARROW_TRANS(NAME, CHECK, HELPER, FRM) \
static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
{ \
- if (opfv_narrow_check(s, a)) { \
+ if (CHECK(s, a)) { \
if (FRM != RISCV_FRM_DYN) { \
gen_set_rm(s, RISCV_FRM_DYN); \
} \
@@ -XXX,XX +XXX,XX @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
return false; \
}

-GEN_OPFV_NARROW_TRANS(vfncvt_f_xu_w, vfncvt_f_xu_w, RISCV_FRM_DYN)
-GEN_OPFV_NARROW_TRANS(vfncvt_f_x_w, vfncvt_f_x_w, RISCV_FRM_DYN)
-GEN_OPFV_NARROW_TRANS(vfncvt_f_f_w, vfncvt_f_f_w, RISCV_FRM_DYN)
+GEN_OPFV_NARROW_TRANS(vfncvt_f_xu_w, opfxv_narrow_check, vfncvt_f_xu_w,
+ RISCV_FRM_DYN)
+GEN_OPFV_NARROW_TRANS(vfncvt_f_x_w, opfxv_narrow_check, vfncvt_f_x_w,
+ RISCV_FRM_DYN)
+GEN_OPFV_NARROW_TRANS(vfncvt_f_f_w, opffv_narrow_check, vfncvt_f_f_w,
+ RISCV_FRM_DYN)
/* Reuse the helper function from vfncvt.f.f.w */
-GEN_OPFV_NARROW_TRANS(vfncvt_rod_f_f_w, vfncvt_f_f_w, RISCV_FRM_ROD)
+GEN_OPFV_NARROW_TRANS(vfncvt_rod_f_f_w, opffv_narrow_check, vfncvt_f_f_w,
+ RISCV_FRM_ROD)

static bool opxfv_narrow_check(DisasContext *s, arg_rmr *a)
{
--
2.31.1
From: Philipp Tomsich <philipp.tomsich@vrull.eu>

When commit 0643c12e4b dropped the 'x-' prefix for Zb[abcs] and set
them to be enabled by default, the comment about experimental
extensions was kept in place above them. This moves it down a few
lines to only cover experimental extensions.

References: 0643c12e4b ("target/riscv: Enable bitmanip Zb[abcs] instructions")

Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106134020.1628889-1-philipp.tomsich@vrull.eu
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/cpu.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static Property riscv_cpu_properties[] = {
DEFINE_PROP_UINT16("vlen", RISCVCPU, cfg.vlen, 128),
DEFINE_PROP_UINT16("elen", RISCVCPU, cfg.elen, 64),

- /* These are experimental so mark with 'x-' */
DEFINE_PROP_BOOL("zba", RISCVCPU, cfg.ext_zba, true),
DEFINE_PROP_BOOL("zbb", RISCVCPU, cfg.ext_zbb, true),
DEFINE_PROP_BOOL("zbc", RISCVCPU, cfg.ext_zbc, true),
DEFINE_PROP_BOOL("zbs", RISCVCPU, cfg.ext_zbs, true),
+
+ /* These are experimental so mark with 'x-' */
DEFINE_PROP_BOOL("x-j", RISCVCPU, cfg.ext_j, false),
/* ePMP 0.9.3 */
DEFINE_PROP_BOOL("x-epmp", RISCVCPU, cfg.epmp, false),
--
2.31.1
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>

Renaming defines for quad in their various forms so that their signedness is
now explicit.
Done using git grep as suggested by Philippe, with a bit of hand editing to
keep assignments aligned.
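As an illustration of the kind of mechanical rewrite described (a hypothetical sketch, not the exact command or script used for this patch), the rename can be expressed as a single regular-expression substitution over each matched file:

```python
# Hypothetical sketch of the MO_Q -> MO_UQ family rename (not the exact
# tooling used for the patch, which was git grep plus hand editing):
# MO_Q, MO_LEQ, MO_BEQ, MO_TEQ become MO_UQ, MO_LEUQ, MO_BEUQ, MO_TEUQ.
import re

RENAME = re.compile(r'\bMO_(LE|BE|TE)?Q\b')

def rename_memops(source: str) -> str:
    """Rewrite the old signless 64-bit MemOp names to the unsigned spellings."""
    return RENAME.sub(lambda m: "MO_%sUQ" % (m.group(1) or ""), source)

print(rename_memops("tcg_gen_qemu_ld_i64(ret, addr, mem_index, MO_TEQ);"))
```

The word boundaries keep unrelated identifiers such as MO_UL or MO_SSIZE untouched, which is why a blind substitution like this stays safe across a large tree.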

Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106210108.138226-2-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
include/exec/memop.h | 8 +--
include/tcg/tcg-op.h | 4 +-
target/arm/translate-a32.h | 4 +-
accel/tcg/cputlb.c | 30 +++++------
accel/tcg/user-exec.c | 8 +--
target/alpha/translate.c | 32 ++++++------
target/arm/helper-a64.c | 8 +--
target/arm/translate-a64.c | 8 +--
target/arm/translate-neon.c | 6 +--
target/arm/translate-sve.c | 10 ++--
target/arm/translate-vfp.c | 8 +--
target/arm/translate.c | 2 +-
target/cris/translate.c | 2 +-
target/hppa/translate.c | 4 +-
target/i386/tcg/mem_helper.c | 2 +-
target/i386/tcg/translate.c | 36 +++++++-------
target/m68k/op_helper.c | 2 +-
target/mips/tcg/translate.c | 58 +++++++++++-----------
target/mips/tcg/tx79_translate.c | 8 +--
target/ppc/translate.c | 32 ++++++------
target/s390x/tcg/mem_helper.c | 8 +--
target/s390x/tcg/translate.c | 8 +--
target/sh4/translate.c | 12 ++---
target/sparc/translate.c | 36 +++++++-------
target/tricore/translate.c | 4 +-
target/xtensa/translate.c | 4 +-
tcg/tcg.c | 4 +-
tcg/tci.c | 16 +++---
accel/tcg/ldst_common.c.inc | 8 +--
target/mips/tcg/micromips_translate.c.inc | 10 ++--
target/ppc/translate/fixedpoint-impl.c.inc | 22 ++++----
target/ppc/translate/fp-impl.c.inc | 4 +-
target/ppc/translate/vsx-impl.c.inc | 42 ++++++++--------
target/riscv/insn_trans/trans_rva.c.inc | 22 ++++----
target/riscv/insn_trans/trans_rvd.c.inc | 4 +-
target/riscv/insn_trans/trans_rvh.c.inc | 4 +-
target/riscv/insn_trans/trans_rvi.c.inc | 4 +-
target/s390x/tcg/translate_vx.c.inc | 18 +++----
tcg/aarch64/tcg-target.c.inc | 2 +-
tcg/arm/tcg-target.c.inc | 10 ++--
tcg/i386/tcg-target.c.inc | 12 ++---
tcg/mips/tcg-target.c.inc | 12 ++---
tcg/ppc/tcg-target.c.inc | 16 +++---
tcg/riscv/tcg-target.c.inc | 6 +--
tcg/s390x/tcg-target.c.inc | 18 +++----
tcg/sparc/tcg-target.c.inc | 16 +++---
target/s390x/tcg/insn-data.def | 28 +++++------
47 files changed, 311 insertions(+), 311 deletions(-)

diff --git a/include/exec/memop.h b/include/exec/memop.h
65
index XXXXXXX..XXXXXXX 100644
66
--- a/include/exec/memop.h
67
+++ b/include/exec/memop.h
68
@@ -XXX,XX +XXX,XX @@ typedef enum MemOp {
69
MO_UB = MO_8,
70
MO_UW = MO_16,
71
MO_UL = MO_32,
72
+ MO_UQ = MO_64,
73
MO_SB = MO_SIGN | MO_8,
74
MO_SW = MO_SIGN | MO_16,
75
MO_SL = MO_SIGN | MO_32,
76
- MO_Q = MO_64,
77
78
MO_LEUW = MO_LE | MO_UW,
79
MO_LEUL = MO_LE | MO_UL,
80
+ MO_LEUQ = MO_LE | MO_UQ,
81
MO_LESW = MO_LE | MO_SW,
82
MO_LESL = MO_LE | MO_SL,
83
- MO_LEQ = MO_LE | MO_Q,
84
85
MO_BEUW = MO_BE | MO_UW,
86
MO_BEUL = MO_BE | MO_UL,
87
+ MO_BEUQ = MO_BE | MO_UQ,
88
MO_BESW = MO_BE | MO_SW,
89
MO_BESL = MO_BE | MO_SL,
90
- MO_BEQ = MO_BE | MO_Q,
91
92
#ifdef NEED_CPU_H
93
MO_TEUW = MO_TE | MO_UW,
94
MO_TEUL = MO_TE | MO_UL,
95
+ MO_TEUQ = MO_TE | MO_UQ,
96
MO_TESW = MO_TE | MO_SW,
97
MO_TESL = MO_TE | MO_SL,
98
- MO_TEQ = MO_TE | MO_Q,
99
#endif
100
101
MO_SSIZE = MO_SIZE | MO_SIGN,
102
diff --git a/include/tcg/tcg-op.h b/include/tcg/tcg-op.h
103
index XXXXXXX..XXXXXXX 100644
104
--- a/include/tcg/tcg-op.h
105
+++ b/include/tcg/tcg-op.h
106
@@ -XXX,XX +XXX,XX @@ static inline void tcg_gen_qemu_ld32s(TCGv ret, TCGv addr, int mem_index)
107
108
static inline void tcg_gen_qemu_ld64(TCGv_i64 ret, TCGv addr, int mem_index)
109
{
110
- tcg_gen_qemu_ld_i64(ret, addr, mem_index, MO_TEQ);
111
+ tcg_gen_qemu_ld_i64(ret, addr, mem_index, MO_TEUQ);
112
}
113
114
static inline void tcg_gen_qemu_st8(TCGv arg, TCGv addr, int mem_index)
115
@@ -XXX,XX +XXX,XX @@ static inline void tcg_gen_qemu_st32(TCGv arg, TCGv addr, int mem_index)
116
117
static inline void tcg_gen_qemu_st64(TCGv_i64 arg, TCGv addr, int mem_index)
118
{
119
- tcg_gen_qemu_st_i64(arg, addr, mem_index, MO_TEQ);
120
+ tcg_gen_qemu_st_i64(arg, addr, mem_index, MO_TEUQ);
121
}
122
123
void tcg_gen_atomic_cmpxchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGv_i32,
124
diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
125
index XXXXXXX..XXXXXXX 100644
126
--- a/target/arm/translate-a32.h
127
+++ b/target/arm/translate-a32.h
128
@@ -XXX,XX +XXX,XX @@ void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
129
static inline void gen_aa32_ld64(DisasContext *s, TCGv_i64 val,
130
TCGv_i32 a32, int index)
131
{
132
- gen_aa32_ld_i64(s, val, a32, index, MO_Q);
133
+ gen_aa32_ld_i64(s, val, a32, index, MO_UQ);
134
}
135
136
static inline void gen_aa32_st64(DisasContext *s, TCGv_i64 val,
137
TCGv_i32 a32, int index)
138
{
139
- gen_aa32_st_i64(s, val, a32, index, MO_Q);
140
+ gen_aa32_st_i64(s, val, a32, index, MO_UQ);
141
}
142
143
DO_GEN_LD(8u, MO_UB)
144
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
145
index XXXXXXX..XXXXXXX 100644
146
--- a/accel/tcg/cputlb.c
147
+++ b/accel/tcg/cputlb.c
148
@@ -XXX,XX +XXX,XX @@ load_memop(const void *haddr, MemOp op)
149
return (uint32_t)ldl_be_p(haddr);
150
case MO_LEUL:
151
return (uint32_t)ldl_le_p(haddr);
152
- case MO_BEQ:
153
+ case MO_BEUQ:
154
return ldq_be_p(haddr);
155
- case MO_LEQ:
156
+ case MO_LEUQ:
157
return ldq_le_p(haddr);
158
default:
159
qemu_build_not_reached();
160
@@ -XXX,XX +XXX,XX @@ tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr,
161
uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr,
162
MemOpIdx oi, uintptr_t retaddr)
163
{
164
- validate_memop(oi, MO_LEQ);
165
- return load_helper(env, addr, oi, retaddr, MO_LEQ, false,
166
+ validate_memop(oi, MO_LEUQ);
167
+ return load_helper(env, addr, oi, retaddr, MO_LEUQ, false,
168
helper_le_ldq_mmu);
169
}
170
171
uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr,
172
MemOpIdx oi, uintptr_t retaddr)
173
{
174
- validate_memop(oi, MO_BEQ);
175
- return load_helper(env, addr, oi, retaddr, MO_BEQ, false,
176
+ validate_memop(oi, MO_BEUQ);
177
+ return load_helper(env, addr, oi, retaddr, MO_BEUQ, false,
178
helper_be_ldq_mmu);
179
}
180
181
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_ldl_be_mmu(CPUArchState *env, abi_ptr addr,
182
uint64_t cpu_ldq_be_mmu(CPUArchState *env, abi_ptr addr,
183
MemOpIdx oi, uintptr_t ra)
184
{
185
- return cpu_load_helper(env, addr, oi, MO_BEQ, helper_be_ldq_mmu);
186
+ return cpu_load_helper(env, addr, oi, MO_BEUQ, helper_be_ldq_mmu);
187
}
188
189
uint16_t cpu_ldw_le_mmu(CPUArchState *env, abi_ptr addr,
190
@@ -XXX,XX +XXX,XX @@ store_memop(void *haddr, uint64_t val, MemOp op)
191
case MO_LEUL:
192
stl_le_p(haddr, val);
193
break;
194
- case MO_BEQ:
195
+ case MO_BEUQ:
196
stq_be_p(haddr, val);
197
break;
198
- case MO_LEQ:
199
+ case MO_LEUQ:
200
stq_le_p(haddr, val);
201
break;
202
default:
203
@@ -XXX,XX +XXX,XX @@ void helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
204
void helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
205
MemOpIdx oi, uintptr_t retaddr)
206
{
207
- validate_memop(oi, MO_LEQ);
208
- store_helper(env, addr, val, oi, retaddr, MO_LEQ);
209
+ validate_memop(oi, MO_LEUQ);
210
+ store_helper(env, addr, val, oi, retaddr, MO_LEUQ);
211
}
212
213
void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
214
MemOpIdx oi, uintptr_t retaddr)
215
{
216
- validate_memop(oi, MO_BEQ);
217
- store_helper(env, addr, val, oi, retaddr, MO_BEQ);
218
+ validate_memop(oi, MO_BEUQ);
219
+ store_helper(env, addr, val, oi, retaddr, MO_BEUQ);
220
}
221
222
/*
223
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr)
224
static uint64_t full_ldq_code(CPUArchState *env, target_ulong addr,
225
MemOpIdx oi, uintptr_t retaddr)
226
{
227
- return load_helper(env, addr, oi, retaddr, MO_TEQ, true, full_ldq_code);
228
+ return load_helper(env, addr, oi, retaddr, MO_TEUQ, true, full_ldq_code);
229
}
230
231
uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr addr)
232
{
233
- MemOpIdx oi = make_memop_idx(MO_TEQ, cpu_mmu_index(env, true));
234
+ MemOpIdx oi = make_memop_idx(MO_TEUQ, cpu_mmu_index(env, true));
235
return full_ldq_code(env, addr, oi, 0);
236
}
237
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
238
index XXXXXXX..XXXXXXX 100644
239
--- a/accel/tcg/user-exec.c
240
+++ b/accel/tcg/user-exec.c
241
@@ -XXX,XX +XXX,XX @@ uint64_t cpu_ldq_be_mmu(CPUArchState *env, abi_ptr addr,
242
void *haddr;
243
uint64_t ret;
244
245
- validate_memop(oi, MO_BEQ);
246
+ validate_memop(oi, MO_BEUQ);
247
trace_guest_ld_before_exec(env_cpu(env), addr, oi);
248
haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
249
ret = ldq_be_p(haddr);
250
@@ -XXX,XX +XXX,XX @@ uint64_t cpu_ldq_le_mmu(CPUArchState *env, abi_ptr addr,
251
void *haddr;
252
uint64_t ret;
253
254
- validate_memop(oi, MO_LEQ);
255
+ validate_memop(oi, MO_LEUQ);
256
trace_guest_ld_before_exec(env_cpu(env), addr, oi);
257
haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
258
ret = ldq_le_p(haddr);
259
@@ -XXX,XX +XXX,XX @@ void cpu_stq_be_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
260
{
261
void *haddr;
262
263
- validate_memop(oi, MO_BEQ);
264
+ validate_memop(oi, MO_BEUQ);
265
trace_guest_st_before_exec(env_cpu(env), addr, oi);
266
haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE);
267
stq_be_p(haddr, val);
268
@@ -XXX,XX +XXX,XX @@ void cpu_stq_le_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
269
{
270
void *haddr;
271
272
- validate_memop(oi, MO_LEQ);
273
+ validate_memop(oi, MO_LEUQ);
274
trace_guest_st_before_exec(env_cpu(env), addr, oi);
275
haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE);
276
stq_le_p(haddr, val);
277
diff --git a/target/alpha/translate.c b/target/alpha/translate.c
278
index XXXXXXX..XXXXXXX 100644
279
--- a/target/alpha/translate.c
280
+++ b/target/alpha/translate.c
281
@@ -XXX,XX +XXX,XX @@ static void gen_ldf(DisasContext *ctx, TCGv dest, TCGv addr)
282
static void gen_ldg(DisasContext *ctx, TCGv dest, TCGv addr)
283
{
284
TCGv tmp = tcg_temp_new();
285
- tcg_gen_qemu_ld_i64(tmp, addr, ctx->mem_idx, MO_LEQ | UNALIGN(ctx));
286
+ tcg_gen_qemu_ld_i64(tmp, addr, ctx->mem_idx, MO_LEUQ | UNALIGN(ctx));
287
gen_helper_memory_to_g(dest, tmp);
288
tcg_temp_free(tmp);
289
}
290
@@ -XXX,XX +XXX,XX @@ static void gen_lds(DisasContext *ctx, TCGv dest, TCGv addr)
291
292
static void gen_ldt(DisasContext *ctx, TCGv dest, TCGv addr)
293
{
294
- tcg_gen_qemu_ld_i64(dest, addr, ctx->mem_idx, MO_LEQ | UNALIGN(ctx));
295
+ tcg_gen_qemu_ld_i64(dest, addr, ctx->mem_idx, MO_LEUQ | UNALIGN(ctx));
296
}
297
298
static void gen_load_fp(DisasContext *ctx, int ra, int rb, int32_t disp16,
299
@@ -XXX,XX +XXX,XX @@ static void gen_stg(DisasContext *ctx, TCGv src, TCGv addr)
300
{
301
TCGv tmp = tcg_temp_new();
302
gen_helper_g_to_memory(tmp, src);
303
- tcg_gen_qemu_st_i64(tmp, addr, ctx->mem_idx, MO_LEQ | UNALIGN(ctx));
304
+ tcg_gen_qemu_st_i64(tmp, addr, ctx->mem_idx, MO_LEUQ | UNALIGN(ctx));
305
tcg_temp_free(tmp);
306
}
307
308
@@ -XXX,XX +XXX,XX @@ static void gen_sts(DisasContext *ctx, TCGv src, TCGv addr)
309
310
static void gen_stt(DisasContext *ctx, TCGv src, TCGv addr)
311
{
312
- tcg_gen_qemu_st_i64(src, addr, ctx->mem_idx, MO_LEQ | UNALIGN(ctx));
313
+ tcg_gen_qemu_st_i64(src, addr, ctx->mem_idx, MO_LEUQ | UNALIGN(ctx));
314
}
315
316
static void gen_store_fp(DisasContext *ctx, int ra, int rb, int32_t disp16,
317
@@ -XXX,XX +XXX,XX @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
318
break;
319
case 0x0B:
320
/* LDQ_U */
321
- gen_load_int(ctx, ra, rb, disp16, MO_LEQ, 1, 0);
322
+ gen_load_int(ctx, ra, rb, disp16, MO_LEUQ, 1, 0);
323
break;
324
case 0x0C:
325
/* LDWU */
326
@@ -XXX,XX +XXX,XX @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
327
break;
328
case 0x0F:
329
/* STQ_U */
330
- gen_store_int(ctx, ra, rb, disp16, MO_LEQ, 1);
331
+ gen_store_int(ctx, ra, rb, disp16, MO_LEUQ, 1);
332
break;
333
334
case 0x10:
335
@@ -XXX,XX +XXX,XX @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
336
break;
337
case 0x1:
338
/* Quadword physical access (hw_ldq/p) */
339
- tcg_gen_qemu_ld_i64(va, addr, MMU_PHYS_IDX, MO_LEQ);
340
+ tcg_gen_qemu_ld_i64(va, addr, MMU_PHYS_IDX, MO_LEUQ);
341
break;
342
case 0x2:
343
/* Longword physical access with lock (hw_ldl_l/p) */
344
@@ -XXX,XX +XXX,XX @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
345
break;
346
case 0x3:
347
/* Quadword physical access with lock (hw_ldq_l/p) */
348
- tcg_gen_qemu_ld_i64(va, addr, MMU_PHYS_IDX, MO_LEQ);
349
+ tcg_gen_qemu_ld_i64(va, addr, MMU_PHYS_IDX, MO_LEUQ);
350
tcg_gen_mov_i64(cpu_lock_addr, addr);
351
tcg_gen_mov_i64(cpu_lock_value, va);
352
break;
353
@@ -XXX,XX +XXX,XX @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
354
break;
355
case 0xB:
356
/* Quadword virtual access with protection check (hw_ldq/w) */
357
- tcg_gen_qemu_ld_i64(va, addr, MMU_KERNEL_IDX, MO_LEQ);
358
+ tcg_gen_qemu_ld_i64(va, addr, MMU_KERNEL_IDX, MO_LEUQ);
359
break;
360
case 0xC:
361
/* Longword virtual access with alt access mode (hw_ldl/a)*/
362
@@ -XXX,XX +XXX,XX @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
363
case 0xF:
364
/* Quadword virtual access with alternate access mode and
365
protection checks (hw_ldq/wa) */
366
- tcg_gen_qemu_ld_i64(va, addr, MMU_USER_IDX, MO_LEQ);
367
+ tcg_gen_qemu_ld_i64(va, addr, MMU_USER_IDX, MO_LEUQ);
368
break;
369
}
370
tcg_temp_free(addr);
371
@@ -XXX,XX +XXX,XX @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
372
vb = load_gpr(ctx, rb);
373
tmp = tcg_temp_new();
374
tcg_gen_addi_i64(tmp, vb, disp12);
375
- tcg_gen_qemu_st_i64(va, tmp, MMU_PHYS_IDX, MO_LEQ);
376
+ tcg_gen_qemu_st_i64(va, tmp, MMU_PHYS_IDX, MO_LEUQ);
377
tcg_temp_free(tmp);
378
break;
379
case 0x2:
380
@@ -XXX,XX +XXX,XX @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
381
case 0x3:
382
/* Quadword physical access with lock */
383
ret = gen_store_conditional(ctx, ra, rb, disp12,
384
- MMU_PHYS_IDX, MO_LEQ);
385
+ MMU_PHYS_IDX, MO_LEUQ);
386
break;
387
case 0x4:
388
/* Longword virtual access */
389
@@ -XXX,XX +XXX,XX @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
390
break;
391
case 0x29:
392
/* LDQ */
393
- gen_load_int(ctx, ra, rb, disp16, MO_LEQ, 0, 0);
394
+ gen_load_int(ctx, ra, rb, disp16, MO_LEUQ, 0, 0);
395
break;
396
case 0x2A:
397
/* LDL_L */
398
@@ -XXX,XX +XXX,XX @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
399
break;
400
case 0x2B:
401
/* LDQ_L */
402
- gen_load_int(ctx, ra, rb, disp16, MO_LEQ, 0, 1);
403
+ gen_load_int(ctx, ra, rb, disp16, MO_LEUQ, 0, 1);
404
break;
405
case 0x2C:
406
/* STL */
407
@@ -XXX,XX +XXX,XX @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
408
break;
409
case 0x2D:
410
/* STQ */
411
- gen_store_int(ctx, ra, rb, disp16, MO_LEQ, 0);
412
+ gen_store_int(ctx, ra, rb, disp16, MO_LEUQ, 0);
413
break;
414
case 0x2E:
415
/* STL_C */
416
@@ -XXX,XX +XXX,XX @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
417
case 0x2F:
418
/* STQ_C */
419
ret = gen_store_conditional(ctx, ra, rb, disp16,
420
- ctx->mem_idx, MO_LEQ);
421
+ ctx->mem_idx, MO_LEUQ);
422
break;
423
case 0x30:
424
/* BR */
425
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
426
index XXXXXXX..XXXXXXX 100644
427
--- a/target/arm/helper-a64.c
428
+++ b/target/arm/helper-a64.c
429
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(paired_cmpxchg64_le)(CPUARMState *env, uint64_t addr,
430
uint64_t o0, o1;
431
bool success;
432
int mem_idx = cpu_mmu_index(env, false);
433
- MemOpIdx oi0 = make_memop_idx(MO_LEQ | MO_ALIGN_16, mem_idx);
434
- MemOpIdx oi1 = make_memop_idx(MO_LEQ, mem_idx);
435
+ MemOpIdx oi0 = make_memop_idx(MO_LEUQ | MO_ALIGN_16, mem_idx);
436
+ MemOpIdx oi1 = make_memop_idx(MO_LEUQ, mem_idx);
437
438
o0 = cpu_ldq_le_mmu(env, addr + 0, oi0, ra);
439
o1 = cpu_ldq_le_mmu(env, addr + 8, oi1, ra);
440
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(paired_cmpxchg64_be)(CPUARMState *env, uint64_t addr,
441
uint64_t o0, o1;
442
bool success;
443
int mem_idx = cpu_mmu_index(env, false);
444
- MemOpIdx oi0 = make_memop_idx(MO_BEQ | MO_ALIGN_16, mem_idx);
445
- MemOpIdx oi1 = make_memop_idx(MO_BEQ, mem_idx);
446
+ MemOpIdx oi0 = make_memop_idx(MO_BEUQ | MO_ALIGN_16, mem_idx);
447
+ MemOpIdx oi1 = make_memop_idx(MO_BEUQ, mem_idx);
448
449
o1 = cpu_ldq_be_mmu(env, addr + 0, oi0, ra);
450
o0 = cpu_ldq_be_mmu(env, addr + 8, oi1, ra);
451
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
452
index XXXXXXX..XXXXXXX 100644
453
--- a/target/arm/translate-a64.c
454
+++ b/target/arm/translate-a64.c
455
@@ -XXX,XX +XXX,XX @@ static void do_fp_st(DisasContext *s, int srcidx, TCGv_i64 tcg_addr, int size)
456
457
tcg_gen_ld_i64(tmphi, cpu_env, fp_reg_hi_offset(s, srcidx));
458
459
- mop = s->be_data | MO_Q;
460
+ mop = s->be_data | MO_UQ;
461
tcg_gen_qemu_st_i64(be ? tmphi : tmplo, tcg_addr, get_mem_index(s),
462
mop | (s->align_mem ? MO_ALIGN_16 : 0));
463
tcg_gen_addi_i64(tcg_hiaddr, tcg_addr, 8);
464
@@ -XXX,XX +XXX,XX @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
465
tmphi = tcg_temp_new_i64();
466
tcg_hiaddr = tcg_temp_new_i64();
467
468
- mop = s->be_data | MO_Q;
469
+ mop = s->be_data | MO_UQ;
470
tcg_gen_qemu_ld_i64(be ? tmphi : tmplo, tcg_addr, get_mem_index(s),
471
mop | (s->align_mem ? MO_ALIGN_16 : 0));
472
tcg_gen_addi_i64(tcg_hiaddr, tcg_addr, 8);
473
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_tag(DisasContext *s, uint32_t insn)
474
int i, n = (1 + is_pair) << LOG2_TAG_GRANULE;
475
476
tcg_gen_qemu_st_i64(tcg_zero, clean_addr, mem_index,
477
- MO_Q | MO_ALIGN_16);
478
+ MO_UQ | MO_ALIGN_16);
479
for (i = 8; i < n; i += 8) {
480
tcg_gen_addi_i64(clean_addr, clean_addr, 8);
481
- tcg_gen_qemu_st_i64(tcg_zero, clean_addr, mem_index, MO_Q);
482
+ tcg_gen_qemu_st_i64(tcg_zero, clean_addr, mem_index, MO_UQ);
483
}
484
tcg_temp_free_i64(tcg_zero);
485
}
486
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
487
index XXXXXXX..XXXXXXX 100644
488
--- a/target/arm/translate-neon.c
489
+++ b/target/arm/translate-neon.c
490
@@ -XXX,XX +XXX,XX @@ static void neon_load_element64(TCGv_i64 var, int reg, int ele, MemOp mop)
491
case MO_UL:
492
tcg_gen_ld32u_i64(var, cpu_env, offset);
493
break;
494
- case MO_Q:
495
+ case MO_UQ:
496
tcg_gen_ld_i64(var, cpu_env, offset);
497
break;
498
default:
499
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
500
return false;
501
}
502
503
- if ((a->vd & 1) || (src1_mop == MO_Q && (a->vn & 1))) {
504
+ if ((a->vd & 1) || (src1_mop == MO_UQ && (a->vn & 1))) {
505
return false;
506
}
507
508
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
509
}; \
510
int narrow_mop = a->size == MO_32 ? MO_32 | SIGN : -1; \
511
return do_prewiden_3d(s, a, widenfn[a->size], addfn[a->size], \
512
- SRC1WIDE ? MO_Q : narrow_mop, \
513
+ SRC1WIDE ? MO_UQ : narrow_mop, \
514
narrow_mop); \
515
}
516
517
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
518
index XXXXXXX..XXXXXXX 100644
519
--- a/target/arm/translate-sve.c
520
+++ b/target/arm/translate-sve.c
521
@@ -XXX,XX +XXX,XX @@ static void do_ldr(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
522
523
t0 = tcg_temp_new_i64();
524
for (i = 0; i < len_align; i += 8) {
525
- tcg_gen_qemu_ld_i64(t0, clean_addr, midx, MO_LEQ);
526
+ tcg_gen_qemu_ld_i64(t0, clean_addr, midx, MO_LEUQ);
527
tcg_gen_st_i64(t0, cpu_env, vofs + i);
528
tcg_gen_addi_i64(clean_addr, clean_addr, 8);
529
}
530
@@ -XXX,XX +XXX,XX @@ static void do_ldr(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
531
gen_set_label(loop);
532
533
t0 = tcg_temp_new_i64();
534
- tcg_gen_qemu_ld_i64(t0, clean_addr, midx, MO_LEQ);
535
+ tcg_gen_qemu_ld_i64(t0, clean_addr, midx, MO_LEUQ);
536
tcg_gen_addi_i64(clean_addr, clean_addr, 8);
537
538
tp = tcg_temp_new_ptr();
539
@@ -XXX,XX +XXX,XX @@ static void do_str(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
540
t0 = tcg_temp_new_i64();
541
for (i = 0; i < len_align; i += 8) {
542
tcg_gen_ld_i64(t0, cpu_env, vofs + i);
543
- tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEQ);
544
+ tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUQ);
545
tcg_gen_addi_i64(clean_addr, clean_addr, 8);
546
}
547
tcg_temp_free_i64(t0);
548
@@ -XXX,XX +XXX,XX @@ static void do_str(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
549
tcg_gen_addi_ptr(i, i, 8);
550
tcg_temp_free_ptr(tp);
551
552
- tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEQ);
553
+ tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUQ);
554
tcg_gen_addi_i64(clean_addr, clean_addr, 8);
555
tcg_temp_free_i64(t0);
556
557
@@ -XXX,XX +XXX,XX @@ static const MemOp dtype_mop[16] = {
558
MO_UB, MO_UB, MO_UB, MO_UB,
559
MO_SL, MO_UW, MO_UW, MO_UW,
560
MO_SW, MO_SW, MO_UL, MO_UL,
561
- MO_SB, MO_SB, MO_SB, MO_Q
562
+ MO_SB, MO_SB, MO_SB, MO_UQ
563
};
564
565
#define dtype_msz(x) (dtype_mop[x] & MO_SIZE)
566
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
567
index XXXXXXX..XXXXXXX 100644
568
--- a/target/arm/translate-vfp.c
569
+++ b/target/arm/translate-vfp.c
570
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_dp(DisasContext *s, arg_VLDR_VSTR_dp *a)
571
addr = add_reg_for_lit(s, a->rn, offset);
572
tmp = tcg_temp_new_i64();
573
if (a->l) {
574
- gen_aa32_ld_i64(s, tmp, addr, get_mem_index(s), MO_Q | MO_ALIGN_4);
575
+ gen_aa32_ld_i64(s, tmp, addr, get_mem_index(s), MO_UQ | MO_ALIGN_4);
576
vfp_store_reg64(tmp, a->vd);
577
} else {
578
vfp_load_reg64(tmp, a->vd);
579
- gen_aa32_st_i64(s, tmp, addr, get_mem_index(s), MO_Q | MO_ALIGN_4);
580
+ gen_aa32_st_i64(s, tmp, addr, get_mem_index(s), MO_UQ | MO_ALIGN_4);
581
}
582
tcg_temp_free_i64(tmp);
583
tcg_temp_free_i32(addr);
584
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_dp(DisasContext *s, arg_VLDM_VSTM_dp *a)
585
for (i = 0; i < n; i++) {
586
if (a->l) {
587
/* load */
588
- gen_aa32_ld_i64(s, tmp, addr, get_mem_index(s), MO_Q | MO_ALIGN_4);
589
+ gen_aa32_ld_i64(s, tmp, addr, get_mem_index(s), MO_UQ | MO_ALIGN_4);
590
vfp_store_reg64(tmp, a->vd + i);
591
} else {
592
/* store */
593
vfp_load_reg64(tmp, a->vd + i);
594
- gen_aa32_st_i64(s, tmp, addr, get_mem_index(s), MO_Q | MO_ALIGN_4);
595
+ gen_aa32_st_i64(s, tmp, addr, get_mem_index(s), MO_UQ | MO_ALIGN_4);
596
}
597
tcg_gen_addi_i32(addr, addr, offset);
598
}
599
diff --git a/target/arm/translate.c b/target/arm/translate.c
600
index XXXXXXX..XXXXXXX 100644
601
--- a/target/arm/translate.c
602
+++ b/target/arm/translate.c
603
@@ -XXX,XX +XXX,XX @@ void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop)
604
case MO_UL:
605
tcg_gen_ld32u_i64(dest, cpu_env, off);
606
break;
607
- case MO_Q:
608
+ case MO_UQ:
609
tcg_gen_ld_i64(dest, cpu_env, off);
610
break;
611
default:
612
diff --git a/target/cris/translate.c b/target/cris/translate.c
613
index XXXXXXX..XXXXXXX 100644
614
--- a/target/cris/translate.c
615
+++ b/target/cris/translate.c
616
@@ -XXX,XX +XXX,XX @@ static void gen_load64(DisasContext *dc, TCGv_i64 dst, TCGv addr)
617
cris_store_direct_jmp(dc);
618
}
619
620
- tcg_gen_qemu_ld_i64(dst, addr, mem_index, MO_TEQ);
621
+ tcg_gen_qemu_ld_i64(dst, addr, mem_index, MO_TEUQ);
622
}
623
624
static void gen_load(DisasContext *dc, TCGv dst, TCGv addr,
625
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
626
index XXXXXXX..XXXXXXX 100644
627
--- a/target/hppa/translate.c
628
+++ b/target/hppa/translate.c
629
@@ -XXX,XX +XXX,XX @@ static bool do_floadd(DisasContext *ctx, unsigned rt, unsigned rb,
630
nullify_over(ctx);
631
632
tmp = tcg_temp_new_i64();
633
- do_load_64(ctx, tmp, rb, rx, scale, disp, sp, modify, MO_TEQ);
634
+ do_load_64(ctx, tmp, rb, rx, scale, disp, sp, modify, MO_TEUQ);
635
save_frd(rt, tmp);
636
tcg_temp_free_i64(tmp);
637
638
@@ -XXX,XX +XXX,XX @@ static bool do_fstored(DisasContext *ctx, unsigned rt, unsigned rb,
639
nullify_over(ctx);
640
641
tmp = load_frd(rt);
642
- do_store_64(ctx, tmp, rb, rx, scale, disp, sp, modify, MO_TEQ);
643
+ do_store_64(ctx, tmp, rb, rx, scale, disp, sp, modify, MO_TEUQ);
644
tcg_temp_free_i64(tmp);
645
646
return nullify_end(ctx);
647
diff --git a/target/i386/tcg/mem_helper.c b/target/i386/tcg/mem_helper.c
648
index XXXXXXX..XXXXXXX 100644
649
--- a/target/i386/tcg/mem_helper.c
650
+++ b/target/i386/tcg/mem_helper.c
651
@@ -XXX,XX +XXX,XX @@ void helper_cmpxchg8b(CPUX86State *env, target_ulong a0)
652
{
653
uintptr_t ra = GETPC();
654
int mem_idx = cpu_mmu_index(env, false);
655
- MemOpIdx oi = make_memop_idx(MO_TEQ, mem_idx);
656
+ MemOpIdx oi = make_memop_idx(MO_TEUQ, mem_idx);
657
oldv = cpu_atomic_cmpxchgq_le_mmu(env, a0, cmpv, newv, oi, ra);
658
}
659
660
diff --git a/target/i386/tcg/translate.c b/target/i386/tcg/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/translate.c
+++ b/target/i386/tcg/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_jmp(DisasContext *s, target_ulong eip)

static inline void gen_ldq_env_A0(DisasContext *s, int offset)
{
- tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0, s->mem_index, MO_LEQ);
+ tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0, s->mem_index, MO_LEUQ);
tcg_gen_st_i64(s->tmp1_i64, cpu_env, offset);
}

static inline void gen_stq_env_A0(DisasContext *s, int offset)
{
tcg_gen_ld_i64(s->tmp1_i64, cpu_env, offset);
- tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0, s->mem_index, MO_LEQ);
+ tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0, s->mem_index, MO_LEUQ);
}

static inline void gen_ldo_env_A0(DisasContext *s, int offset)
{
int mem_index = s->mem_index;
- tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0, mem_index, MO_LEQ);
+ tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0, mem_index, MO_LEUQ);
tcg_gen_st_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(0)));
tcg_gen_addi_tl(s->tmp0, s->A0, 8);
- tcg_gen_qemu_ld_i64(s->tmp1_i64, s->tmp0, mem_index, MO_LEQ);
+ tcg_gen_qemu_ld_i64(s->tmp1_i64, s->tmp0, mem_index, MO_LEUQ);
tcg_gen_st_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(1)));
}

@@ -XXX,XX +XXX,XX @@ static inline void gen_sto_env_A0(DisasContext *s, int offset)
{
int mem_index = s->mem_index;
tcg_gen_ld_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(0)));
- tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0, mem_index, MO_LEQ);
+ tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0, mem_index, MO_LEUQ);
tcg_gen_addi_tl(s->tmp0, s->A0, 8);
tcg_gen_ld_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(1)));
- tcg_gen_qemu_st_i64(s->tmp1_i64, s->tmp0, mem_index, MO_LEQ);
+ tcg_gen_qemu_st_i64(s->tmp1_i64, s->tmp0, mem_index, MO_LEUQ);
}

static inline void gen_op_movo(DisasContext *s, int d_offset, int s_offset)
@@ -XXX,XX +XXX,XX @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
tcg_gen_mov_i64(cpu_regs[rm], s->tmp1_i64);
} else {
tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0,
- s->mem_index, MO_LEQ);
+ s->mem_index, MO_LEUQ);
}
#else
goto illegal_op;
@@ -XXX,XX +XXX,XX @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
gen_op_mov_v_reg(s, ot, s->tmp1_i64, rm);
} else {
tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0,
- s->mem_index, MO_LEQ);
+ s->mem_index, MO_LEUQ);
}
tcg_gen_st_i64(s->tmp1_i64, cpu_env,
offsetof(CPUX86State,
@@ -XXX,XX +XXX,XX @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
break;
case 2:
tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0,
- s->mem_index, MO_LEQ);
+ s->mem_index, MO_LEUQ);
gen_helper_fldl_FT0(cpu_env, s->tmp1_i64);
break;
case 3:
@@ -XXX,XX +XXX,XX @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
break;
case 2:
tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0,
- s->mem_index, MO_LEQ);
+ s->mem_index, MO_LEUQ);
gen_helper_fldl_ST0(cpu_env, s->tmp1_i64);
break;
case 3:
@@ -XXX,XX +XXX,XX @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
case 2:
gen_helper_fisttll_ST0(s->tmp1_i64, cpu_env);
tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0,
- s->mem_index, MO_LEQ);
+ s->mem_index, MO_LEUQ);
break;
case 3:
default:
@@ -XXX,XX +XXX,XX @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
case 2:
gen_helper_fstl_ST0(s->tmp1_i64, cpu_env);
tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0,
- s->mem_index, MO_LEQ);
+ s->mem_index, MO_LEUQ);
break;
case 3:
default:
@@ -XXX,XX +XXX,XX @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
break;
case 0x3d: /* fildll */
tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0,
- s->mem_index, MO_LEQ);
+ s->mem_index, MO_LEUQ);
gen_helper_fildll_ST0(cpu_env, s->tmp1_i64);
break;
case 0x3f: /* fistpll */
gen_helper_fistll_ST0(s->tmp1_i64, cpu_env);
tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0,
- s->mem_index, MO_LEQ);
+ s->mem_index, MO_LEUQ);
gen_helper_fpop(cpu_env);
break;
default:
@@ -XXX,XX +XXX,XX @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
gen_lea_modrm(env, s, modrm);
if (CODE64(s)) {
tcg_gen_qemu_ld_i64(cpu_bndl[reg], s->A0,
- s->mem_index, MO_LEQ);
+ s->mem_index, MO_LEUQ);
tcg_gen_addi_tl(s->A0, s->A0, 8);
tcg_gen_qemu_ld_i64(cpu_bndu[reg], s->A0,
- s->mem_index, MO_LEQ);
+ s->mem_index, MO_LEUQ);
} else {
tcg_gen_qemu_ld_i64(cpu_bndl[reg], s->A0,
s->mem_index, MO_LEUL);
@@ -XXX,XX +XXX,XX @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
gen_lea_modrm(env, s, modrm);
if (CODE64(s)) {
tcg_gen_qemu_st_i64(cpu_bndl[reg], s->A0,
- s->mem_index, MO_LEQ);
+ s->mem_index, MO_LEUQ);
tcg_gen_addi_tl(s->A0, s->A0, 8);
tcg_gen_qemu_st_i64(cpu_bndu[reg], s->A0,
- s->mem_index, MO_LEQ);
+ s->mem_index, MO_LEUQ);
} else {
tcg_gen_qemu_st_i64(cpu_bndl[reg], s->A0,
s->mem_index, MO_LEUL);
diff --git a/target/m68k/op_helper.c b/target/m68k/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/op_helper.c
+++ b/target/m68k/op_helper.c
@@ -XXX,XX +XXX,XX @@ static void do_cas2l(CPUM68KState *env, uint32_t regs, uint32_t a1, uint32_t a2,
uintptr_t ra = GETPC();
#if defined(CONFIG_ATOMIC64)
int mmu_idx = cpu_mmu_index(env, 0);
- MemOpIdx oi = make_memop_idx(MO_BEQ, mmu_idx);
+ MemOpIdx oi = make_memop_idx(MO_BEUQ, mmu_idx);
#endif

if (parallel) {
diff --git a/target/mips/tcg/translate.c b/target/mips/tcg/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/tcg/translate.c
+++ b/target/mips/tcg/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_ld(DisasContext *ctx, uint32_t opc,
gen_store_gpr(t0, rt);
break;
case OPC_LD:
- tcg_gen_qemu_ld_tl(t0, t0, mem_idx, MO_TEQ |
+ tcg_gen_qemu_ld_tl(t0, t0, mem_idx, MO_TEUQ |
ctx->default_tcg_memop_mask);
gen_store_gpr(t0, rt);
break;
@@ -XXX,XX +XXX,XX @@ static void gen_ld(DisasContext *ctx, uint32_t opc,
}
tcg_gen_shli_tl(t1, t1, 3);
tcg_gen_andi_tl(t0, t0, ~7);
- tcg_gen_qemu_ld_tl(t0, t0, mem_idx, MO_TEQ);
+ tcg_gen_qemu_ld_tl(t0, t0, mem_idx, MO_TEUQ);
tcg_gen_shl_tl(t0, t0, t1);
t2 = tcg_const_tl(-1);
tcg_gen_shl_tl(t2, t2, t1);
@@ -XXX,XX +XXX,XX @@ static void gen_ld(DisasContext *ctx, uint32_t opc,
}
tcg_gen_shli_tl(t1, t1, 3);
tcg_gen_andi_tl(t0, t0, ~7);
- tcg_gen_qemu_ld_tl(t0, t0, mem_idx, MO_TEQ);
+ tcg_gen_qemu_ld_tl(t0, t0, mem_idx, MO_TEUQ);
tcg_gen_shr_tl(t0, t0, t1);
tcg_gen_xori_tl(t1, t1, 63);
t2 = tcg_const_tl(0xfffffffffffffffeull);
@@ -XXX,XX +XXX,XX @@ static void gen_ld(DisasContext *ctx, uint32_t opc,
t1 = tcg_const_tl(pc_relative_pc(ctx));
gen_op_addr_add(ctx, t0, t0, t1);
tcg_temp_free(t1);
- tcg_gen_qemu_ld_tl(t0, t0, mem_idx, MO_TEQ);
+ tcg_gen_qemu_ld_tl(t0, t0, mem_idx, MO_TEUQ);
gen_store_gpr(t0, rt);
break;
#endif
@@ -XXX,XX +XXX,XX @@ static void gen_st(DisasContext *ctx, uint32_t opc, int rt,
switch (opc) {
#if defined(TARGET_MIPS64)
case OPC_SD:
- tcg_gen_qemu_st_tl(t1, t0, mem_idx, MO_TEQ |
+ tcg_gen_qemu_st_tl(t1, t0, mem_idx, MO_TEUQ |
ctx->default_tcg_memop_mask);
break;
case OPC_SDL:
@@ -XXX,XX +XXX,XX @@ static void gen_flt_ldst(DisasContext *ctx, uint32_t opc, int ft,
case OPC_LDC1:
{
TCGv_i64 fp0 = tcg_temp_new_i64();
- tcg_gen_qemu_ld_i64(fp0, t0, ctx->mem_idx, MO_TEQ |
+ tcg_gen_qemu_ld_i64(fp0, t0, ctx->mem_idx, MO_TEUQ |
ctx->default_tcg_memop_mask);
gen_store_fpr64(ctx, fp0, ft);
tcg_temp_free_i64(fp0);
@@ -XXX,XX +XXX,XX @@ static void gen_flt_ldst(DisasContext *ctx, uint32_t opc, int ft,
{
TCGv_i64 fp0 = tcg_temp_new_i64();
gen_load_fpr64(ctx, fp0, ft);
- tcg_gen_qemu_st_i64(fp0, t0, ctx->mem_idx, MO_TEQ |
+ tcg_gen_qemu_st_i64(fp0, t0, ctx->mem_idx, MO_TEUQ |
ctx->default_tcg_memop_mask);
tcg_temp_free_i64(fp0);
}
@@ -XXX,XX +XXX,XX @@ static inline void gen_pcrel(DisasContext *ctx, int opc, target_ulong pc,
check_mips_64(ctx);
offset = sextract32(ctx->opcode << 3, 0, 21);
addr = addr_add(ctx, (pc & ~0x7), offset);
- gen_r6_ld(addr, rs, ctx->mem_idx, MO_TEQ);
+ gen_r6_ld(addr, rs, ctx->mem_idx, MO_TEUQ);
break;
#endif
default:
@@ -XXX,XX +XXX,XX @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt,
case OPC_GSLQ:
t1 = tcg_temp_new();
gen_base_offset_addr(ctx, t0, rs, lsq_offset);
- tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TEQ |
+ tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TEUQ |
ctx->default_tcg_memop_mask);
gen_base_offset_addr(ctx, t0, rs, lsq_offset + 8);
- tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_TEQ |
+ tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_TEUQ |
ctx->default_tcg_memop_mask);
gen_store_gpr(t1, rt);
gen_store_gpr(t0, lsq_rt1);
@@ -XXX,XX +XXX,XX @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt,
check_cp1_enabled(ctx);
t1 = tcg_temp_new();
gen_base_offset_addr(ctx, t0, rs, lsq_offset);
- tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TEQ |
+ tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TEUQ |
ctx->default_tcg_memop_mask);
gen_base_offset_addr(ctx, t0, rs, lsq_offset + 8);
- tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_TEQ |
+ tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_TEUQ |
ctx->default_tcg_memop_mask);
gen_store_fpr64(ctx, t1, rt);
gen_store_fpr64(ctx, t0, lsq_rt1);
@@ -XXX,XX +XXX,XX @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt,
t1 = tcg_temp_new();
gen_base_offset_addr(ctx, t0, rs, lsq_offset);
gen_load_gpr(t1, rt);
- tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEQ |
+ tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUQ |
ctx->default_tcg_memop_mask);
gen_base_offset_addr(ctx, t0, rs, lsq_offset + 8);
gen_load_gpr(t1, lsq_rt1);
- tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEQ |
+ tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUQ |
ctx->default_tcg_memop_mask);
tcg_temp_free(t1);
break;
@@ -XXX,XX +XXX,XX @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt,
t1 = tcg_temp_new();
gen_base_offset_addr(ctx, t0, rs, lsq_offset);
gen_load_fpr64(ctx, t1, rt);
- tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEQ |
+ tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUQ |
ctx->default_tcg_memop_mask);
gen_base_offset_addr(ctx, t0, rs, lsq_offset + 8);
gen_load_fpr64(ctx, t1, lsq_rt1);
- tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEQ |
+ tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUQ |
ctx->default_tcg_memop_mask);
tcg_temp_free(t1);
break;
@@ -XXX,XX +XXX,XX @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt,
}
tcg_gen_shli_tl(t1, t1, 3);
tcg_gen_andi_tl(t0, t0, ~7);
- tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_TEUQ);
tcg_gen_shl_tl(t0, t0, t1);
t2 = tcg_const_tl(-1);
tcg_gen_shl_tl(t2, t2, t1);
@@ -XXX,XX +XXX,XX @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt,
}
tcg_gen_shli_tl(t1, t1, 3);
tcg_gen_andi_tl(t0, t0, ~7);
- tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_TEUQ);
tcg_gen_shr_tl(t0, t0, t1);
tcg_gen_xori_tl(t1, t1, 63);
t2 = tcg_const_tl(0xfffffffffffffffeull);
@@ -XXX,XX +XXX,XX @@ static void gen_loongson_lsdc2(DisasContext *ctx, int rt,
if (rd) {
gen_op_addr_add(ctx, t0, cpu_gpr[rd], t0);
}
- tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_TEQ |
+ tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_TEUQ |
ctx->default_tcg_memop_mask);
gen_store_gpr(t0, rt);
break;
@@ -XXX,XX +XXX,XX @@ static void gen_loongson_lsdc2(DisasContext *ctx, int rt,
if (rd) {
gen_op_addr_add(ctx, t0, cpu_gpr[rd], t0);
}
- tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_TEQ |
+ tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_TEUQ |
ctx->default_tcg_memop_mask);
gen_store_fpr64(ctx, t0, rt);
break;
@@ -XXX,XX +XXX,XX @@ static void gen_loongson_lsdc2(DisasContext *ctx, int rt,
case OPC_GSSDX:
t1 = tcg_temp_new();
gen_load_gpr(t1, rt);
- tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEQ |
+ tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUQ |
ctx->default_tcg_memop_mask);
tcg_temp_free(t1);
break;
@@ -XXX,XX +XXX,XX @@ static void gen_loongson_lsdc2(DisasContext *ctx, int rt,
case OPC_GSSDXC1:
t1 = tcg_temp_new();
gen_load_fpr64(ctx, t1, rt);
- tcg_gen_qemu_st_i64(t1, t0, ctx->mem_idx, MO_TEQ |
+ tcg_gen_qemu_st_i64(t1, t0, ctx->mem_idx, MO_TEUQ |
ctx->default_tcg_memop_mask);
tcg_temp_free(t1);
break;
@@ -XXX,XX +XXX,XX @@ static void gen_flt3_ldst(DisasContext *ctx, uint32_t opc,
check_cp1_registers(ctx, fd);
{
TCGv_i64 fp0 = tcg_temp_new_i64();
- tcg_gen_qemu_ld_i64(fp0, t0, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_ld_i64(fp0, t0, ctx->mem_idx, MO_TEUQ);
gen_store_fpr64(ctx, fp0, fd);
tcg_temp_free_i64(fp0);
}
@@ -XXX,XX +XXX,XX @@ static void gen_flt3_ldst(DisasContext *ctx, uint32_t opc,
{
TCGv_i64 fp0 = tcg_temp_new_i64();

- tcg_gen_qemu_ld_i64(fp0, t0, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_ld_i64(fp0, t0, ctx->mem_idx, MO_TEUQ);
gen_store_fpr64(ctx, fp0, fd);
tcg_temp_free_i64(fp0);
}
@@ -XXX,XX +XXX,XX @@ static void gen_flt3_ldst(DisasContext *ctx, uint32_t opc,
{
TCGv_i64 fp0 = tcg_temp_new_i64();
gen_load_fpr64(ctx, fp0, fs);
- tcg_gen_qemu_st_i64(fp0, t0, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_st_i64(fp0, t0, ctx->mem_idx, MO_TEUQ);
tcg_temp_free_i64(fp0);
}
break;
@@ -XXX,XX +XXX,XX @@ static void gen_flt3_ldst(DisasContext *ctx, uint32_t opc,
{
TCGv_i64 fp0 = tcg_temp_new_i64();
gen_load_fpr64(ctx, fp0, fs);
- tcg_gen_qemu_st_i64(fp0, t0, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_st_i64(fp0, t0, ctx->mem_idx, MO_TEUQ);
tcg_temp_free_i64(fp0);
}
break;
@@ -XXX,XX +XXX,XX @@ static void gen_mipsdsp_ld(DisasContext *ctx, uint32_t opc,
break;
#if defined(TARGET_MIPS64)
case OPC_LDX:
- tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_TEUQ);
gen_store_gpr(t0, rd);
break;
#endif
@@ -XXX,XX +XXX,XX @@ static void decode_opc_special3_r6(CPUMIPSState *env, DisasContext *ctx)
#endif
#if defined(TARGET_MIPS64)
case R6_OPC_SCD:
- gen_st_cond(ctx, rt, rs, imm, MO_TEQ, false);
+ gen_st_cond(ctx, rt, rs, imm, MO_TEUQ, false);
break;
case R6_OPC_LLD:
gen_ld(ctx, op1, rt, rs, imm);
@@ -XXX,XX +XXX,XX @@ static bool decode_opc_legacy(CPUMIPSState *env, DisasContext *ctx)
check_insn_opc_user_only(ctx, INSN_R5900);
}
check_mips_64(ctx);
- gen_st_cond(ctx, rt, rs, imm, MO_TEQ, false);
+ gen_st_cond(ctx, rt, rs, imm, MO_TEUQ, false);
break;
case OPC_BNVC: /* OPC_BNEZALC, OPC_BNEC, OPC_DADDI */
if (ctx->insn_flags & ISA_MIPS_R6) {
diff --git a/target/mips/tcg/tx79_translate.c b/target/mips/tcg/tx79_translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/tcg/tx79_translate.c
+++ b/target/mips/tcg/tx79_translate.c
@@ -XXX,XX +XXX,XX @@ static bool trans_LQ(DisasContext *ctx, arg_i *a)
tcg_gen_andi_tl(addr, addr, ~0xf);

/* Lower half */
- tcg_gen_qemu_ld_i64(t0, addr, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_ld_i64(t0, addr, ctx->mem_idx, MO_TEUQ);
gen_store_gpr(t0, a->rt);

/* Upper half */
tcg_gen_addi_i64(addr, addr, 8);
- tcg_gen_qemu_ld_i64(t0, addr, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_ld_i64(t0, addr, ctx->mem_idx, MO_TEUQ);
gen_store_gpr_hi(t0, a->rt);

tcg_temp_free(t0);
@@ -XXX,XX +XXX,XX @@ static bool trans_SQ(DisasContext *ctx, arg_i *a)

/* Lower half */
gen_load_gpr(t0, a->rt);
- tcg_gen_qemu_st_i64(t0, addr, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_st_i64(t0, addr, ctx->mem_idx, MO_TEUQ);

/* Upper half */
tcg_gen_addi_i64(addr, addr, 8);
gen_load_gpr_hi(t0, a->rt);
- tcg_gen_qemu_st_i64(t0, addr, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_st_i64(t0, addr, ctx->mem_idx, MO_TEUQ);

tcg_temp_free(addr);
tcg_temp_free(t0);
diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -XXX,XX +XXX,XX @@ GEN_QEMU_LOAD_64(ld8u, DEF_MEMOP(MO_UB))
GEN_QEMU_LOAD_64(ld16u, DEF_MEMOP(MO_UW))
GEN_QEMU_LOAD_64(ld32u, DEF_MEMOP(MO_UL))
GEN_QEMU_LOAD_64(ld32s, DEF_MEMOP(MO_SL))
-GEN_QEMU_LOAD_64(ld64, DEF_MEMOP(MO_Q))
+GEN_QEMU_LOAD_64(ld64, DEF_MEMOP(MO_UQ))

#if defined(TARGET_PPC64)
-GEN_QEMU_LOAD_64(ld64ur, BSWAP_MEMOP(MO_Q))
+GEN_QEMU_LOAD_64(ld64ur, BSWAP_MEMOP(MO_UQ))
#endif

#define GEN_QEMU_STORE_TL(stop, op) \
@@ -XXX,XX +XXX,XX @@ static void glue(gen_qemu_, glue(stop, _i64))(DisasContext *ctx, \
GEN_QEMU_STORE_64(st8, DEF_MEMOP(MO_UB))
GEN_QEMU_STORE_64(st16, DEF_MEMOP(MO_UW))
GEN_QEMU_STORE_64(st32, DEF_MEMOP(MO_UL))
-GEN_QEMU_STORE_64(st64, DEF_MEMOP(MO_Q))
+GEN_QEMU_STORE_64(st64, DEF_MEMOP(MO_UQ))

#if defined(TARGET_PPC64)
-GEN_QEMU_STORE_64(st64r, BSWAP_MEMOP(MO_Q))
+GEN_QEMU_STORE_64(st64r, BSWAP_MEMOP(MO_UQ))
#endif

#define GEN_LDX_E(name, ldop, opc2, opc3, type, type2, chk) \
@@ -XXX,XX +XXX,XX @@ GEN_LDEPX(lb, DEF_MEMOP(MO_UB), 0x1F, 0x02)
GEN_LDEPX(lh, DEF_MEMOP(MO_UW), 0x1F, 0x08)
GEN_LDEPX(lw, DEF_MEMOP(MO_UL), 0x1F, 0x00)
#if defined(TARGET_PPC64)
-GEN_LDEPX(ld, DEF_MEMOP(MO_Q), 0x1D, 0x00)
+GEN_LDEPX(ld, DEF_MEMOP(MO_UQ), 0x1D, 0x00)
#endif

#if defined(TARGET_PPC64)
@@ -XXX,XX +XXX,XX @@ GEN_STEPX(stb, DEF_MEMOP(MO_UB), 0x1F, 0x06)
GEN_STEPX(sth, DEF_MEMOP(MO_UW), 0x1F, 0x0C)
GEN_STEPX(stw, DEF_MEMOP(MO_UL), 0x1F, 0x04)
#if defined(TARGET_PPC64)
-GEN_STEPX(std, DEF_MEMOP(MO_Q), 0x1d, 0x04)
+GEN_STEPX(std, DEF_MEMOP(MO_UQ), 0x1d, 0x04)
#endif

#if defined(TARGET_PPC64)
@@ -XXX,XX +XXX,XX @@ static void gen_lwat(DisasContext *ctx)
#ifdef TARGET_PPC64
static void gen_ldat(DisasContext *ctx)
{
- gen_ld_atomic(ctx, DEF_MEMOP(MO_Q));
+ gen_ld_atomic(ctx, DEF_MEMOP(MO_UQ));
}
#endif

@@ -XXX,XX +XXX,XX @@ static void gen_stwat(DisasContext *ctx)
#ifdef TARGET_PPC64
static void gen_stdat(DisasContext *ctx)
{
- gen_st_atomic(ctx, DEF_MEMOP(MO_Q));
+ gen_st_atomic(ctx, DEF_MEMOP(MO_UQ));
}
#endif

@@ -XXX,XX +XXX,XX @@ STCX(stwcx_, DEF_MEMOP(MO_UL))

#if defined(TARGET_PPC64)
/* ldarx */
-LARX(ldarx, DEF_MEMOP(MO_Q))
+LARX(ldarx, DEF_MEMOP(MO_UQ))
/* stdcx. */
-STCX(stdcx_, DEF_MEMOP(MO_Q))
+STCX(stdcx_, DEF_MEMOP(MO_UQ))

/* lqarx */
static void gen_lqarx(DisasContext *ctx)
@@ -XXX,XX +XXX,XX @@ static void gen_lqarx(DisasContext *ctx)
return;
}
} else if (ctx->le_mode) {
- tcg_gen_qemu_ld_i64(lo, EA, ctx->mem_idx, MO_LEQ | MO_ALIGN_16);
+ tcg_gen_qemu_ld_i64(lo, EA, ctx->mem_idx, MO_LEUQ | MO_ALIGN_16);
tcg_gen_mov_tl(cpu_reserve, EA);
gen_addr_add(ctx, EA, EA, 8);
- tcg_gen_qemu_ld_i64(hi, EA, ctx->mem_idx, MO_LEQ);
+ tcg_gen_qemu_ld_i64(hi, EA, ctx->mem_idx, MO_LEUQ);
} else {
- tcg_gen_qemu_ld_i64(hi, EA, ctx->mem_idx, MO_BEQ | MO_ALIGN_16);
+ tcg_gen_qemu_ld_i64(hi, EA, ctx->mem_idx, MO_BEUQ | MO_ALIGN_16);
tcg_gen_mov_tl(cpu_reserve, EA);
gen_addr_add(ctx, EA, EA, 8);
- tcg_gen_qemu_ld_i64(lo, EA, ctx->mem_idx, MO_BEQ);
+ tcg_gen_qemu_ld_i64(lo, EA, ctx->mem_idx, MO_BEUQ);
}
tcg_temp_free(EA);

@@ -XXX,XX +XXX,XX @@ GEN_LDEPX(lb, DEF_MEMOP(MO_UB), 0x1F, 0x02)
GEN_LDEPX(lh, DEF_MEMOP(MO_UW), 0x1F, 0x08)
GEN_LDEPX(lw, DEF_MEMOP(MO_UL), 0x1F, 0x00)
#if defined(TARGET_PPC64)
-GEN_LDEPX(ld, DEF_MEMOP(MO_Q), 0x1D, 0x00)
+GEN_LDEPX(ld, DEF_MEMOP(MO_UQ), 0x1D, 0x00)
#endif

#undef GEN_STX_E
@@ -XXX,XX +XXX,XX @@ GEN_STEPX(stb, DEF_MEMOP(MO_UB), 0x1F, 0x06)
GEN_STEPX(sth, DEF_MEMOP(MO_UW), 0x1F, 0x0C)
GEN_STEPX(stw, DEF_MEMOP(MO_UL), 0x1F, 0x04)
#if defined(TARGET_PPC64)
-GEN_STEPX(std, DEF_MEMOP(MO_Q), 0x1D, 0x04)
+GEN_STEPX(std, DEF_MEMOP(MO_UQ), 0x1D, 0x04)
#endif

#undef GEN_CRLOGIC
diff --git a/target/s390x/tcg/mem_helper.c b/target/s390x/tcg/mem_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/tcg/mem_helper.c
+++ b/target/s390x/tcg/mem_helper.c
@@ -XXX,XX +XXX,XX @@ static uint32_t do_csst(CPUS390XState *env, uint32_t r3, uint64_t a1,

if (parallel) {
#ifdef CONFIG_ATOMIC64
- MemOpIdx oi = make_memop_idx(MO_TEQ | MO_ALIGN, mem_idx);
+ MemOpIdx oi = make_memop_idx(MO_TEUQ | MO_ALIGN, mem_idx);
ov = cpu_atomic_cmpxchgq_be_mmu(env, a1, cv, nv, oi, ra);
#else
/* Note that we asserted !parallel above. */
@@ -XXX,XX +XXX,XX @@ static uint32_t do_csst(CPUS390XState *env, uint32_t r3, uint64_t a1,
cpu_stq_data_ra(env, a2 + 0, svh, ra);
cpu_stq_data_ra(env, a2 + 8, svl, ra);
} else if (HAVE_ATOMIC128) {
- MemOpIdx oi = make_memop_idx(MO_TEQ | MO_ALIGN_16, mem_idx);
+ MemOpIdx oi = make_memop_idx(MO_TEUQ | MO_ALIGN_16, mem_idx);
Int128 sv = int128_make128(svl, svh);
cpu_atomic_sto_be_mmu(env, a2, sv, oi, ra);
} else {
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(lpq_parallel)(CPUS390XState *env, uint64_t addr)
assert(HAVE_ATOMIC128);

mem_idx = cpu_mmu_index(env, false);
- oi = make_memop_idx(MO_TEQ | MO_ALIGN_16, mem_idx);
+ oi = make_memop_idx(MO_TEUQ | MO_ALIGN_16, mem_idx);
v = cpu_atomic_ldo_be_mmu(env, addr, oi, ra);
hi = int128_gethi(v);
lo = int128_getlo(v);
@@ -XXX,XX +XXX,XX @@ void HELPER(stpq_parallel)(CPUS390XState *env, uint64_t addr,
assert(HAVE_ATOMIC128);

mem_idx = cpu_mmu_index(env, false);
- oi = make_memop_idx(MO_TEQ | MO_ALIGN_16, mem_idx);
+ oi = make_memop_idx(MO_TEUQ | MO_ALIGN_16, mem_idx);
v = int128_make128(low, high);
cpu_atomic_sto_be_mmu(env, addr, v, oi, ra);
}
diff --git a/target/s390x/tcg/translate.c b/target/s390x/tcg/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/tcg/translate.c
+++ b/target/s390x/tcg/translate.c
@@ -XXX,XX +XXX,XX @@ static DisasJumpType op_lpswe(DisasContext *s, DisasOps *o)
t1 = tcg_temp_new_i64();
t2 = tcg_temp_new_i64();
tcg_gen_qemu_ld_i64(t1, o->in2, get_mem_index(s),
- MO_TEQ | MO_ALIGN_8);
+ MO_TEUQ | MO_ALIGN_8);
tcg_gen_addi_i64(o->in2, o->in2, 8);
tcg_gen_qemu_ld64(t2, o->in2, get_mem_index(s));
gen_helper_load_psw(cpu_env, t1, t2);
@@ -XXX,XX +XXX,XX @@ static DisasJumpType op_stcke(DisasContext *s, DisasOps *o)
#ifndef CONFIG_USER_ONLY
static DisasJumpType op_sck(DisasContext *s, DisasOps *o)
{
- tcg_gen_qemu_ld_i64(o->in1, o->addr1, get_mem_index(s), MO_TEQ | MO_ALIGN);
+ tcg_gen_qemu_ld_i64(o->in1, o->addr1, get_mem_index(s), MO_TEUQ | MO_ALIGN);
gen_helper_sck(cc_op, cpu_env, o->in1);
set_cc_static(s);
return DISAS_NEXT;
@@ -XXX,XX +XXX,XX @@ static void wout_m1_64(DisasContext *s, DisasOps *o)
#ifndef CONFIG_USER_ONLY
static void wout_m1_64a(DisasContext *s, DisasOps *o)
{
- tcg_gen_qemu_st_i64(o->out, o->addr1, get_mem_index(s), MO_TEQ | MO_ALIGN);
+ tcg_gen_qemu_st_i64(o->out, o->addr1, get_mem_index(s), MO_TEUQ | MO_ALIGN);
}
#define SPEC_wout_m1_64a 0
#endif
@@ -XXX,XX +XXX,XX @@ static void in2_m2_64w(DisasContext *s, DisasOps *o)
static void in2_m2_64a(DisasContext *s, DisasOps *o)
{
in2_a2(s, o);
- tcg_gen_qemu_ld_i64(o->in2, o->in2, get_mem_index(s), MO_TEQ | MO_ALIGN);
+ tcg_gen_qemu_ld_i64(o->in2, o->in2, get_mem_index(s), MO_TEUQ | MO_ALIGN);
}
#define SPEC_in2_m2_64a 0
#endif
diff --git a/target/sh4/translate.c b/target/sh4/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sh4/translate.c
+++ b/target/sh4/translate.c
@@ -XXX,XX +XXX,XX @@ static void _decode_opc(DisasContext * ctx)
if (ctx->tbflags & FPSCR_SZ) {
TCGv_i64 fp = tcg_temp_new_i64();
gen_load_fpr64(ctx, fp, XHACK(B7_4));
- tcg_gen_qemu_st_i64(fp, REG(B11_8), ctx->memidx, MO_TEQ);
+ tcg_gen_qemu_st_i64(fp, REG(B11_8), ctx->memidx, MO_TEUQ);
tcg_temp_free_i64(fp);
} else {
tcg_gen_qemu_st_i32(FREG(B7_4), REG(B11_8), ctx->memidx, MO_TEUL);
@@ -XXX,XX +XXX,XX @@ static void _decode_opc(DisasContext * ctx)
CHECK_FPU_ENABLED
if (ctx->tbflags & FPSCR_SZ) {
TCGv_i64 fp = tcg_temp_new_i64();
- tcg_gen_qemu_ld_i64(fp, REG(B7_4), ctx->memidx, MO_TEQ);
+ tcg_gen_qemu_ld_i64(fp, REG(B7_4), ctx->memidx, MO_TEUQ);
gen_store_fpr64(ctx, fp, XHACK(B11_8));
tcg_temp_free_i64(fp);
} else {
@@ -XXX,XX +XXX,XX @@ static void _decode_opc(DisasContext * ctx)
CHECK_FPU_ENABLED
if (ctx->tbflags & FPSCR_SZ) {
TCGv_i64 fp = tcg_temp_new_i64();
- tcg_gen_qemu_ld_i64(fp, REG(B7_4), ctx->memidx, MO_TEQ);
+ tcg_gen_qemu_ld_i64(fp, REG(B7_4), ctx->memidx, MO_TEUQ);
gen_store_fpr64(ctx, fp, XHACK(B11_8));
tcg_temp_free_i64(fp);
tcg_gen_addi_i32(REG(B7_4), REG(B7_4), 8);
@@ -XXX,XX +XXX,XX @@ static void _decode_opc(DisasContext * ctx)
TCGv_i64 fp = tcg_temp_new_i64();
gen_load_fpr64(ctx, fp, XHACK(B7_4));
tcg_gen_subi_i32(addr, REG(B11_8), 8);
- tcg_gen_qemu_st_i64(fp, addr, ctx->memidx, MO_TEQ);
+ tcg_gen_qemu_st_i64(fp, addr, ctx->memidx, MO_TEUQ);
tcg_temp_free_i64(fp);
} else {
tcg_gen_subi_i32(addr, REG(B11_8), 4);
@@ -XXX,XX +XXX,XX @@ static void _decode_opc(DisasContext * ctx)
tcg_gen_add_i32(addr, REG(B7_4), REG(0));
if (ctx->tbflags & FPSCR_SZ) {
TCGv_i64 fp = tcg_temp_new_i64();
- tcg_gen_qemu_ld_i64(fp, addr, ctx->memidx, MO_TEQ);
+ tcg_gen_qemu_ld_i64(fp, addr, ctx->memidx, MO_TEUQ);
gen_store_fpr64(ctx, fp, XHACK(B11_8));
tcg_temp_free_i64(fp);
} else {
@@ -XXX,XX +XXX,XX @@ static void _decode_opc(DisasContext * ctx)
if (ctx->tbflags & FPSCR_SZ) {
TCGv_i64 fp = tcg_temp_new_i64();
gen_load_fpr64(ctx, fp, XHACK(B7_4));
- tcg_gen_qemu_st_i64(fp, addr, ctx->memidx, MO_TEQ);
+ tcg_gen_qemu_st_i64(fp, addr, ctx->memidx, MO_TEUQ);
tcg_temp_free_i64(fp);
} else {
tcg_gen_qemu_st_i32(FREG(B7_4), addr, ctx->memidx, MO_TEUL);
diff --git a/target/sparc/translate.c b/target/sparc/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/translate.c
+++ b/target/sparc/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_ldstub_asi(DisasContext *dc, TCGv dst, TCGv addr, int insn)
static void gen_ldf_asi(DisasContext *dc, TCGv addr,
int insn, int size, int rd)
{
- DisasASI da = get_asi(dc, insn, (size == 4 ? MO_TEUL : MO_TEQ));
+ DisasASI da = get_asi(dc, insn, (size == 4 ? MO_TEUL : MO_TEUQ));
TCGv_i32 d32;
TCGv_i64 d64;

@@ -XXX,XX +XXX,XX @@ static void gen_ldf_asi(DisasContext *dc, TCGv addr,
static void gen_stf_asi(DisasContext *dc, TCGv addr,
int insn, int size, int rd)
{
- DisasASI da = get_asi(dc, insn, (size == 4 ? MO_TEUL : MO_TEQ));
+ DisasASI da = get_asi(dc, insn, (size == 4 ? MO_TEUL : MO_TEUQ));
TCGv_i32 d32;

switch (da.type) {
@@ -XXX,XX +XXX,XX @@ static void gen_stf_asi(DisasContext *dc, TCGv addr,

static void gen_ldda_asi(DisasContext *dc, TCGv addr, int insn, int rd)
{
- DisasASI da = get_asi(dc, insn, MO_TEQ);
+ DisasASI da = get_asi(dc, insn, MO_TEUQ);
TCGv_i64 hi = gen_dest_gpr(dc, rd);
TCGv_i64 lo = gen_dest_gpr(dc, rd + 1);

@@ -XXX,XX +XXX,XX @@ static void gen_ldda_asi(DisasContext *dc, TCGv addr, int insn, int rd)
static void gen_stda_asi(DisasContext *dc, TCGv hi, TCGv addr,
int insn, int rd)
{
- DisasASI da = get_asi(dc, insn, MO_TEQ);
+ DisasASI da = get_asi(dc, insn, MO_TEUQ);
TCGv lo = gen_load_gpr(dc, rd + 1);

switch (da.type) {
@@ -XXX,XX +XXX,XX @@ static void gen_stda_asi(DisasContext *dc, TCGv hi, TCGv addr,
static void gen_casx_asi(DisasContext *dc, TCGv addr, TCGv cmpv,
int insn, int rd)
{
- DisasASI da = get_asi(dc, insn, MO_TEQ);
+ DisasASI da = get_asi(dc, insn, MO_TEUQ);
TCGv oldv;

switch (da.type) {
@@ -XXX,XX +XXX,XX @@ static void gen_ldda_asi(DisasContext *dc, TCGv addr, int insn, int rd)
TCGv lo = gen_dest_gpr(dc, rd | 1);
TCGv hi = gen_dest_gpr(dc, rd);
TCGv_i64 t64 = tcg_temp_new_i64();
- DisasASI da = get_asi(dc, insn, MO_TEQ);
+ DisasASI da = get_asi(dc, insn, MO_TEUQ);

switch (da.type) {
case GET_ASI_EXCP:
@@ -XXX,XX +XXX,XX @@ static void gen_ldda_asi(DisasContext *dc, TCGv addr, int insn, int rd)
default:
{
TCGv_i32 r_asi = tcg_const_i32(da.asi);
- TCGv_i32 r_mop = tcg_const_i32(MO_Q);
+ TCGv_i32 r_mop = tcg_const_i32(MO_UQ);

save_state(dc);
gen_helper_ld_asi(t64, cpu_env, addr, r_asi, r_mop);
@@ -XXX,XX +XXX,XX @@ static void gen_ldda_asi(DisasContext *dc, TCGv addr, int insn, int rd)
static void gen_stda_asi(DisasContext *dc, TCGv hi, TCGv addr,
int insn, int rd)
{
- DisasASI da = get_asi(dc, insn, MO_TEQ);
+ DisasASI da = get_asi(dc, insn, MO_TEUQ);
TCGv lo = gen_load_gpr(dc, rd + 1);
TCGv_i64 t64 = tcg_temp_new_i64();

@@ -XXX,XX +XXX,XX @@ static void gen_stda_asi(DisasContext *dc, TCGv hi, TCGv addr,
default:
{
TCGv_i32 r_asi = tcg_const_i32(da.asi);
- TCGv_i32 r_mop = tcg_const_i32(MO_Q);
+ TCGv_i32 r_mop = tcg_const_i32(MO_UQ);

save_state(dc);
gen_helper_st_asi(cpu_env, addr, t64, r_asi, r_mop);
@@ -XXX,XX +XXX,XX @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
gen_ld_asi(dc, cpu_val, cpu_addr, insn, MO_TESL);
break;
case 0x1b: /* V9 ldxa */
- gen_ld_asi(dc, cpu_val, cpu_addr, insn, MO_TEQ);
+ gen_ld_asi(dc, cpu_val, cpu_addr, insn, MO_TEUQ);
break;
case 0x2d: /* V9 prefetch, no effect */
goto skip_move;
@@ -XXX,XX +XXX,XX @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
if (rd == 1) {
TCGv_i64 t64 = tcg_temp_new_i64();
tcg_gen_qemu_ld_i64(t64, cpu_addr,
- dc->mem_idx, MO_TEQ);
+ dc->mem_idx, MO_TEUQ);
gen_helper_ldxfsr(cpu_fsr, cpu_env, cpu_fsr, t64);
tcg_temp_free_i64(t64);
break;
@@ -XXX,XX +XXX,XX @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
gen_address_mask(dc, cpu_addr);
cpu_src1_64 = tcg_temp_new_i64();
tcg_gen_qemu_ld_i64(cpu_src1_64, cpu_addr, dc->mem_idx,
- MO_TEQ | MO_ALIGN_4);
+ MO_TEUQ | MO_ALIGN_4);
tcg_gen_addi_tl(cpu_addr, cpu_addr, 8);
cpu_src2_64 = tcg_temp_new_i64();
tcg_gen_qemu_ld_i64(cpu_src2_64, cpu_addr, dc->mem_idx,
- MO_TEQ | MO_ALIGN_4);
+ MO_TEUQ | MO_ALIGN_4);
gen_store_fpr_Q(dc, rd, cpu_src1_64, cpu_src2_64);
tcg_temp_free_i64(cpu_src1_64);
tcg_temp_free_i64(cpu_src2_64);
@@ -XXX,XX +XXX,XX @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
gen_address_mask(dc, cpu_addr);
cpu_dst_64 = gen_dest_fpr_D(dc, rd);
tcg_gen_qemu_ld_i64(cpu_dst_64, cpu_addr, dc->mem_idx,
- MO_TEQ | MO_ALIGN_4);
+ MO_TEUQ | MO_ALIGN_4);
gen_store_fpr_D(dc, rd, cpu_dst_64);
break;
default:
@@ -XXX,XX +XXX,XX @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
tcg_gen_qemu_st64(cpu_val, cpu_addr, dc->mem_idx);
break;
case 0x1e: /* V9 stxa */
- gen_st_asi(dc, cpu_val, cpu_addr, insn, MO_TEQ);
+ gen_st_asi(dc, cpu_val, cpu_addr, insn, MO_TEUQ);
break;
#endif
default:
@@ -XXX,XX +XXX,XX @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
before performing the first write. */
cpu_src1_64 = gen_load_fpr_Q0(dc, rd);
tcg_gen_qemu_st_i64(cpu_src1_64, cpu_addr,
- dc->mem_idx, MO_TEQ | MO_ALIGN_16);
+ dc->mem_idx, MO_TEUQ | MO_ALIGN_16);
tcg_gen_addi_tl(cpu_addr, cpu_addr, 8);
cpu_src2_64 = gen_load_fpr_Q1(dc, rd);
tcg_gen_qemu_st_i64(cpu_src1_64, cpu_addr,
- dc->mem_idx, MO_TEQ);
+ dc->mem_idx, MO_TEUQ);
break;
#else /* !TARGET_SPARC64 */
/* stdfq, store floating point queue */
@@ -XXX,XX +XXX,XX @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
gen_address_mask(dc, cpu_addr);
cpu_src1_64 = gen_load_fpr_D(dc, rd);
tcg_gen_qemu_st_i64(cpu_src1_64, cpu_addr, dc->mem_idx,
- MO_TEQ | MO_ALIGN_4);
+ MO_TEUQ | MO_ALIGN_4);
break;
default:
goto illegal_insn;
diff --git a/target/tricore/translate.c b/target/tricore/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/tricore/translate.c
+++ b/target/tricore/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_st_2regs_64(TCGv rh, TCGv rl, TCGv address, DisasContext *ctx)
TCGv_i64 temp = tcg_temp_new_i64();

tcg_gen_concat_i32_i64(temp, rl, rh);
- tcg_gen_qemu_st_i64(temp, address, ctx->mem_idx, MO_LEQ);
+ tcg_gen_qemu_st_i64(temp, address, ctx->mem_idx, MO_LEUQ);

tcg_temp_free_i64(temp);
}
@@ -XXX,XX +XXX,XX @@ static void gen_ld_2regs_64(TCGv rh, TCGv rl, TCGv address, DisasContext *ctx)
{
TCGv_i64 temp = tcg_temp_new_i64();

- tcg_gen_qemu_ld_i64(temp, address, ctx->mem_idx, MO_LEQ);
+ tcg_gen_qemu_ld_i64(temp, address, ctx->mem_idx, MO_LEUQ);
/* write back to two 32 bit regs */
tcg_gen_extr_i64_i32(rl, rh, temp);

diff --git a/target/xtensa/translate.c b/target/xtensa/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/translate.c
+++ b/target/xtensa/translate.c
@@ -XXX,XX +XXX,XX @@ static void translate_ldsti_d(DisasContext *dc, const OpcodeArg arg[],
} else {
addr = arg[1].in;
}
- mop = gen_load_store_alignment(dc, MO_TEQ, addr);
+ mop = gen_load_store_alignment(dc, MO_TEUQ, addr);
if (par[0]) {
tcg_gen_qemu_st_i64(arg[0].in, addr, dc->cring, mop);
} else {
@@ -XXX,XX +XXX,XX @@ static void translate_ldstx_d(DisasContext *dc, const OpcodeArg arg[],
} else {
addr = arg[1].in;
}
- mop = gen_load_store_alignment(dc, MO_TEQ, addr);
+ mop = gen_load_store_alignment(dc, MO_TEUQ, addr);
if (par[0]) {
tcg_gen_qemu_st_i64(arg[0].in, addr, dc->cring, mop);
} else {
diff --git a/tcg/tcg.c b/tcg/tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -XXX,XX +XXX,XX @@ static const char * const ldst_name[] =
[MO_LESW] = "lesw",
[MO_LEUL] = "leul",
[MO_LESL] = "lesl",
- [MO_LEQ] = "leq",
+ [MO_LEUQ] = "leq",
[MO_BEUW] = "beuw",
[MO_BESW] = "besw",
[MO_BEUL] = "beul",
[MO_BESL] = "besl",
- [MO_BEQ] = "beq",
+ [MO_BEUQ] = "beq",
};

static const char * const alignment_name[(MO_AMASK >> MO_ASHIFT) + 1] = {
diff --git a/tcg/tci.c b/tcg/tci.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tci.c
+++ b/tcg/tci.c
@@ -XXX,XX +XXX,XX @@ static uint64_t tci_qemu_ld(CPUArchState *env, target_ulong taddr,
return helper_le_ldul_mmu(env, taddr, oi, ra);
case MO_LESL:
return helper_le_ldsl_mmu(env, taddr, oi, ra);
- case MO_LEQ:
+ case MO_LEUQ:
return helper_le_ldq_mmu(env, taddr, oi, ra);
case MO_BEUW:
return helper_be_lduw_mmu(env, taddr, oi, ra);
@@ -XXX,XX +XXX,XX @@ static uint64_t tci_qemu_ld(CPUArchState *env, target_ulong taddr,
return helper_be_ldul_mmu(env, taddr, oi, ra);
case MO_BESL:
return helper_be_ldsl_mmu(env, taddr, oi, ra);
- case MO_BEQ:
+ case MO_BEUQ:
return helper_be_ldq_mmu(env, taddr, oi, ra);
default:
g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static uint64_t tci_qemu_ld(CPUArchState *env, target_ulong taddr,
case MO_LESL:
ret = (int32_t)ldl_le_p(haddr);
break;
- case MO_LEQ:
+ case MO_LEUQ:
ret = ldq_le_p(haddr);
break;
case MO_BEUW:
@@ -XXX,XX +XXX,XX @@ static uint64_t tci_qemu_ld(CPUArchState *env, target_ulong taddr,
case MO_BESL:
ret = (int32_t)ldl_be_p(haddr);
break;
- case MO_BEQ:
+ case MO_BEUQ:
ret = ldq_be_p(haddr);
break;
default:
@@ -XXX,XX +XXX,XX @@ static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val,
case MO_LEUL:
helper_le_stl_mmu(env, taddr, val, oi, ra);
break;
- case MO_LEQ:
+ case MO_LEUQ:
helper_le_stq_mmu(env, taddr, val, oi, ra);
break;
case MO_BEUW:
@@ -XXX,XX +XXX,XX @@ static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val,
case MO_BEUL:
helper_be_stl_mmu(env, taddr, val, oi, ra);
break;
- case MO_BEQ:
+ case MO_BEUQ:
helper_be_stq_mmu(env, taddr, val, oi, ra);
break;
default:
@@ -XXX,XX +XXX,XX @@ static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val,
case MO_LEUL:
stl_le_p(haddr, val);
break;
- case MO_LEQ:
+ case MO_LEUQ:
stq_le_p(haddr, val);
break;
case MO_BEUW:
@@ -XXX,XX +XXX,XX @@ static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val,
case MO_BEUL:
stl_be_p(haddr, val);
break;
- case MO_BEQ:
+ case MO_BEUQ:
stq_be_p(haddr, val);
break;
default:
diff --git a/accel/tcg/ldst_common.c.inc b/accel/tcg/ldst_common.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/ldst_common.c.inc
+++ b/accel/tcg/ldst_common.c.inc
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_ldl_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
uint64_t cpu_ldq_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra)
{
- MemOpIdx oi = make_memop_idx(MO_BEQ | MO_UNALN, mmu_idx);
+ MemOpIdx oi = make_memop_idx(MO_BEUQ | MO_UNALN, mmu_idx);
return cpu_ldq_be_mmu(env, addr, oi, ra);
}

@@ -XXX,XX +XXX,XX @@ uint32_t cpu_ldl_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
uint64_t cpu_ldq_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra)
{
- MemOpIdx oi = make_memop_idx(MO_LEQ | MO_UNALN, mmu_idx);
+ MemOpIdx oi = make_memop_idx(MO_LEUQ | MO_UNALN, mmu_idx);
return cpu_ldq_le_mmu(env, addr, oi, ra);
}

@@ -XXX,XX +XXX,XX @@ void cpu_stl_be_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
void cpu_stq_be_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
int mmu_idx, uintptr_t ra)
{
- MemOpIdx oi = make_memop_idx(MO_BEQ | MO_UNALN, mmu_idx);
+ MemOpIdx oi = make_memop_idx(MO_BEUQ | MO_UNALN, mmu_idx);
cpu_stq_be_mmu(env, addr, val, oi, ra);
}

@@ -XXX,XX +XXX,XX @@ void cpu_stl_le_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
void cpu_stq_le_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
int mmu_idx, uintptr_t ra)
{
- MemOpIdx oi = make_memop_idx(MO_LEQ | MO_UNALN, mmu_idx);
+ MemOpIdx oi = make_memop_idx(MO_LEUQ | MO_UNALN, mmu_idx);
cpu_stq_le_mmu(env, addr, val, oi, ra);
}

diff --git a/target/mips/tcg/micromips_translate.c.inc b/target/mips/tcg/micromips_translate.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/tcg/micromips_translate.c.inc
+++ b/target/mips/tcg/micromips_translate.c.inc
@@ -XXX,XX +XXX,XX @@ static void gen_ldst_pair(DisasContext *ctx, uint32_t opc, int rd,
gen_reserved_instruction(ctx);
return;
}
- tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TEUQ);
gen_store_gpr(t1, rd);
tcg_gen_movi_tl(t1, 8);
gen_op_addr_add(ctx, t0, t0, t1);
- tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TEUQ);
gen_store_gpr(t1, rd + 1);
break;
case SDP:
gen_load_gpr(t1, rd);
- tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUQ);
tcg_gen_movi_tl(t1, 8);
gen_op_addr_add(ctx, t0, t0, t1);
gen_load_gpr(t1, rd + 1);
- tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUQ);
break;
#endif
}
@@ -XXX,XX +XXX,XX @@ static void decode_micromips32_opc(CPUMIPSState *env, DisasContext *ctx)
case SCD:
check_insn(ctx, ISA_MIPS3);
check_mips_64(ctx);
- gen_st_cond(ctx, rt, rs, offset, MO_TEQ, false);
+ gen_st_cond(ctx, rt, rs, offset, MO_TEUQ, false);
break;
#endif
case LD_EVA:
diff --git a/target/ppc/translate/fixedpoint-impl.c.inc b/target/ppc/translate/fixedpoint-impl.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/translate/fixedpoint-impl.c.inc
+++ b/target/ppc/translate/fixedpoint-impl.c.inc
@@ -XXX,XX +XXX,XX @@ static bool do_ldst_quad(DisasContext *ctx, arg_D *a, bool store, bool prefixed)
ctx->base.is_jmp = DISAS_NORETURN;
}
} else {
- mop = DEF_MEMOP(MO_Q);
+ mop = DEF_MEMOP(MO_UQ);
if (store) {
tcg_gen_qemu_st_i64(low_addr_gpr, ea, ctx->mem_idx, mop);
} else {
@@ -XXX,XX +XXX,XX @@ TRANS64(LWAUX, do_ldst_X, true, false, MO_SL)
TRANS64(PLWA, do_ldst_PLS_D, false, false, MO_SL)

/* Load Doubleword */
-TRANS64(LD, do_ldst_D, false, false, MO_Q)
-TRANS64(LDX, do_ldst_X, false, false, MO_Q)
-TRANS64(LDU, do_ldst_D, true, false, MO_Q)
-TRANS64(LDUX, do_ldst_X, true, false, MO_Q)
-TRANS64(PLD, do_ldst_PLS_D, false, false, MO_Q)
+TRANS64(LD, do_ldst_D, false, false, MO_UQ)
+TRANS64(LDX, do_ldst_X, false, false, MO_UQ)
+TRANS64(LDU, do_ldst_D, true, false, MO_UQ)
+TRANS64(LDUX, do_ldst_X, true, false, MO_UQ)
+TRANS64(PLD, do_ldst_PLS_D, false, false, MO_UQ)

/* Load Quadword */
TRANS64(LQ, do_ldst_quad, false, false);
@@ -XXX,XX +XXX,XX @@ TRANS(STWUX, do_ldst_X, true, true, MO_UL)
TRANS(PSTW, do_ldst_PLS_D, false, true, MO_UL)

/* Store Doubleword */
-TRANS64(STD, do_ldst_D, false, true, MO_Q)
-TRANS64(STDX, do_ldst_X, false, true, MO_Q)
-TRANS64(STDU, do_ldst_D, true, true, MO_Q)
-TRANS64(STDUX, do_ldst_X, true, true, MO_Q)
-TRANS64(PSTD, do_ldst_PLS_D, false, true, MO_Q)
+TRANS64(STD, do_ldst_D, false, true, MO_UQ)
+TRANS64(STDX, do_ldst_X, false, true, MO_UQ)
+TRANS64(STDU, do_ldst_D, true, true, MO_UQ)
+TRANS64(STDUX, do_ldst_X, true, true, MO_UQ)
+TRANS64(PSTD, do_ldst_PLS_D, false, true, MO_UQ)

/* Store Quadword */
TRANS64(STQ, do_ldst_quad, true, false);
diff --git a/target/ppc/translate/fp-impl.c.inc b/target/ppc/translate/fp-impl.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/translate/fp-impl.c.inc
+++ b/target/ppc/translate/fp-impl.c.inc
@@ -XXX,XX +XXX,XX @@ static void gen_lfdepx(DisasContext *ctx)
EA = tcg_temp_new();
t0 = tcg_temp_new_i64();
gen_addr_reg_index(ctx, EA);
- tcg_gen_qemu_ld_i64(t0, EA, PPC_TLB_EPID_LOAD, DEF_MEMOP(MO_Q));
+ tcg_gen_qemu_ld_i64(t0, EA, PPC_TLB_EPID_LOAD, DEF_MEMOP(MO_UQ));
set_fpr(rD(ctx->opcode), t0);
tcg_temp_free(EA);
tcg_temp_free_i64(t0);
@@ -XXX,XX +XXX,XX @@ static void gen_stfdepx(DisasContext *ctx)
t0 = tcg_temp_new_i64();
gen_addr_reg_index(ctx, EA);
get_fpr(t0, rD(ctx->opcode));
- tcg_gen_qemu_st_i64(t0, EA, PPC_TLB_EPID_STORE, DEF_MEMOP(MO_Q));
+ tcg_gen_qemu_st_i64(t0, EA, PPC_TLB_EPID_STORE, DEF_MEMOP(MO_UQ));
tcg_temp_free(EA);
tcg_temp_free_i64(t0);
}
diff --git a/target/ppc/translate/vsx-impl.c.inc b/target/ppc/translate/vsx-impl.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/translate/vsx-impl.c.inc
+++ b/target/ppc/translate/vsx-impl.c.inc
@@ -XXX,XX +XXX,XX @@ static void gen_lxvw4x(DisasContext *ctx)
TCGv_i64 t0 = tcg_temp_new_i64();
TCGv_i64 t1 = tcg_temp_new_i64();

- tcg_gen_qemu_ld_i64(t0, EA, ctx->mem_idx, MO_LEQ);
+ tcg_gen_qemu_ld_i64(t0, EA, ctx->mem_idx, MO_LEUQ);
tcg_gen_shri_i64(t1, t0, 32);
tcg_gen_deposit_i64(xth, t1, t0, 32, 32);
tcg_gen_addi_tl(EA, EA, 8);
- tcg_gen_qemu_ld_i64(t0, EA, ctx->mem_idx, MO_LEQ);
+ tcg_gen_qemu_ld_i64(t0, EA, ctx->mem_idx, MO_LEUQ);
tcg_gen_shri_i64(t1, t0, 32);
tcg_gen_deposit_i64(xtl, t1, t0, 32, 32);
tcg_temp_free_i64(t0);
tcg_temp_free_i64(t1);
} else {
- tcg_gen_qemu_ld_i64(xth, EA, ctx->mem_idx, MO_BEQ);
+ tcg_gen_qemu_ld_i64(xth, EA, ctx->mem_idx, MO_BEUQ);
tcg_gen_addi_tl(EA, EA, 8);
- tcg_gen_qemu_ld_i64(xtl, EA, ctx->mem_idx, MO_BEQ);
+ tcg_gen_qemu_ld_i64(xtl, EA, ctx->mem_idx, MO_BEUQ);
}
set_cpu_vsr(xT(ctx->opcode), xth, true);
set_cpu_vsr(xT(ctx->opcode), xtl, false);
@@ -XXX,XX +XXX,XX @@ static void gen_lxvdsx(DisasContext *ctx)
gen_addr_reg_index(ctx, EA);

data = tcg_temp_new_i64();
- tcg_gen_qemu_ld_i64(data, EA, ctx->mem_idx, DEF_MEMOP(MO_Q));
- tcg_gen_gvec_dup_i64(MO_Q, vsr_full_offset(xT(ctx->opcode)), 16, 16, data);
+ tcg_gen_qemu_ld_i64(data, EA, ctx->mem_idx, DEF_MEMOP(MO_UQ));
+ tcg_gen_gvec_dup_i64(MO_UQ, vsr_full_offset(xT(ctx->opcode)), 16, 16, data);

tcg_temp_free(EA);
tcg_temp_free_i64(data);
@@ -XXX,XX +XXX,XX @@ static void gen_lxvh8x(DisasContext *ctx)

EA = tcg_temp_new();
gen_addr_reg_index(ctx, EA);
- tcg_gen_qemu_ld_i64(xth, EA, ctx->mem_idx, MO_BEQ);
+ tcg_gen_qemu_ld_i64(xth, EA, ctx->mem_idx, MO_BEUQ);
tcg_gen_addi_tl(EA, EA, 8);
- tcg_gen_qemu_ld_i64(xtl, EA, ctx->mem_idx, MO_BEQ);
+ tcg_gen_qemu_ld_i64(xtl, EA, ctx->mem_idx, MO_BEUQ);
if (ctx->le_mode) {
gen_bswap16x8(xth, xtl, xth, xtl);
}
@@ -XXX,XX +XXX,XX @@ static void gen_lxvb16x(DisasContext *ctx)
gen_set_access_type(ctx, ACCESS_INT);
EA = tcg_temp_new();
gen_addr_reg_index(ctx, EA);
- tcg_gen_qemu_ld_i64(xth, EA, ctx->mem_idx, MO_BEQ);
+ tcg_gen_qemu_ld_i64(xth, EA, ctx->mem_idx, MO_BEUQ);
tcg_gen_addi_tl(EA, EA, 8);
- tcg_gen_qemu_ld_i64(xtl, EA, ctx->mem_idx, MO_BEQ);
+ tcg_gen_qemu_ld_i64(xtl, EA, ctx->mem_idx, MO_BEUQ);
set_cpu_vsr(xT(ctx->opcode), xth, true);
set_cpu_vsr(xT(ctx->opcode), xtl, false);
tcg_temp_free(EA);
@@ -XXX,XX +XXX,XX @@ static void gen_stxvw4x(DisasContext *ctx)

tcg_gen_shri_i64(t0, xsh, 32);
tcg_gen_deposit_i64(t1, t0, xsh, 32, 32);
- tcg_gen_qemu_st_i64(t1, EA, ctx->mem_idx, MO_LEQ);
+ tcg_gen_qemu_st_i64(t1, EA, ctx->mem_idx, MO_LEUQ);
tcg_gen_addi_tl(EA, EA, 8);
tcg_gen_shri_i64(t0, xsl, 32);
tcg_gen_deposit_i64(t1, t0, xsl, 32, 32);
- tcg_gen_qemu_st_i64(t1, EA, ctx->mem_idx, MO_LEQ);
+ tcg_gen_qemu_st_i64(t1, EA, ctx->mem_idx, MO_LEUQ);
tcg_temp_free_i64(t0);
tcg_temp_free_i64(t1);
} else {
- tcg_gen_qemu_st_i64(xsh, EA, ctx->mem_idx, MO_BEQ);
+ tcg_gen_qemu_st_i64(xsh, EA, ctx->mem_idx, MO_BEUQ);
tcg_gen_addi_tl(EA, EA, 8);
- tcg_gen_qemu_st_i64(xsl, EA, ctx->mem_idx, MO_BEQ);
+ tcg_gen_qemu_st_i64(xsl, EA, ctx->mem_idx, MO_BEUQ);
}
tcg_temp_free(EA);
tcg_temp_free_i64(xsh);
@@ -XXX,XX +XXX,XX @@ static void gen_stxvh8x(DisasContext *ctx)
TCGv_i64 outl = tcg_temp_new_i64();

gen_bswap16x8(outh, outl, xsh, xsl);
- tcg_gen_qemu_st_i64(outh, EA, ctx->mem_idx, MO_BEQ);
+ tcg_gen_qemu_st_i64(outh, EA, ctx->mem_idx, MO_BEUQ);
tcg_gen_addi_tl(EA, EA, 8);
- tcg_gen_qemu_st_i64(outl, EA, ctx->mem_idx, MO_BEQ);
+ tcg_gen_qemu_st_i64(outl, EA, ctx->mem_idx, MO_BEUQ);
tcg_temp_free_i64(outh);
tcg_temp_free_i64(outl);
} else {
- tcg_gen_qemu_st_i64(xsh, EA, ctx->mem_idx, MO_BEQ);
+ tcg_gen_qemu_st_i64(xsh, EA, ctx->mem_idx, MO_BEUQ);
tcg_gen_addi_tl(EA, EA, 8);
- tcg_gen_qemu_st_i64(xsl, EA, ctx->mem_idx, MO_BEQ);
+ tcg_gen_qemu_st_i64(xsl, EA, ctx->mem_idx, MO_BEUQ);
}
tcg_temp_free(EA);
tcg_temp_free_i64(xsh);
@@ -XXX,XX +XXX,XX @@ static void gen_stxvb16x(DisasContext *ctx)
gen_set_access_type(ctx, ACCESS_INT);
EA = tcg_temp_new();
gen_addr_reg_index(ctx, EA);
- tcg_gen_qemu_st_i64(xsh, EA, ctx->mem_idx, MO_BEQ);
+ tcg_gen_qemu_st_i64(xsh, EA, ctx->mem_idx, MO_BEUQ);
tcg_gen_addi_tl(EA, EA, 8);
- tcg_gen_qemu_st_i64(xsl, EA, ctx->mem_idx, MO_BEQ);
+ tcg_gen_qemu_st_i64(xsl, EA, ctx->mem_idx, MO_BEUQ);
tcg_temp_free(EA);
tcg_temp_free_i64(xsh);
tcg_temp_free_i64(xsl);
@@ -XXX,XX +XXX,XX @@ static bool do_lstxv(DisasContext *ctx, int ra, TCGv displ,

xt = tcg_temp_new_i64();

- mop = DEF_MEMOP(MO_Q);
+ mop = DEF_MEMOP(MO_UQ);

gen_set_access_type(ctx, ACCESS_INT);
ea = do_ea_calc(ctx, ra, displ);
diff --git a/target/riscv/insn_trans/trans_rva.c.inc b/target/riscv/insn_trans/trans_rva.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rva.c.inc
+++ b/target/riscv/insn_trans/trans_rva.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_amomaxu_w(DisasContext *ctx, arg_amomaxu_w *a)
static bool trans_lr_d(DisasContext *ctx, arg_lr_d *a)
{
REQUIRE_64BIT(ctx);
- return gen_lr(ctx, a, MO_ALIGN | MO_TEQ);
+ return gen_lr(ctx, a, MO_ALIGN | MO_TEUQ);
}

static bool trans_sc_d(DisasContext *ctx, arg_sc_d *a)
{
REQUIRE_64BIT(ctx);
- return gen_sc(ctx, a, (MO_ALIGN | MO_TEQ));
+ return gen_sc(ctx, a, (MO_ALIGN | MO_TEUQ));
}

static bool trans_amoswap_d(DisasContext *ctx, arg_amoswap_d *a)
{
REQUIRE_64BIT(ctx);
- return gen_amo(ctx, a, &tcg_gen_atomic_xchg_tl, (MO_ALIGN | MO_TEQ));
+ return gen_amo(ctx, a, &tcg_gen_atomic_xchg_tl, (MO_ALIGN | MO_TEUQ));
}

static bool trans_amoadd_d(DisasContext *ctx, arg_amoadd_d *a)
{
REQUIRE_64BIT(ctx);
- return gen_amo(ctx, a, &tcg_gen_atomic_fetch_add_tl, (MO_ALIGN | MO_TEQ));
+ return gen_amo(ctx, a, &tcg_gen_atomic_fetch_add_tl, (MO_ALIGN | MO_TEUQ));
}

static bool trans_amoxor_d(DisasContext *ctx, arg_amoxor_d *a)
{
REQUIRE_64BIT(ctx);
- return gen_amo(ctx, a, &tcg_gen_atomic_fetch_xor_tl, (MO_ALIGN | MO_TEQ));
+ return gen_amo(ctx, a, &tcg_gen_atomic_fetch_xor_tl, (MO_ALIGN | MO_TEUQ));
}

static bool trans_amoand_d(DisasContext *ctx, arg_amoand_d *a)
{
REQUIRE_64BIT(ctx);
- return gen_amo(ctx, a, &tcg_gen_atomic_fetch_and_tl, (MO_ALIGN | MO_TEQ));
+ return gen_amo(ctx, a, &tcg_gen_atomic_fetch_and_tl, (MO_ALIGN | MO_TEUQ));
}

static bool trans_amoor_d(DisasContext *ctx, arg_amoor_d *a)
{
REQUIRE_64BIT(ctx);
- return gen_amo(ctx, a, &tcg_gen_atomic_fetch_or_tl, (MO_ALIGN | MO_TEQ));
+ return gen_amo(ctx, a, &tcg_gen_atomic_fetch_or_tl, (MO_ALIGN | MO_TEUQ));
}

static bool trans_amomin_d(DisasContext *ctx, arg_amomin_d *a)
{
REQUIRE_64BIT(ctx);
- return gen_amo(ctx, a, &tcg_gen_atomic_fetch_smin_tl, (MO_ALIGN | MO_TEQ));
+ return gen_amo(ctx, a, &tcg_gen_atomic_fetch_smin_tl, (MO_ALIGN | MO_TEUQ));
}

static bool trans_amomax_d(DisasContext *ctx, arg_amomax_d *a)
{
REQUIRE_64BIT(ctx);
- return gen_amo(ctx, a, &tcg_gen_atomic_fetch_smax_tl, (MO_ALIGN | MO_TEQ));
+ return gen_amo(ctx, a, &tcg_gen_atomic_fetch_smax_tl, (MO_ALIGN | MO_TEUQ));
}

static bool trans_amominu_d(DisasContext *ctx, arg_amominu_d *a)
{
REQUIRE_64BIT(ctx);
- return gen_amo(ctx, a, &tcg_gen_atomic_fetch_umin_tl, (MO_ALIGN | MO_TEQ));
+ return gen_amo(ctx, a, &tcg_gen_atomic_fetch_umin_tl, (MO_ALIGN | MO_TEUQ));
}

static bool trans_amomaxu_d(DisasContext *ctx, arg_amomaxu_d *a)
{
REQUIRE_64BIT(ctx);
- return gen_amo(ctx, a, &tcg_gen_atomic_fetch_umax_tl, (MO_ALIGN | MO_TEQ));
+ return gen_amo(ctx, a, &tcg_gen_atomic_fetch_umax_tl, (MO_ALIGN | MO_TEUQ));
}
diff --git a/target/riscv/insn_trans/trans_rvd.c.inc b/target/riscv/insn_trans/trans_rvd.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvd.c.inc
+++ b/target/riscv/insn_trans/trans_rvd.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_fld(DisasContext *ctx, arg_fld *a)
}
addr = gen_pm_adjust_address(ctx, addr);

- tcg_gen_qemu_ld_i64(cpu_fpr[a->rd], addr, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_ld_i64(cpu_fpr[a->rd], addr, ctx->mem_idx, MO_TEUQ);

mark_fs_dirty(ctx);
return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_fsd(DisasContext *ctx, arg_fsd *a)
}
addr = gen_pm_adjust_address(ctx, addr);

- tcg_gen_qemu_st_i64(cpu_fpr[a->rs2], addr, ctx->mem_idx, MO_TEQ);
+ tcg_gen_qemu_st_i64(cpu_fpr[a->rs2], addr, ctx->mem_idx, MO_TEUQ);

return true;
}
diff --git a/target/riscv/insn_trans/trans_rvh.c.inc b/target/riscv/insn_trans/trans_rvh.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvh.c.inc
+++ b/target/riscv/insn_trans/trans_rvh.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_hlv_d(DisasContext *ctx, arg_hlv_d *a)
{
REQUIRE_64BIT(ctx);
REQUIRE_EXT(ctx, RVH);
- return do_hlv(ctx, a, MO_TEQ);
+ return do_hlv(ctx, a, MO_TEUQ);
}

static bool trans_hsv_d(DisasContext *ctx, arg_hsv_d *a)
{
REQUIRE_64BIT(ctx);
REQUIRE_EXT(ctx, RVH);
- return do_hsv(ctx, a, MO_TEQ);
+ return do_hsv(ctx, a, MO_TEUQ);
}

#ifndef CONFIG_USER_ONLY
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvi.c.inc
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_lwu(DisasContext *ctx, arg_lwu *a)
static bool trans_ld(DisasContext *ctx, arg_ld *a)
{
REQUIRE_64BIT(ctx);
- return gen_load(ctx, a, MO_TEQ);
+ return gen_load(ctx, a, MO_TEUQ);
}

static bool trans_sd(DisasContext *ctx, arg_sd *a)
{
REQUIRE_64BIT(ctx);
- return gen_store(ctx, a, MO_TEQ);
+ return gen_store(ctx, a, MO_TEUQ);
}

static bool trans_addi(DisasContext *ctx, arg_addi *a)
diff --git a/target/s390x/tcg/translate_vx.c.inc b/target/s390x/tcg/translate_vx.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/tcg/translate_vx.c.inc
+++ b/target/s390x/tcg/translate_vx.c.inc
@@ -XXX,XX +XXX,XX @@ static DisasJumpType op_vl(DisasContext *s, DisasOps *o)
TCGv_i64 t0 = tcg_temp_new_i64();
TCGv_i64 t1 = tcg_temp_new_i64();

- tcg_gen_qemu_ld_i64(t0, o->addr1, get_mem_index(s), MO_TEQ);
+ tcg_gen_qemu_ld_i64(t0, o->addr1, get_mem_index(s), MO_TEUQ);
gen_addi_and_wrap_i64(s, o->addr1, o->addr1, 8);
- tcg_gen_qemu_ld_i64(t1, o->addr1, get_mem_index(s), MO_TEQ);
+ tcg_gen_qemu_ld_i64(t1, o->addr1, get_mem_index(s), MO_TEUQ);
write_vec_element_i64(t0, get_field(s, v1), 0, ES_64);
write_vec_element_i64(t1, get_field(s, v1), 1, ES_64);
tcg_temp_free(t0);
@@ -XXX,XX +XXX,XX @@ static DisasJumpType op_vlm(DisasContext *s, DisasOps *o)
t0 = tcg_temp_new_i64();
t1 = tcg_temp_new_i64();
gen_addi_and_wrap_i64(s, t0, o->addr1, (v3 - v1) * 16 + 8);
- tcg_gen_qemu_ld_i64(t0, t0, get_mem_index(s), MO_TEQ);
+ tcg_gen_qemu_ld_i64(t0, t0, get_mem_index(s), MO_TEUQ);

for (;; v1++) {
- tcg_gen_qemu_ld_i64(t1, o->addr1, get_mem_index(s), MO_TEQ);
+ tcg_gen_qemu_ld_i64(t1, o->addr1, get_mem_index(s), MO_TEUQ);
write_vec_element_i64(t1, v1, 0, ES_64);
if (v1 == v3) {
break;
}
gen_addi_and_wrap_i64(s, o->addr1, o->addr1, 8);
- tcg_gen_qemu_ld_i64(t1, o->addr1, get_mem_index(s), MO_TEQ);
+ tcg_gen_qemu_ld_i64(t1, o->addr1, get_mem_index(s), MO_TEUQ);
write_vec_element_i64(t1, v1, 1, ES_64);
gen_addi_and_wrap_i64(s, o->addr1, o->addr1, 8);
}
@@ -XXX,XX +XXX,XX @@ static DisasJumpType op_vst(DisasContext *s, DisasOps *o)
gen_helper_probe_write_access(cpu_env, o->addr1, tmp);

read_vec_element_i64(tmp, get_field(s, v1), 0, ES_64);
- tcg_gen_qemu_st_i64(tmp, o->addr1, get_mem_index(s), MO_TEQ);
+ tcg_gen_qemu_st_i64(tmp, o->addr1, get_mem_index(s), MO_TEUQ);
gen_addi_and_wrap_i64(s, o->addr1, o->addr1, 8);
read_vec_element_i64(tmp, get_field(s, v1), 1, ES_64);
- tcg_gen_qemu_st_i64(tmp, o->addr1, get_mem_index(s), MO_TEQ);
+ tcg_gen_qemu_st_i64(tmp, o->addr1, get_mem_index(s), MO_TEUQ);
tcg_temp_free_i64(tmp);
return DISAS_NEXT;
}
@@ -XXX,XX +XXX,XX @@ static DisasJumpType op_vstm(DisasContext *s, DisasOps *o)

for (;; v1++) {
read_vec_element_i64(tmp, v1, 0, ES_64);
- tcg_gen_qemu_st_i64(tmp, o->addr1, get_mem_index(s), MO_TEQ);
+ tcg_gen_qemu_st_i64(tmp, o->addr1, get_mem_index(s), MO_TEUQ);
gen_addi_and_wrap_i64(s, o->addr1, o->addr1, 8);
read_vec_element_i64(tmp, v1, 1, ES_64);
- tcg_gen_qemu_st_i64(tmp, o->addr1, get_mem_index(s), MO_TEQ);
+ tcg_gen_qemu_st_i64(tmp, o->addr1, get_mem_index(s), MO_TEUQ);
if (v1 == v3) {
break;
}
diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/tcg/aarch64/tcg-target.c.inc
+++ b/tcg/aarch64/tcg-target.c.inc
@@ -XXX,XX +XXX,XX @@ static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp memop, TCGType ext,
case MO_SL:
tcg_out_ldst_r(s, I3312_LDRSWX, data_r, addr_r, otype, off_r);
break;
- case MO_Q:
+ case MO_UQ:
tcg_out_ldst_r(s, I3312_LDRX, data_r, addr_r, otype, off_r);
break;
default:
diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/tcg/arm/tcg-target.c.inc
+++ b/tcg/arm/tcg-target.c.inc
@@ -XXX,XX +XXX,XX @@ static void * const qemu_ld_helpers[MO_SSIZE + 1] = {
#ifdef HOST_WORDS_BIGENDIAN
[MO_UW] = helper_be_lduw_mmu,
[MO_UL] = helper_be_ldul_mmu,
- [MO_Q] = helper_be_ldq_mmu,
+ [MO_UQ] = helper_be_ldq_mmu,
[MO_SW] = helper_be_ldsw_mmu,
[MO_SL] = helper_be_ldul_mmu,
#else
[MO_UW] = helper_le_lduw_mmu,
[MO_UL] = helper_le_ldul_mmu,
- [MO_Q] = helper_le_ldq_mmu,
+ [MO_UQ] = helper_le_ldq_mmu,
[MO_SW] = helper_le_ldsw_mmu,
[MO_SL] = helper_le_ldul_mmu,
#endif
@@ -XXX,XX +XXX,XX @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
default:
tcg_out_mov_reg(s, COND_AL, datalo, TCG_REG_R0);
break;
- case MO_Q:
+ case MO_UQ:
if (datalo != TCG_REG_R1) {
tcg_out_mov_reg(s, COND_AL, datalo, TCG_REG_R0);
tcg_out_mov_reg(s, COND_AL, datahi, TCG_REG_R1);
@@ -XXX,XX +XXX,XX @@ static void tcg_out_qemu_ld_index(TCGContext *s, MemOp opc,
case MO_UL:
tcg_out_ld32_r(s, COND_AL, datalo, addrlo, addend);
break;
- case MO_Q:
+ case MO_UQ:
/* Avoid ldrd for user-only emulation, to handle unaligned. */
if (USING_SOFTMMU && use_armv6_instructions
&& (datalo & 1) == 0 && datahi == datalo + 1) {
@@ -XXX,XX +XXX,XX @@ static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg datalo,
case MO_UL:
tcg_out_ld32_12(s, COND_AL, datalo, addrlo, 0);
break;
- case MO_Q:
+ case MO_UQ:
/* Avoid ldrd for user-only emulation, to handle unaligned. */
if (USING_SOFTMMU && use_armv6_instructions
&& (datalo & 1) == 0 && datahi == datalo + 1) {
diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/tcg/i386/tcg-target.c.inc
+++ b/tcg/i386/tcg-target.c.inc
@@ -XXX,XX +XXX,XX @@ static void * const qemu_ld_helpers[(MO_SIZE | MO_BSWAP) + 1] = {
[MO_UB] = helper_ret_ldub_mmu,
[MO_LEUW] = helper_le_lduw_mmu,
[MO_LEUL] = helper_le_ldul_mmu,
- [MO_LEQ] = helper_le_ldq_mmu,
+ [MO_LEUQ] = helper_le_ldq_mmu,
[MO_BEUW] = helper_be_lduw_mmu,
[MO_BEUL] = helper_be_ldul_mmu,
- [MO_BEQ] = helper_be_ldq_mmu,
+ [MO_BEUQ] = helper_be_ldq_mmu,
};

/* helper signature: helper_ret_st_mmu(CPUState *env, target_ulong addr,
@@ -XXX,XX +XXX,XX @@ static void * const qemu_st_helpers[(MO_SIZE | MO_BSWAP) + 1] = {
[MO_UB] = helper_ret_stb_mmu,
[MO_LEUW] = helper_le_stw_mmu,
[MO_LEUL] = helper_le_stl_mmu,
- [MO_LEQ] = helper_le_stq_mmu,
+ [MO_LEUQ] = helper_le_stq_mmu,
[MO_BEUW] = helper_be_stw_mmu,
[MO_BEUL] = helper_be_stl_mmu,
- [MO_BEQ] = helper_be_stq_mmu,
+ [MO_BEUQ] = helper_be_stq_mmu,
};

/* Perform the TLB load and compare.
@@ -XXX,XX +XXX,XX @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
case MO_UL:
tcg_out_mov(s, TCG_TYPE_I32, data_reg, TCG_REG_EAX);
break;
- case MO_Q:
+ case MO_UQ:
if (TCG_TARGET_REG_BITS == 64) {
tcg_out_mov(s, TCG_TYPE_I64, data_reg, TCG_REG_RAX);
} else if (data_reg == TCG_REG_EDX) {
@@ -XXX,XX +XXX,XX @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
}
break;
#endif
- case MO_Q:
+ case MO_UQ:
if (TCG_TARGET_REG_BITS == 64) {
tcg_out_modrm_sib_offset(s, movop + P_REXW + seg, datalo,
base, index, 0, ofs);
diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/tcg/mips/tcg-target.c.inc
+++ b/tcg/mips/tcg-target.c.inc
@@ -XXX,XX +XXX,XX @@ static void * const qemu_ld_helpers[(MO_SSIZE | MO_BSWAP) + 1] = {
[MO_LEUW] = helper_le_lduw_mmu,
[MO_LESW] = helper_le_ldsw_mmu,
[MO_LEUL] = helper_le_ldul_mmu,
- [MO_LEQ] = helper_le_ldq_mmu,
+ [MO_LEUQ] = helper_le_ldq_mmu,
[MO_BEUW] = helper_be_lduw_mmu,
[MO_BESW] = helper_be_ldsw_mmu,
[MO_BEUL] = helper_be_ldul_mmu,
- [MO_BEQ] = helper_be_ldq_mmu,
+ [MO_BEUQ] = helper_be_ldq_mmu,
#if TCG_TARGET_REG_BITS == 64
[MO_LESL] = helper_le_ldsl_mmu,
[MO_BESL] = helper_be_ldsl_mmu,
@@ -XXX,XX +XXX,XX @@ static void * const qemu_st_helpers[(MO_SIZE | MO_BSWAP) + 1] = {
[MO_UB] = helper_ret_stb_mmu,
[MO_LEUW] = helper_le_stw_mmu,
[MO_LEUL] = helper_le_stl_mmu,
- [MO_LEQ] = helper_le_stq_mmu,
+ [MO_LEUQ] = helper_le_stq_mmu,
[MO_BEUW] = helper_be_stw_mmu,
[MO_BEUL] = helper_be_stl_mmu,
- [MO_BEQ] = helper_be_stq_mmu,
+ [MO_BEUQ] = helper_be_stq_mmu,
};

/* Helper routines for marshalling helper function arguments into
@@ -XXX,XX +XXX,XX @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
case MO_SL:
tcg_out_opc_imm(s, OPC_LW, lo, base, 0);
break;
- case MO_Q | MO_BSWAP:
+ case MO_UQ | MO_BSWAP:
if (TCG_TARGET_REG_BITS == 64) {
if (use_mips32r2_instructions) {
tcg_out_opc_imm(s, OPC_LD, lo, base, 0);
@@ -XXX,XX +XXX,XX @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
tcg_out_mov(s, TCG_TYPE_I32, MIPS_BE ? hi : lo, TCG_TMP3);
}
break;
- case MO_Q:
+ case MO_UQ:
/* Prefer to load from offset 0 first, but allow for overlap. */
if (TCG_TARGET_REG_BITS == 64) {
tcg_out_opc_imm(s, OPC_LD, lo, base, 0);
diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
2283
index XXXXXXX..XXXXXXX 100644
2284
--- a/tcg/ppc/tcg-target.c.inc
2285
+++ b/tcg/ppc/tcg-target.c.inc
2286
@@ -XXX,XX +XXX,XX @@ static const uint32_t qemu_ldx_opc[(MO_SSIZE + MO_BSWAP) + 1] = {
2287
[MO_UB] = LBZX,
2288
[MO_UW] = LHZX,
2289
[MO_UL] = LWZX,
2290
- [MO_Q] = LDX,
2291
+ [MO_UQ] = LDX,
2292
[MO_SW] = LHAX,
2293
[MO_SL] = LWAX,
2294
[MO_BSWAP | MO_UB] = LBZX,
2295
[MO_BSWAP | MO_UW] = LHBRX,
2296
[MO_BSWAP | MO_UL] = LWBRX,
2297
- [MO_BSWAP | MO_Q] = LDBRX,
2298
+ [MO_BSWAP | MO_UQ] = LDBRX,
2299
};
2300
2301
static const uint32_t qemu_stx_opc[(MO_SIZE + MO_BSWAP) + 1] = {
2302
[MO_UB] = STBX,
2303
[MO_UW] = STHX,
2304
[MO_UL] = STWX,
2305
- [MO_Q] = STDX,
2306
+ [MO_UQ] = STDX,
2307
[MO_BSWAP | MO_UB] = STBX,
2308
[MO_BSWAP | MO_UW] = STHBRX,
2309
[MO_BSWAP | MO_UL] = STWBRX,
2310
- [MO_BSWAP | MO_Q] = STDBRX,
2311
+ [MO_BSWAP | MO_UQ] = STDBRX,
2312
};
2313
2314
static const uint32_t qemu_exts_opc[4] = {
2315
@@ -XXX,XX +XXX,XX @@ static void * const qemu_ld_helpers[(MO_SIZE | MO_BSWAP) + 1] = {
2316
[MO_UB] = helper_ret_ldub_mmu,
2317
[MO_LEUW] = helper_le_lduw_mmu,
2318
[MO_LEUL] = helper_le_ldul_mmu,
2319
- [MO_LEQ] = helper_le_ldq_mmu,
2320
+ [MO_LEUQ] = helper_le_ldq_mmu,
2321
[MO_BEUW] = helper_be_lduw_mmu,
2322
[MO_BEUL] = helper_be_ldul_mmu,
2323
- [MO_BEQ] = helper_be_ldq_mmu,
2324
+ [MO_BEUQ] = helper_be_ldq_mmu,
2325
};
2326
2327
/* helper signature: helper_st_mmu(CPUState *env, target_ulong addr,
2328
@@ -XXX,XX +XXX,XX @@ static void * const qemu_st_helpers[(MO_SIZE | MO_BSWAP) + 1] = {
2329
[MO_UB] = helper_ret_stb_mmu,
2330
[MO_LEUW] = helper_le_stw_mmu,
2331
[MO_LEUL] = helper_le_stl_mmu,
2332
- [MO_LEQ] = helper_le_stq_mmu,
2333
+ [MO_LEUQ] = helper_le_stq_mmu,
2334
[MO_BEUW] = helper_be_stw_mmu,
2335
[MO_BEUL] = helper_be_stl_mmu,
2336
- [MO_BEQ] = helper_be_stq_mmu,
2337
+ [MO_BEUQ] = helper_be_stq_mmu,
2338
};
2339
2340
/* We expect to use a 16-bit negative offset from ENV. */
2341
diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc
2342
index XXXXXXX..XXXXXXX 100644
2343
--- a/tcg/riscv/tcg-target.c.inc
2344
+++ b/tcg/riscv/tcg-target.c.inc
2345
@@ -XXX,XX +XXX,XX @@ static void * const qemu_ld_helpers[MO_SSIZE + 1] = {
2346
#if TCG_TARGET_REG_BITS == 64
2347
[MO_SL] = helper_be_ldsl_mmu,
2348
#endif
2349
- [MO_Q] = helper_be_ldq_mmu,
2350
+ [MO_UQ] = helper_be_ldq_mmu,
2351
#else
2352
[MO_UW] = helper_le_lduw_mmu,
2353
[MO_SW] = helper_le_ldsw_mmu,
2354
@@ -XXX,XX +XXX,XX @@ static void * const qemu_ld_helpers[MO_SSIZE + 1] = {
2355
#if TCG_TARGET_REG_BITS == 64
2356
[MO_SL] = helper_le_ldsl_mmu,
2357
#endif
2358
- [MO_Q] = helper_le_ldq_mmu,
2359
+ [MO_UQ] = helper_le_ldq_mmu,
2360
#endif
2361
};
2362
2363
@@ -XXX,XX +XXX,XX @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
2364
case MO_SL:
2365
tcg_out_opc_imm(s, OPC_LW, lo, base, 0);
2366
break;
2367
- case MO_Q:
2368
+ case MO_UQ:
2369
/* Prefer to load from offset 0 first, but allow for overlap. */
2370
if (TCG_TARGET_REG_BITS == 64) {
2371
tcg_out_opc_imm(s, OPC_LD, lo, base, 0);
2372
diff --git a/tcg/s390x/tcg-target.c.inc b/tcg/s390x/tcg-target.c.inc
2373
index XXXXXXX..XXXXXXX 100644
2374
--- a/tcg/s390x/tcg-target.c.inc
2375
+++ b/tcg/s390x/tcg-target.c.inc
2376
@@ -XXX,XX +XXX,XX @@ static void * const qemu_ld_helpers[(MO_SSIZE | MO_BSWAP) + 1] = {
2377
[MO_LESW] = helper_le_ldsw_mmu,
2378
[MO_LEUL] = helper_le_ldul_mmu,
2379
[MO_LESL] = helper_le_ldsl_mmu,
2380
- [MO_LEQ] = helper_le_ldq_mmu,
2381
+ [MO_LEUQ] = helper_le_ldq_mmu,
2382
[MO_BEUW] = helper_be_lduw_mmu,
2383
[MO_BESW] = helper_be_ldsw_mmu,
2384
[MO_BEUL] = helper_be_ldul_mmu,
2385
[MO_BESL] = helper_be_ldsl_mmu,
2386
- [MO_BEQ] = helper_be_ldq_mmu,
2387
+ [MO_BEUQ] = helper_be_ldq_mmu,
2388
};
2389
2390
static void * const qemu_st_helpers[(MO_SIZE | MO_BSWAP) + 1] = {
2391
[MO_UB] = helper_ret_stb_mmu,
2392
[MO_LEUW] = helper_le_stw_mmu,
2393
[MO_LEUL] = helper_le_stl_mmu,
2394
- [MO_LEQ] = helper_le_stq_mmu,
2395
+ [MO_LEUQ] = helper_le_stq_mmu,
2396
[MO_BEUW] = helper_be_stw_mmu,
2397
[MO_BEUL] = helper_be_stl_mmu,
2398
- [MO_BEQ] = helper_be_stq_mmu,
2399
+ [MO_BEUQ] = helper_be_stq_mmu,
2400
};
2401
#endif
2402
2403
@@ -XXX,XX +XXX,XX @@ static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg data,
2404
tcg_out_insn(s, RXY, LGF, data, base, index, disp);
2405
break;
2406
2407
- case MO_Q | MO_BSWAP:
2408
+ case MO_UQ | MO_BSWAP:
2409
tcg_out_insn(s, RXY, LRVG, data, base, index, disp);
2410
break;
2411
- case MO_Q:
2412
+ case MO_UQ:
2413
tcg_out_insn(s, RXY, LG, data, base, index, disp);
2414
break;
2415
2416
@@ -XXX,XX +XXX,XX @@ static void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc, TCGReg data,
2417
}
2418
break;
2419
2420
- case MO_Q | MO_BSWAP:
2421
+ case MO_UQ | MO_BSWAP:
2422
tcg_out_insn(s, RXY, STRVG, data, base, index, disp);
2423
break;
2424
- case MO_Q:
2425
+ case MO_UQ:
2426
tcg_out_insn(s, RXY, STG, data, base, index, disp);
2427
break;
2428
2429
@@ -XXX,XX +XXX,XX @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
2430
case MO_UL:
2431
tgen_ext32u(s, TCG_REG_R4, data_reg);
2432
break;
2433
- case MO_Q:
2434
+ case MO_UQ:
2435
tcg_out_mov(s, TCG_TYPE_I64, TCG_REG_R4, data_reg);
2436
break;
2437
default:
2438
diff --git a/tcg/sparc/tcg-target.c.inc b/tcg/sparc/tcg-target.c.inc
2439
index XXXXXXX..XXXXXXX 100644
2440
--- a/tcg/sparc/tcg-target.c.inc
2441
+++ b/tcg/sparc/tcg-target.c.inc
2442
@@ -XXX,XX +XXX,XX @@ static void build_trampolines(TCGContext *s)
2443
[MO_LEUW] = helper_le_lduw_mmu,
2444
[MO_LESW] = helper_le_ldsw_mmu,
2445
[MO_LEUL] = helper_le_ldul_mmu,
2446
- [MO_LEQ] = helper_le_ldq_mmu,
2447
+ [MO_LEUQ] = helper_le_ldq_mmu,
2448
[MO_BEUW] = helper_be_lduw_mmu,
2449
[MO_BESW] = helper_be_ldsw_mmu,
2450
[MO_BEUL] = helper_be_ldul_mmu,
2451
- [MO_BEQ] = helper_be_ldq_mmu,
2452
+ [MO_BEUQ] = helper_be_ldq_mmu,
2453
};
2454
static void * const qemu_st_helpers[] = {
2455
[MO_UB] = helper_ret_stb_mmu,
2456
[MO_LEUW] = helper_le_stw_mmu,
2457
[MO_LEUL] = helper_le_stl_mmu,
2458
- [MO_LEQ] = helper_le_stq_mmu,
2459
+ [MO_LEUQ] = helper_le_stq_mmu,
2460
[MO_BEUW] = helper_be_stw_mmu,
2461
[MO_BEUL] = helper_be_stl_mmu,
2462
- [MO_BEQ] = helper_be_stq_mmu,
2463
+ [MO_BEUQ] = helper_be_stq_mmu,
2464
};
2465
2466
int i;
2467
@@ -XXX,XX +XXX,XX @@ static const int qemu_ld_opc[(MO_SSIZE | MO_BSWAP) + 1] = {
2468
[MO_BESW] = LDSH,
2469
[MO_BEUL] = LDUW,
2470
[MO_BESL] = LDSW,
2471
- [MO_BEQ] = LDX,
2472
+ [MO_BEUQ] = LDX,
2473
2474
[MO_LEUW] = LDUH_LE,
2475
[MO_LESW] = LDSH_LE,
2476
[MO_LEUL] = LDUW_LE,
2477
[MO_LESL] = LDSW_LE,
2478
- [MO_LEQ] = LDX_LE,
2479
+ [MO_LEUQ] = LDX_LE,
2480
};
2481
2482
static const int qemu_st_opc[(MO_SIZE | MO_BSWAP) + 1] = {
2483
@@ -XXX,XX +XXX,XX @@ static const int qemu_st_opc[(MO_SIZE | MO_BSWAP) + 1] = {
2484
2485
[MO_BEUW] = STH,
2486
[MO_BEUL] = STW,
2487
- [MO_BEQ] = STX,
2488
+ [MO_BEUQ] = STX,
2489
2490
[MO_LEUW] = STH_LE,
2491
[MO_LEUL] = STW_LE,
2492
- [MO_LEQ] = STX_LE,
2493
+ [MO_LEUQ] = STX_LE,
2494
};
2495
2496
static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
2497
diff --git a/target/s390x/tcg/insn-data.def b/target/s390x/tcg/insn-data.def
2498
index XXXXXXX..XXXXXXX 100644
2499
--- a/target/s390x/tcg/insn-data.def
2500
+++ b/target/s390x/tcg/insn-data.def
2501
@@ -XXX,XX +XXX,XX @@
2502
D(0xeb6a, ASI, SIY, GIE, la1, i2, new, 0, asi, adds32, MO_TESL)
2503
C(0xecd8, AHIK, RIE_d, DO, r3, i2, new, r1_32, add, adds32)
2504
C(0xc208, AGFI, RIL_a, EI, r1, i2, r1, 0, add, adds64)
2505
- D(0xeb7a, AGSI, SIY, GIE, la1, i2, new, 0, asi, adds64, MO_TEQ)
2506
+ D(0xeb7a, AGSI, SIY, GIE, la1, i2, new, 0, asi, adds64, MO_TEUQ)
2507
C(0xecd9, AGHIK, RIE_d, DO, r3, i2, r1, 0, add, adds64)
2508
/* ADD IMMEDIATE HIGH */
2509
C(0xcc08, AIH, RIL_a, HW, r1_sr32, i2, new, r1_32h, add, adds32)
2510
@@ -XXX,XX +XXX,XX @@
2511
/* ADD LOGICAL WITH SIGNED IMMEDIATE */
2512
D(0xeb6e, ALSI, SIY, GIE, la1, i2_32u, new, 0, asi, addu32, MO_TEUL)
2513
C(0xecda, ALHSIK, RIE_d, DO, r3_32u, i2_32u, new, r1_32, add, addu32)
2514
- D(0xeb7e, ALGSI, SIY, GIE, la1, i2, new, 0, asiu64, addu64, MO_TEQ)
2515
+ D(0xeb7e, ALGSI, SIY, GIE, la1, i2, new, 0, asiu64, addu64, MO_TEUQ)
2516
C(0xecdb, ALGHSIK, RIE_d, DO, r3, i2, r1, 0, addu64, addu64)
2517
/* ADD LOGICAL WITH SIGNED IMMEDIATE HIGH */
2518
C(0xcc0a, ALSIH, RIL_a, HW, r1_sr32, i2_32u, new, r1_32h, add, addu32)
2519
@@ -XXX,XX +XXX,XX @@
2520
/* COMPARE AND SWAP */
2521
D(0xba00, CS, RS_a, Z, r3_32u, r1_32u, new, r1_32, cs, 0, MO_TEUL)
2522
D(0xeb14, CSY, RSY_a, LD, r3_32u, r1_32u, new, r1_32, cs, 0, MO_TEUL)
2523
- D(0xeb30, CSG, RSY_a, Z, r3_o, r1_o, new, r1, cs, 0, MO_TEQ)
2524
+ D(0xeb30, CSG, RSY_a, Z, r3_o, r1_o, new, r1, cs, 0, MO_TEUQ)
2525
/* COMPARE DOUBLE AND SWAP */
2526
- D(0xbb00, CDS, RS_a, Z, r3_D32, r1_D32, new, r1_D32, cs, 0, MO_TEQ)
2527
- D(0xeb31, CDSY, RSY_a, LD, r3_D32, r1_D32, new, r1_D32, cs, 0, MO_TEQ)
2528
+ D(0xbb00, CDS, RS_a, Z, r3_D32, r1_D32, new, r1_D32, cs, 0, MO_TEUQ)
2529
+ D(0xeb31, CDSY, RSY_a, LD, r3_D32, r1_D32, new, r1_D32, cs, 0, MO_TEUQ)
2530
C(0xeb3e, CDSG, RSY_a, Z, 0, 0, 0, 0, cdsg, 0)
2531
/* COMPARE AND SWAP AND STORE */
2532
C(0xc802, CSST, SSF, CASS, la1, a2, 0, 0, csst, 0)
2533
@@ -XXX,XX +XXX,XX @@
2534
C(0xc000, LARL, RIL_b, Z, 0, ri2, 0, r1, mov2, 0)
2535
/* LOAD AND ADD */
2536
D(0xebf8, LAA, RSY_a, ILA, r3_32s, a2, new, in2_r1_32, laa, adds32, MO_TESL)
2537
- D(0xebe8, LAAG, RSY_a, ILA, r3, a2, new, in2_r1, laa, adds64, MO_TEQ)
2538
+ D(0xebe8, LAAG, RSY_a, ILA, r3, a2, new, in2_r1, laa, adds64, MO_TEUQ)
2539
/* LOAD AND ADD LOGICAL */
2540
D(0xebfa, LAAL, RSY_a, ILA, r3_32u, a2, new, in2_r1_32, laa, addu32, MO_TEUL)
2541
- D(0xebea, LAALG, RSY_a, ILA, r3, a2, new, in2_r1, laa, addu64, MO_TEQ)
2542
+ D(0xebea, LAALG, RSY_a, ILA, r3, a2, new, in2_r1, laa, addu64, MO_TEUQ)
2543
/* LOAD AND AND */
2544
D(0xebf4, LAN, RSY_a, ILA, r3_32s, a2, new, in2_r1_32, lan, nz32, MO_TESL)
2545
- D(0xebe4, LANG, RSY_a, ILA, r3, a2, new, in2_r1, lan, nz64, MO_TEQ)
2546
+ D(0xebe4, LANG, RSY_a, ILA, r3, a2, new, in2_r1, lan, nz64, MO_TEUQ)
2547
/* LOAD AND EXCLUSIVE OR */
2548
D(0xebf7, LAX, RSY_a, ILA, r3_32s, a2, new, in2_r1_32, lax, nz32, MO_TESL)
2549
- D(0xebe7, LAXG, RSY_a, ILA, r3, a2, new, in2_r1, lax, nz64, MO_TEQ)
2550
+ D(0xebe7, LAXG, RSY_a, ILA, r3, a2, new, in2_r1, lax, nz64, MO_TEUQ)
2551
/* LOAD AND OR */
2552
D(0xebf6, LAO, RSY_a, ILA, r3_32s, a2, new, in2_r1_32, lao, nz32, MO_TESL)
2553
- D(0xebe6, LAOG, RSY_a, ILA, r3, a2, new, in2_r1, lao, nz64, MO_TEQ)
2554
+ D(0xebe6, LAOG, RSY_a, ILA, r3, a2, new, in2_r1, lao, nz64, MO_TEUQ)
2555
/* LOAD AND TEST */
2556
C(0x1200, LTR, RR_a, Z, 0, r2_o, 0, cond_r1r2_32, mov2, s32)
2557
C(0xb902, LTGR, RRE, Z, 0, r2_o, 0, r1, mov2, s64)
2558
@@ -XXX,XX +XXX,XX @@
2559
C(0xebe0, LOCFH, RSY_b, LOC2, r1_sr32, m2_32u, new, r1_32h, loc, 0)
2560
/* LOAD PAIR DISJOINT */
2561
D(0xc804, LPD, SSF, ILA, 0, 0, new_P, r3_P32, lpd, 0, MO_TEUL)
2562
- D(0xc805, LPDG, SSF, ILA, 0, 0, new_P, r3_P64, lpd, 0, MO_TEQ)
2563
+ D(0xc805, LPDG, SSF, ILA, 0, 0, new_P, r3_P64, lpd, 0, MO_TEUQ)
2564
/* LOAD PAIR FROM QUADWORD */
2565
C(0xe38f, LPQ, RXY_a, Z, 0, a2, r1_P, 0, lpq, 0)
2566
/* LOAD POSITIVE */
2567
@@ -XXX,XX +XXX,XX @@
2568
#ifndef CONFIG_USER_ONLY
2569
/* COMPARE AND SWAP AND PURGE */
2570
E(0xb250, CSP, RRE, Z, r1_32u, ra2, r1_P, 0, csp, 0, MO_TEUL, IF_PRIV)
2571
- E(0xb98a, CSPG, RRE, DAT_ENH, r1_o, ra2, r1_P, 0, csp, 0, MO_TEQ, IF_PRIV)
2572
+ E(0xb98a, CSPG, RRE, DAT_ENH, r1_o, ra2, r1_P, 0, csp, 0, MO_TEUQ, IF_PRIV)
2573
/* DIAGNOSE (KVM hypercall) */
2574
F(0x8300, DIAG, RSI, Z, 0, 0, 0, 0, diag, 0, IF_PRIV | IF_IO)
2575
/* INSERT STORAGE KEY EXTENDED */
2576
@@ -XXX,XX +XXX,XX @@
2577
F(0xe303, LRAG, RXY_a, Z, 0, a2, r1, 0, lra, 0, IF_PRIV)
2578
/* LOAD USING REAL ADDRESS */
2579
E(0xb24b, LURA, RRE, Z, 0, ra2, new, r1_32, lura, 0, MO_TEUL, IF_PRIV)
2580
- E(0xb905, LURAG, RRE, Z, 0, ra2, r1, 0, lura, 0, MO_TEQ, IF_PRIV)
2581
+ E(0xb905, LURAG, RRE, Z, 0, ra2, r1, 0, lura, 0, MO_TEUQ, IF_PRIV)
2582
/* MOVE TO PRIMARY */
2583
F(0xda00, MVCP, SS_d, Z, la1, a2, 0, 0, mvcp, 0, IF_PRIV)
2584
/* MOVE TO SECONDARY */
2585
@@ -XXX,XX +XXX,XX @@
2586
F(0xad00, STOSM, SI, Z, la1, 0, 0, 0, stnosm, 0, IF_PRIV)
2587
/* STORE USING REAL ADDRESS */
2588
E(0xb246, STURA, RRE, Z, r1_o, ra2, 0, 0, stura, 0, MO_TEUL, IF_PRIV)
2589
- E(0xb925, STURG, RRE, Z, r1_o, ra2, 0, 0, stura, 0, MO_TEQ, IF_PRIV)
2590
+ E(0xb925, STURG, RRE, Z, r1_o, ra2, 0, 0, stura, 0, MO_TEUQ, IF_PRIV)
2591
/* TEST BLOCK */
2592
F(0xb22c, TB, RRE, Z, 0, r2_o, 0, 0, testblock, 0, IF_PRIV)
2593
/* TEST PROTECTION */
2594
--
2595
2.31.1
2596
2597
diff view generated by jsdifflib
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>

Adding defines to handle signed 64-bit and unsigned 128-bit quantities in
memory accesses.

Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106210108.138226-3-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
include/exec/memop.h | 7 +++++++
1 file changed, 7 insertions(+)

diff --git a/include/exec/memop.h b/include/exec/memop.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memop.h
+++ b/include/exec/memop.h
@@ -XXX,XX +XXX,XX @@ typedef enum MemOp {
MO_UW = MO_16,
MO_UL = MO_32,
MO_UQ = MO_64,
+ MO_UO = MO_128,
MO_SB = MO_SIGN | MO_8,
MO_SW = MO_SIGN | MO_16,
MO_SL = MO_SIGN | MO_32,
+ MO_SQ = MO_SIGN | MO_64,
+ MO_SO = MO_SIGN | MO_128,

MO_LEUW = MO_LE | MO_UW,
MO_LEUL = MO_LE | MO_UL,
MO_LEUQ = MO_LE | MO_UQ,
MO_LESW = MO_LE | MO_SW,
MO_LESL = MO_LE | MO_SL,
+ MO_LESQ = MO_LE | MO_SQ,

MO_BEUW = MO_BE | MO_UW,
MO_BEUL = MO_BE | MO_UL,
MO_BEUQ = MO_BE | MO_UQ,
MO_BESW = MO_BE | MO_SW,
MO_BESL = MO_BE | MO_SL,
+ MO_BESQ = MO_BE | MO_SQ,

#ifdef NEED_CPU_H
MO_TEUW = MO_TE | MO_UW,
MO_TEUL = MO_TE | MO_UL,
MO_TEUQ = MO_TE | MO_UQ,
+ MO_TEUO = MO_TE | MO_UO,
MO_TESW = MO_TE | MO_SW,
MO_TESL = MO_TE | MO_SL,
+ MO_TESQ = MO_TE | MO_SQ,
#endif

MO_SSIZE = MO_SIZE | MO_SIGN,
--
2.31.1

From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>

Addition of div and rem on 128-bit integers, using the 128/64->128 divu and
64x64->128 mulu in host-utils.
These operations will be used within div/rem helpers in the 128-bit riscv
target.

Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Co-authored-by: Fabien Portas <fabien.portas@grenoble-inp.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106210108.138226-4-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
include/qemu/int128.h | 27 ++++++++
util/int128.c | 147 ++++++++++++++++++++++++++++++++++++++++++
util/meson.build | 1 +
3 files changed, 175 insertions(+)
create mode 100644 util/int128.c

diff --git a/include/qemu/int128.h b/include/qemu/int128.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/int128.h
+++ b/include/qemu/int128.h
@@ -XXX,XX +XXX,XX @@ static inline Int128 bswap128(Int128 a)
#endif
}

+static inline Int128 int128_divu(Int128 a, Int128 b)
+{
+ return (__uint128_t)a / (__uint128_t)b;
+}
+
+static inline Int128 int128_remu(Int128 a, Int128 b)
+{
+ return (__uint128_t)a % (__uint128_t)b;
+}
+
+static inline Int128 int128_divs(Int128 a, Int128 b)
+{
+ return a / b;
+}
+
+static inline Int128 int128_rems(Int128 a, Int128 b)
+{
+ return a % b;
+}
+
#else /* !CONFIG_INT128 */

typedef struct Int128 Int128;

@@ -XXX,XX +XXX,XX @@ static inline Int128 bswap128(Int128 a)
return int128_make128(bswap64(a.hi), bswap64(a.lo));
}

+Int128 int128_divu(Int128, Int128);
+Int128 int128_remu(Int128, Int128);
+Int128 int128_divs(Int128, Int128);
+Int128 int128_rems(Int128, Int128);
+
#endif /* CONFIG_INT128 */

static inline void bswap128s(Int128 *s)
@@ -XXX,XX +XXX,XX @@ static inline void bswap128s(Int128 *s)
*s = bswap128(*s);
}

+#define UINT128_MAX int128_make128(~0LL, ~0LL)
+
#endif /* INT128_H */
diff --git a/util/int128.c b/util/int128.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/util/int128.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * 128-bit division and remainder for compilers not supporting __int128
+ *
+ * Copyright (c) 2021 Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/host-utils.h"
+#include "qemu/int128.h"
+
+#ifndef CONFIG_INT128
+
+/*
+ * Division and remainder algorithms for 128-bit due to Stefan Kanthak,
+ * https://skanthak.homepage.t-online.de/integer.html#udivmodti4
+ * Preconditions:
+ * - function should never be called with v equal to 0, it has to
+ * be dealt with beforehand
+ * - quotient pointer must be valid
+ */
+static Int128 divrem128(Int128 u, Int128 v, Int128 *q)
+{
+ Int128 qq;
+ uint64_t hi, lo, tmp;
+ int s = clz64(v.hi);
+
+ if (s == 64) {
+ /* we have uu÷0v => let's use divu128 */
+ hi = u.hi;
+ lo = u.lo;
+ tmp = divu128(&lo, &hi, v.lo);
+ *q = int128_make128(lo, hi);
+ return int128_make128(tmp, 0);
+ } else {
+ hi = int128_gethi(int128_lshift(v, s));
+
+ if (hi > u.hi) {
+ lo = u.lo;
+ tmp = u.hi;
+ divu128(&lo, &tmp, hi);
+ lo = int128_gethi(int128_lshift(int128_make128(lo, 0), s));
+ } else { /* prevent overflow */
+ lo = u.lo;
+ tmp = u.hi - hi;
+ divu128(&lo, &tmp, hi);
+ lo = int128_gethi(int128_lshift(int128_make128(lo, 1), s));
+ }
+
+ qq = int128_make64(lo);
+
+ tmp = lo * v.hi;
+ mulu64(&lo, &hi, lo, v.lo);
+ hi += tmp;
+
+ if (hi < tmp /* quotient * divisor >= 2**128 > dividend */
+ || hi > u.hi /* quotient * divisor > dividend */
+ || (hi == u.hi && lo > u.lo)) {
+ qq.lo -= 1;
+ mulu64(&lo, &hi, qq.lo, v.lo);
+ hi += qq.lo * v.hi;
+ }
+
+ *q = qq;
+ u.hi -= hi + (u.lo < lo);
+ u.lo -= lo;
+ return u;
+ }
+}
+
+Int128 int128_divu(Int128 a, Int128 b)
+{
+ Int128 q;
+ divrem128(a, b, &q);
+ return q;
+}
+
+Int128 int128_remu(Int128 a, Int128 b)
+{
+ Int128 q;
+ return divrem128(a, b, &q);
+}
+
+Int128 int128_divs(Int128 a, Int128 b)
+{
+ Int128 q;
+ bool sgna = !int128_nonneg(a);
+ bool sgnb = !int128_nonneg(b);
+
+ if (sgna) {
+ a = int128_neg(a);
+ }
+
+ if (sgnb) {
+ b = int128_neg(b);
+ }
+
+ divrem128(a, b, &q);
+
+ if (sgna != sgnb) {
+ q = int128_neg(q);
+ }
+
+ return q;
+}
+
+Int128 int128_rems(Int128 a, Int128 b)
+{
+ Int128 q, r;
+ bool sgna = !int128_nonneg(a);
+ bool sgnb = !int128_nonneg(b);
+
+ if (sgna) {
+ a = int128_neg(a);
+ }
+
+ if (sgnb) {
+ b = int128_neg(b);
+ }
+
+ r = divrem128(a, b, &q);
+
+ if (sgna) {
+ r = int128_neg(r);
+ }
+
+ return r;
+}
+
+#endif
diff --git a/util/meson.build b/util/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/util/meson.build
+++ b/util/meson.build
@@ -XXX,XX +XXX,XX @@ util_ss.add(files('transactions.c'))
util_ss.add(when: 'CONFIG_POSIX', if_true: files('drm.c'))
util_ss.add(files('guest-random.c'))
util_ss.add(files('yank.c'))
+util_ss.add(files('int128.c'))

if have_user
util_ss.add(files('selfmap.c'))
--
2.31.1

From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>

Given that the 128-bit version of the riscv spec adds new instructions, and
that some instructions that were previously only available in 64-bit mode
are now available for both 64-bit and 128-bit, we added new macros to check
for the processor mode during translation.
Although RV128 is a superset of RV64, we keep for now the RV64-only tests
for extensions other than RVI and RVM.

Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Co-authored-by: Fabien Portas <fabien.portas@grenoble-inp.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106210108.138226-5-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/translate.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@ EX_SH(12)
} \
} while (0)

-#define REQUIRE_64BIT(ctx) do { \
- if (get_xl(ctx) < MXL_RV64) { \
- return false; \
- } \
+#define REQUIRE_64BIT(ctx) do { \
+ if (get_xl(ctx) != MXL_RV64) { \
+ return false; \
+ } \
+} while (0)
+
+#define REQUIRE_128BIT(ctx) do { \
+ if (get_xl(ctx) != MXL_RV128) { \
+ return false; \
+ } \
+} while (0)
+
+#define REQUIRE_64_OR_128BIT(ctx) do { \
+ if (get_xl(ctx) == MXL_RV32) { \
+ return false; \
+ } \
} while (0)

static int ex_rvc_register(DisasContext *ctx, int reg)
--
2.31.1

diff view generated by jsdifflib
New patch
1
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
1
2
3
Introduction of a gen_logic function for bitwise logic to implement
4
instructions in which no propagation of information occurs between bits and
5
use of this function on the bitwise instructions.
6
7
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
8
Co-authored-by: Fabien Portas <fabien.portas@grenoble-inp.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
11
Message-id: 20220106210108.138226-6-frederic.petrot@univ-grenoble-alpes.fr
12
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
13
---
14
target/riscv/translate.c | 27 +++++++++++++++++++++++++
15
target/riscv/insn_trans/trans_rvb.c.inc | 6 +++---
16
target/riscv/insn_trans/trans_rvi.c.inc | 12 +++++------
17
3 files changed, 36 insertions(+), 9 deletions(-)
18
19
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/riscv/translate.c
22
+++ b/target/riscv/translate.c
23
@@ -XXX,XX +XXX,XX @@ static int ex_rvc_shifti(DisasContext *ctx, int imm)
24
/* Include the auto-generated decoder for 32 bit insn */
25
#include "decode-insn32.c.inc"
26
27
+static bool gen_logic_imm_fn(DisasContext *ctx, arg_i *a,
28
+ void (*func)(TCGv, TCGv, target_long))
29
+{
30
+ TCGv dest = dest_gpr(ctx, a->rd);
31
+ TCGv src1 = get_gpr(ctx, a->rs1, EXT_NONE);
32
+
33
+ func(dest, src1, a->imm);
34
+
35
+ gen_set_gpr(ctx, a->rd, dest);
36
+
37
+ return true;
38
+}
39
+
40
+static bool gen_logic(DisasContext *ctx, arg_r *a,
41
+ void (*func)(TCGv, TCGv, TCGv))
42
+{
43
+ TCGv dest = dest_gpr(ctx, a->rd);
44
+ TCGv src1 = get_gpr(ctx, a->rs1, EXT_NONE);
45
+ TCGv src2 = get_gpr(ctx, a->rs2, EXT_NONE);
46
+
47
+ func(dest, src1, src2);
48
+
49
+ gen_set_gpr(ctx, a->rd, dest);
50
+
51
+ return true;
52
+}
53
+
54
static bool gen_arith_imm_fn(DisasContext *ctx, arg_i *a, DisasExtend ext,
55
void (*func)(TCGv, TCGv, target_long))
56
{
57
diff --git a/target/riscv/insn_trans/trans_rvb.c.inc b/target/riscv/insn_trans/trans_rvb.c.inc
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/riscv/insn_trans/trans_rvb.c.inc
60
+++ b/target/riscv/insn_trans/trans_rvb.c.inc
61
@@ -XXX,XX +XXX,XX @@ static bool trans_cpop(DisasContext *ctx, arg_cpop *a)
62
static bool trans_andn(DisasContext *ctx, arg_andn *a)
63
{
64
REQUIRE_ZBB(ctx);
65
- return gen_arith(ctx, a, EXT_NONE, tcg_gen_andc_tl);
66
+ return gen_logic(ctx, a, tcg_gen_andc_tl);
67
}
68
69
static bool trans_orn(DisasContext *ctx, arg_orn *a)
70
{
71
REQUIRE_ZBB(ctx);
72
- return gen_arith(ctx, a, EXT_NONE, tcg_gen_orc_tl);
73
+ return gen_logic(ctx, a, tcg_gen_orc_tl);
74
}
75
76
static bool trans_xnor(DisasContext *ctx, arg_xnor *a)
77
{
78
REQUIRE_ZBB(ctx);
79
- return gen_arith(ctx, a, EXT_NONE, tcg_gen_eqv_tl);
80
+ return gen_logic(ctx, a, tcg_gen_eqv_tl);
81
}
82
83
static bool trans_min(DisasContext *ctx, arg_min *a)
84
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
85
index XXXXXXX..XXXXXXX 100644
86
--- a/target/riscv/insn_trans/trans_rvi.c.inc
87
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
88
@@ -XXX,XX +XXX,XX @@ static bool trans_sltiu(DisasContext *ctx, arg_sltiu *a)
89
90
static bool trans_xori(DisasContext *ctx, arg_xori *a)
91
{
92
- return gen_arith_imm_fn(ctx, a, EXT_NONE, tcg_gen_xori_tl);
93
+ return gen_logic_imm_fn(ctx, a, tcg_gen_xori_tl);
94
}
95
96
static bool trans_ori(DisasContext *ctx, arg_ori *a)
97
{
98
- return gen_arith_imm_fn(ctx, a, EXT_NONE, tcg_gen_ori_tl);
99
+ return gen_logic_imm_fn(ctx, a, tcg_gen_ori_tl);
100
}
101
102
static bool trans_andi(DisasContext *ctx, arg_andi *a)
103
{
104
- return gen_arith_imm_fn(ctx, a, EXT_NONE, tcg_gen_andi_tl);
105
+ return gen_logic_imm_fn(ctx, a, tcg_gen_andi_tl);
106
}
107
108
static bool trans_slli(DisasContext *ctx, arg_slli *a)
109
@@ -XXX,XX +XXX,XX @@ static bool trans_sltu(DisasContext *ctx, arg_sltu *a)
110
111
static bool trans_xor(DisasContext *ctx, arg_xor *a)
112
{
113
- return gen_arith(ctx, a, EXT_NONE, tcg_gen_xor_tl);
114
+ return gen_logic(ctx, a, tcg_gen_xor_tl);
115
}
116
117
static bool trans_srl(DisasContext *ctx, arg_srl *a)
118
@@ -XXX,XX +XXX,XX @@ static bool trans_sra(DisasContext *ctx, arg_sra *a)
119
120
static bool trans_or(DisasContext *ctx, arg_or *a)
121
{
122
- return gen_arith(ctx, a, EXT_NONE, tcg_gen_or_tl);
123
+ return gen_logic(ctx, a, tcg_gen_or_tl);
124
}
125
126
static bool trans_and(DisasContext *ctx, arg_and *a)
127
{
128
- return gen_arith(ctx, a, EXT_NONE, tcg_gen_and_tl);
129
+ return gen_logic(ctx, a, tcg_gen_and_tl);
130
}
131
132
static bool trans_addiw(DisasContext *ctx, arg_addiw *a)
133
--
134
2.31.1
135
136
diff view generated by jsdifflib
New patch
1
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
1
2
3
The upper 64-bit of the 128-bit registers have now a place inside
4
the cpu state structure, and are created as globals for future use.
5
6
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
7
Co-authored-by: Fabien Portas <fabien.portas@grenoble-inp.org>
8
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
9
Message-id: 20220106210108.138226-7-frederic.petrot@univ-grenoble-alpes.fr
10
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
11
---
12
target/riscv/cpu.h | 2 ++
13
target/riscv/cpu.c | 9 +++++++++
14
target/riscv/machine.c | 20 ++++++++++++++++++++
15
target/riscv/translate.c | 5 ++++-
16
4 files changed, 35 insertions(+), 1 deletion(-)
17
18
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/riscv/cpu.h
21
+++ b/target/riscv/cpu.h
22
@@ -XXX,XX +XXX,XX @@ FIELD(VTYPE, VILL, sizeof(target_ulong) * 8 - 1, 1)
23
24
struct CPURISCVState {
25
target_ulong gpr[32];
26
+ target_ulong gprh[32]; /* 64 top bits of the 128-bit registers */
27
uint64_t fpr[32]; /* assume both F and D extensions */
28
29
/* vector coprocessor state. */
30
@@ -XXX,XX +XXX,XX @@ static inline bool riscv_feature(CPURISCVState *env, int feature)
31
#include "cpu_user.h"
32
33
extern const char * const riscv_int_regnames[];
34
+extern const char * const riscv_int_regnamesh[];
35
extern const char * const riscv_fpr_regnames[];
36
37
const char *riscv_cpu_get_trap_name(target_ulong cause, bool async);
38
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/riscv/cpu.c
41
+++ b/target/riscv/cpu.c
42
@@ -XXX,XX +XXX,XX @@ const char * const riscv_int_regnames[] = {
43
"x28/t3", "x29/t4", "x30/t5", "x31/t6"
44
};
45
46
+const char * const riscv_int_regnamesh[] = {
47
+ "x0h/zeroh", "x1h/rah", "x2h/sph", "x3h/gph", "x4h/tph", "x5h/t0h",
48
+ "x6h/t1h", "x7h/t2h", "x8h/s0h", "x9h/s1h", "x10h/a0h", "x11h/a1h",
49
+ "x12h/a2h", "x13h/a3h", "x14h/a4h", "x15h/a5h", "x16h/a6h", "x17h/a7h",
50
+ "x18h/s2h", "x19h/s3h", "x20h/s4h", "x21h/s5h", "x22h/s6h", "x23h/s7h",
51
+ "x24h/s8h", "x25h/s9h", "x26h/s10h", "x27h/s11h", "x28h/t3h", "x29h/t4h",
52
+ "x30h/t5h", "x31h/t6h"
53
+};
54
+
55
const char * const riscv_fpr_regnames[] = {
56
"f0/ft0", "f1/ft1", "f2/ft2", "f3/ft3", "f4/ft4", "f5/ft5",
57
"f6/ft6", "f7/ft7", "f8/fs0", "f9/fs1", "f10/fa0", "f11/fa1",
58
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/riscv/machine.c
61
+++ b/target/riscv/machine.c
62
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_pointermasking = {
63
}
64
};
65
66
+static bool rv128_needed(void *opaque)
67
+{
68
+ RISCVCPU *cpu = opaque;
69
+ CPURISCVState *env = &cpu->env;
70
+
71
+ return env->misa_mxl_max == MXL_RV128;
72
+}
73
+
74
+static const VMStateDescription vmstate_rv128 = {
75
+ .name = "cpu/rv128",
76
+ .version_id = 1,
77
+ .minimum_version_id = 1,
78
+ .needed = rv128_needed,
79
+ .fields = (VMStateField[]) {
80
+ VMSTATE_UINTTL_ARRAY(env.gprh, RISCVCPU, 32),
81
+ VMSTATE_END_OF_LIST()
82
+ }
83
+};
84
+
85
const VMStateDescription vmstate_riscv_cpu = {
86
.name = "cpu",
87
.version_id = 3,
88
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_riscv_cpu = {
89
&vmstate_hyper,
90
&vmstate_vector,
91
&vmstate_pointermasking,
92
+ &vmstate_rv128,
93
NULL
94
}
95
};
96
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/target/riscv/translate.c
99
+++ b/target/riscv/translate.c
100
@@ -XXX,XX +XXX,XX @@
101
#include "internals.h"
102
103
/* global register indices */
104
-static TCGv cpu_gpr[32], cpu_pc, cpu_vl, cpu_vstart;
105
+static TCGv cpu_gpr[32], cpu_gprh[32], cpu_pc, cpu_vl, cpu_vstart;
106
static TCGv_i64 cpu_fpr[32]; /* assume F and D extensions */
107
static TCGv load_res;
108
static TCGv load_val;
109
@@ -XXX,XX +XXX,XX @@ void riscv_translate_init(void)
110
* unless you specifically block reads/writes to reg 0.
111
*/
112
cpu_gpr[0] = NULL;
113
+ cpu_gprh[0] = NULL;
114
115
for (i = 1; i < 32; i++) {
116
cpu_gpr[i] = tcg_global_mem_new(cpu_env,
117
offsetof(CPURISCVState, gpr[i]), riscv_int_regnames[i]);
118
+ cpu_gprh[i] = tcg_global_mem_new(cpu_env,
119
+ offsetof(CPURISCVState, gprh[i]), riscv_int_regnamesh[i]);
120
}
121
122
for (i = 0; i < 32; i++) {
123
--
124
2.31.1
125
126
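[Editor's note] The patch above splits each 128-bit register into a low half (gpr) and a high half (gprh). A minimal Python sketch of that representation, with hypothetical helper names, may make the layout clearer:

```python
MASK64 = (1 << 64) - 1

def split128(value):
    """Split a 128-bit value into (low, high) 64-bit halves, as gpr/gprh."""
    return value & MASK64, (value >> 64) & MASK64

def join128(low, high):
    """Reassemble the full 128-bit value from its two halves."""
    return (high << 64) | low

v = 0x0123456789ABCDEF_FEDCBA9876543210
lo, hi = split128(v)
assert lo == 0xFEDCBA9876543210   # gpr holds the low 64 bits
assert hi == 0x0123456789ABCDEF   # gprh holds the top 64 bits
assert join128(lo, hi) == v
```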
1
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
1
2
3
This patch adds support for the '-cpu rv128' option to
4
qemu-system-riscv64 so that we can indicate that we want to run rv128
5
executables.
6
Still, there is no support for 128-bit insns at this stage, so QEMU fails
7
miserably (as expected) if launched with this option.
8
9
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
10
Co-authored-by: Fabien Portas <fabien.portas@grenoble-inp.org>
11
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
12
Message-id: 20220106210108.138226-8-frederic.petrot@univ-grenoble-alpes.fr
13
[ Changed by AF
14
- Rename CPU to "x-rv128"
15
]
16
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
17
---
18
include/disas/dis-asm.h | 1 +
19
target/riscv/cpu.h | 1 +
20
disas/riscv.c | 5 +++++
21
target/riscv/cpu.c | 20 ++++++++++++++++++++
22
target/riscv/gdbstub.c | 5 +++++
23
5 files changed, 32 insertions(+)
24
25
diff --git a/include/disas/dis-asm.h b/include/disas/dis-asm.h
26
index XXXXXXX..XXXXXXX 100644
27
--- a/include/disas/dis-asm.h
28
+++ b/include/disas/dis-asm.h
29
@@ -XXX,XX +XXX,XX @@ int print_insn_nios2(bfd_vma, disassemble_info*);
30
int print_insn_xtensa (bfd_vma, disassemble_info*);
31
int print_insn_riscv32 (bfd_vma, disassemble_info*);
32
int print_insn_riscv64 (bfd_vma, disassemble_info*);
33
+int print_insn_riscv128 (bfd_vma, disassemble_info*);
34
int print_insn_rx(bfd_vma, disassemble_info *);
35
int print_insn_hexagon(bfd_vma, disassemble_info *);
36
37
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/riscv/cpu.h
40
+++ b/target/riscv/cpu.h
41
@@ -XXX,XX +XXX,XX @@
42
#define TYPE_RISCV_CPU_ANY RISCV_CPU_TYPE_NAME("any")
43
#define TYPE_RISCV_CPU_BASE32 RISCV_CPU_TYPE_NAME("rv32")
44
#define TYPE_RISCV_CPU_BASE64 RISCV_CPU_TYPE_NAME("rv64")
45
+#define TYPE_RISCV_CPU_BASE128 RISCV_CPU_TYPE_NAME("x-rv128")
46
#define TYPE_RISCV_CPU_IBEX RISCV_CPU_TYPE_NAME("lowrisc-ibex")
47
#define TYPE_RISCV_CPU_SHAKTI_C RISCV_CPU_TYPE_NAME("shakti-c")
48
#define TYPE_RISCV_CPU_SIFIVE_E31 RISCV_CPU_TYPE_NAME("sifive-e31")
49
diff --git a/disas/riscv.c b/disas/riscv.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/disas/riscv.c
52
+++ b/disas/riscv.c
53
@@ -XXX,XX +XXX,XX @@ int print_insn_riscv64(bfd_vma memaddr, struct disassemble_info *info)
54
{
55
return print_insn_riscv(memaddr, info, rv64);
56
}
57
+
58
+int print_insn_riscv128(bfd_vma memaddr, struct disassemble_info *info)
59
+{
60
+ return print_insn_riscv(memaddr, info, rv128);
61
+}
62
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
63
index XXXXXXX..XXXXXXX 100644
64
--- a/target/riscv/cpu.c
65
+++ b/target/riscv/cpu.c
66
@@ -XXX,XX +XXX,XX @@ static void rv64_sifive_e_cpu_init(Object *obj)
67
set_priv_version(env, PRIV_VERSION_1_10_0);
68
qdev_prop_set_bit(DEVICE(obj), "mmu", false);
69
}
70
+
71
+static void rv128_base_cpu_init(Object *obj)
72
+{
73
+ if (qemu_tcg_mttcg_enabled()) {
74
+ /* Missing 128-bit aligned atomics */
75
+ error_report("128-bit RISC-V currently does not work with Multi "
76
+ "Threaded TCG. Please use: -accel tcg,thread=single");
77
+ exit(EXIT_FAILURE);
78
+ }
79
+ CPURISCVState *env = &RISCV_CPU(obj)->env;
80
+ /* We set this in the realise function */
81
+ set_misa(env, MXL_RV128, 0);
82
+}
83
#else
84
static void rv32_base_cpu_init(Object *obj)
85
{
86
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_disas_set_info(CPUState *s, disassemble_info *info)
87
case MXL_RV64:
88
info->print_insn = print_insn_riscv64;
89
break;
90
+ case MXL_RV128:
91
+ info->print_insn = print_insn_riscv128;
92
+ break;
93
default:
94
g_assert_not_reached();
95
}
96
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
97
#ifdef TARGET_RISCV64
98
case MXL_RV64:
99
break;
100
+ case MXL_RV128:
101
+ break;
102
#endif
103
case MXL_RV32:
104
break;
105
@@ -XXX,XX +XXX,XX @@ static gchar *riscv_gdb_arch_name(CPUState *cs)
106
case MXL_RV32:
107
return g_strdup("riscv:rv32");
108
case MXL_RV64:
109
+ case MXL_RV128:
110
return g_strdup("riscv:rv64");
111
default:
112
g_assert_not_reached();
113
@@ -XXX,XX +XXX,XX @@ static const TypeInfo riscv_cpu_type_infos[] = {
114
DEFINE_CPU(TYPE_RISCV_CPU_SIFIVE_E51, rv64_sifive_e_cpu_init),
115
DEFINE_CPU(TYPE_RISCV_CPU_SIFIVE_U54, rv64_sifive_u_cpu_init),
116
DEFINE_CPU(TYPE_RISCV_CPU_SHAKTI_C, rv64_sifive_u_cpu_init),
117
+ DEFINE_CPU(TYPE_RISCV_CPU_BASE128, rv128_base_cpu_init),
118
#endif
119
};
120
121
diff --git a/target/riscv/gdbstub.c b/target/riscv/gdbstub.c
122
index XXXXXXX..XXXXXXX 100644
123
--- a/target/riscv/gdbstub.c
124
+++ b/target/riscv/gdbstub.c
125
@@ -XXX,XX +XXX,XX @@ static int riscv_gen_dynamic_csr_xml(CPUState *cs, int base_reg)
126
int bitsize = 16 << env->misa_mxl_max;
127
int i;
128
129
+ /* Until gdb knows about 128-bit registers */
130
+ if (bitsize > 64) {
131
+ bitsize = 64;
132
+ }
133
+
134
g_string_printf(s, "<?xml version=\"1.0\"?>");
135
g_string_append_printf(s, "<!DOCTYPE feature SYSTEM \"gdb-target.dtd\">");
136
g_string_append_printf(s, "<feature name=\"org.gnu.gdb.riscv.csr\">");
137
--
138
2.31.1
139
140
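[Editor's note] The gdbstub hunk above computes the CSR register width as 16 << misa_mxl_max and clamps it to 64 until gdb supports 128-bit registers. A small sketch of that computation, using the misa MXL encodings from the privileged spec:

```python
# misa.MXL encodings from the RISC-V privileged spec
MXL_RV32, MXL_RV64, MXL_RV128 = 1, 2, 3

def csr_bitsize(misa_mxl_max):
    bitsize = 16 << misa_mxl_max
    # Clamp until gdb knows about 128-bit registers
    return min(bitsize, 64)

assert csr_bitsize(MXL_RV32) == 32
assert csr_bitsize(MXL_RV64) == 64
assert csr_bitsize(MXL_RV128) == 64   # 128 clamped down to 64
```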
1
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
1
2
3
lwu and ld are functionally close to the other loads, but were after the
4
stores in the source file.
5
Similarly, xor was separated from or and and by two arithmetic functions, while
6
the immediate versions were nicely put together.
7
This patch moves the aforementioned loads after lhu, and xor above or,
8
where they more logically belong.
9
10
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
11
Co-authored-by: Fabien Portas <fabien.portas@grenoble-inp.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
14
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
15
Message-id: 20220106210108.138226-9-frederic.petrot@univ-grenoble-alpes.fr
16
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
17
---
18
target/riscv/insn_trans/trans_rvi.c.inc | 34 ++++++++++++-------------
19
1 file changed, 17 insertions(+), 17 deletions(-)
20
21
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
22
index XXXXXXX..XXXXXXX 100644
23
--- a/target/riscv/insn_trans/trans_rvi.c.inc
24
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
25
@@ -XXX,XX +XXX,XX @@ static bool trans_lhu(DisasContext *ctx, arg_lhu *a)
26
return gen_load(ctx, a, MO_TEUW);
27
}
28
29
+static bool trans_lwu(DisasContext *ctx, arg_lwu *a)
30
+{
31
+ REQUIRE_64BIT(ctx);
32
+ return gen_load(ctx, a, MO_TEUL);
33
+}
34
+
35
+static bool trans_ld(DisasContext *ctx, arg_ld *a)
36
+{
37
+ REQUIRE_64BIT(ctx);
38
+ return gen_load(ctx, a, MO_TEUQ);
39
+}
40
+
41
static bool gen_store(DisasContext *ctx, arg_sb *a, MemOp memop)
42
{
43
TCGv addr = get_gpr(ctx, a->rs1, EXT_NONE);
44
@@ -XXX,XX +XXX,XX @@ static bool trans_sw(DisasContext *ctx, arg_sw *a)
45
return gen_store(ctx, a, MO_TESL);
46
}
47
48
-static bool trans_lwu(DisasContext *ctx, arg_lwu *a)
49
-{
50
- REQUIRE_64BIT(ctx);
51
- return gen_load(ctx, a, MO_TEUL);
52
-}
53
-
54
-static bool trans_ld(DisasContext *ctx, arg_ld *a)
55
-{
56
- REQUIRE_64BIT(ctx);
57
- return gen_load(ctx, a, MO_TEUQ);
58
-}
59
-
60
static bool trans_sd(DisasContext *ctx, arg_sd *a)
61
{
62
REQUIRE_64BIT(ctx);
63
@@ -XXX,XX +XXX,XX @@ static bool trans_sltu(DisasContext *ctx, arg_sltu *a)
64
return gen_arith(ctx, a, EXT_SIGN, gen_sltu);
65
}
66
67
-static bool trans_xor(DisasContext *ctx, arg_xor *a)
68
-{
69
- return gen_logic(ctx, a, tcg_gen_xor_tl);
70
-}
71
-
72
static bool trans_srl(DisasContext *ctx, arg_srl *a)
73
{
74
return gen_shift(ctx, a, EXT_ZERO, tcg_gen_shr_tl);
75
@@ -XXX,XX +XXX,XX @@ static bool trans_sra(DisasContext *ctx, arg_sra *a)
76
return gen_shift(ctx, a, EXT_SIGN, tcg_gen_sar_tl);
77
}
78
79
+static bool trans_xor(DisasContext *ctx, arg_xor *a)
80
+{
81
+ return gen_logic(ctx, a, tcg_gen_xor_tl);
82
+}
83
+
84
static bool trans_or(DisasContext *ctx, arg_or *a)
85
{
86
return gen_logic(ctx, a, tcg_gen_or_tl);
87
--
88
2.31.1
89
90
1
1
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
2
3
Add a get function to retrieve the top 64 bits of a register, stored in the
4
gprh field of the cpu state, and a set function that writes the 128-bit value
5
at once. Access to the gprh field cannot be restricted at compile time to
6
make sure it is accessed only in the 128-bit version of the processor, because
7
we have no way to indicate that the misa_mxl_max field is const.
8
9
The 128-bit ISA adds ldu, lq and sq. We provide support for these
10
instructions. Note that (a) we compute only 64-bit addresses to actually
11
access memory, cowardly utilizing the existing address translation mechanism
12
of QEMU, and (b) we assume for now little-endian memory accesses.
13
14
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
15
Co-authored-by: Fabien Portas <fabien.portas@grenoble-inp.org>
16
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
17
Message-id: 20220106210108.138226-10-frederic.petrot@univ-grenoble-alpes.fr
18
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
19
---
20
target/riscv/insn16.decode | 27 ++++++-
21
target/riscv/insn32.decode | 5 ++
22
target/riscv/translate.c | 41 ++++++++++
23
target/riscv/insn_trans/trans_rvi.c.inc | 100 ++++++++++++++++++++++--
24
4 files changed, 163 insertions(+), 10 deletions(-)
25
26
diff --git a/target/riscv/insn16.decode b/target/riscv/insn16.decode
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/riscv/insn16.decode
29
+++ b/target/riscv/insn16.decode
30
@@ -XXX,XX +XXX,XX @@
31
# Immediates:
32
%imm_ci 12:s1 2:5
33
%nzuimm_ciw 7:4 11:2 5:1 6:1 !function=ex_shift_2
34
+%uimm_cl_q 10:1 5:2 11:2 !function=ex_shift_4
35
%uimm_cl_d 5:2 10:3 !function=ex_shift_3
36
%uimm_cl_w 5:1 10:3 6:1 !function=ex_shift_2
37
%imm_cb 12:s1 5:2 2:1 10:2 3:2 !function=ex_shift_1
38
%imm_cj 12:s1 8:1 9:2 6:1 7:1 2:1 11:1 3:3 !function=ex_shift_1
39
40
%shimm_6bit 12:1 2:5 !function=ex_rvc_shifti
41
+%uimm_6bit_lq 2:4 12:1 6:1 !function=ex_shift_4
42
%uimm_6bit_ld 2:3 12:1 5:2 !function=ex_shift_3
43
%uimm_6bit_lw 2:2 12:1 4:3 !function=ex_shift_2
44
+%uimm_6bit_sq 7:4 11:2 !function=ex_shift_4
45
%uimm_6bit_sd 7:3 10:3 !function=ex_shift_3
46
%uimm_6bit_sw 7:2 9:4 !function=ex_shift_2
47
48
@@ -XXX,XX +XXX,XX @@
49
# Formats 16:
50
@cr .... ..... ..... .. &r rs2=%rs2_5 rs1=%rd %rd
51
@ci ... . ..... ..... .. &i imm=%imm_ci rs1=%rd %rd
52
+@cl_q ... . ..... ..... .. &i imm=%uimm_cl_q rs1=%rs1_3 rd=%rs2_3
53
@cl_d ... ... ... .. ... .. &i imm=%uimm_cl_d rs1=%rs1_3 rd=%rs2_3
54
@cl_w ... ... ... .. ... .. &i imm=%uimm_cl_w rs1=%rs1_3 rd=%rs2_3
55
@cs_2 ... ... ... .. ... .. &r rs2=%rs2_3 rs1=%rs1_3 rd=%rs1_3
56
+@cs_q ... ... ... .. ... .. &s imm=%uimm_cl_q rs1=%rs1_3 rs2=%rs2_3
57
@cs_d ... ... ... .. ... .. &s imm=%uimm_cl_d rs1=%rs1_3 rs2=%rs2_3
58
@cs_w ... ... ... .. ... .. &s imm=%uimm_cl_w rs1=%rs1_3 rs2=%rs2_3
59
@cj ... ........... .. &j imm=%imm_cj
60
@cb_z ... ... ... .. ... .. &b imm=%imm_cb rs1=%rs1_3 rs2=0
61
62
+@c_lqsp ... . ..... ..... .. &i imm=%uimm_6bit_lq rs1=2 %rd
63
@c_ldsp ... . ..... ..... .. &i imm=%uimm_6bit_ld rs1=2 %rd
64
@c_lwsp ... . ..... ..... .. &i imm=%uimm_6bit_lw rs1=2 %rd
65
+@c_sqsp ... . ..... ..... .. &s imm=%uimm_6bit_sq rs1=2 rs2=%rs2_5
66
@c_sdsp ... . ..... ..... .. &s imm=%uimm_6bit_sd rs1=2 rs2=%rs2_5
67
@c_swsp ... . ..... ..... .. &s imm=%uimm_6bit_sw rs1=2 rs2=%rs2_5
68
@c_li ... . ..... ..... .. &i imm=%imm_ci rs1=0 %rd
69
@@ -XXX,XX +XXX,XX @@
70
illegal 000 000 000 00 --- 00
71
addi 000 ... ... .. ... 00 @c_addi4spn
72
}
73
-fld 001 ... ... .. ... 00 @cl_d
74
+{
75
+ lq 001 ... ... .. ... 00 @cl_q
76
+ fld 001 ... ... .. ... 00 @cl_d
77
+}
78
lw 010 ... ... .. ... 00 @cl_w
79
-fsd 101 ... ... .. ... 00 @cs_d
80
+{
81
+ sq 101 ... ... .. ... 00 @cs_q
82
+ fsd 101 ... ... .. ... 00 @cs_d
83
+}
84
sw 110 ... ... .. ... 00 @cs_w
85
86
# *** RV32C and RV64C specific Standard Extension (Quadrant 0) ***
87
@@ -XXX,XX +XXX,XX @@ addw 100 1 11 ... 01 ... 01 @cs_2
88
89
# *** RV32/64C Standard Extension (Quadrant 2) ***
90
slli 000 . ..... ..... 10 @c_shift2
91
-fld 001 . ..... ..... 10 @c_ldsp
92
+{
93
+ lq 001 ... ... .. ... 10 @c_lqsp
94
+ fld 001 . ..... ..... 10 @c_ldsp
95
+}
96
{
97
illegal 010 - 00000 ----- 10 # c.lwsp, RES rd=0
98
lw 010 . ..... ..... 10 @c_lwsp
99
@@ -XXX,XX +XXX,XX @@ fld 001 . ..... ..... 10 @c_ldsp
100
jalr 100 1 ..... 00000 10 @c_jalr rd=1 # C.JALR
101
add 100 1 ..... ..... 10 @cr
102
}
103
-fsd 101 ...... ..... 10 @c_sdsp
104
+{
105
+ sq 101 ... ... .. ... 10 @c_sqsp
106
+ fsd 101 ...... ..... 10 @c_sdsp
107
+}
108
sw 110 . ..... ..... 10 @c_swsp
109
110
# *** RV32C and RV64C specific Standard Extension (Quadrant 2) ***
111
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
112
index XXXXXXX..XXXXXXX 100644
113
--- a/target/riscv/insn32.decode
114
+++ b/target/riscv/insn32.decode
115
@@ -XXX,XX +XXX,XX @@ sllw 0000000 ..... ..... 001 ..... 0111011 @r
116
srlw 0000000 ..... ..... 101 ..... 0111011 @r
117
sraw 0100000 ..... ..... 101 ..... 0111011 @r
118
119
+# *** RV128I Base Instruction Set (in addition to RV64I) ***
120
+ldu ............ ..... 111 ..... 0000011 @i
121
+lq ............ ..... 010 ..... 0001111 @i
122
+sq ............ ..... 100 ..... 0100011 @s
123
+
124
# *** RV32M Standard Extension ***
125
mul 0000001 ..... ..... 000 ..... 0110011 @r
126
mulh 0000001 ..... ..... 001 ..... 0110011 @r
127
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
128
index XXXXXXX..XXXXXXX 100644
129
--- a/target/riscv/translate.c
130
+++ b/target/riscv/translate.c
131
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
132
/* pc_succ_insn points to the instruction following base.pc_next */
133
target_ulong pc_succ_insn;
134
target_ulong priv_ver;
135
+ RISCVMXL misa_mxl_max;
136
RISCVMXL xl;
137
uint32_t misa_ext;
138
uint32_t opcode;
139
@@ -XXX,XX +XXX,XX @@ static inline int get_olen(DisasContext *ctx)
140
return 16 << get_ol(ctx);
141
}
142
143
+/* The maximum register length */
144
+#ifdef TARGET_RISCV32
145
+#define get_xl_max(ctx) MXL_RV32
146
+#else
147
+#define get_xl_max(ctx) ((ctx)->misa_mxl_max)
148
+#endif
149
+
150
/*
151
* RISC-V requires NaN-boxing of narrower width floating point values.
152
* This applies when a 32-bit value is assigned to a 64-bit FP register.
153
@@ -XXX,XX +XXX,XX @@ static TCGv get_gpr(DisasContext *ctx, int reg_num, DisasExtend ext)
154
}
155
break;
156
case MXL_RV64:
157
+ case MXL_RV128:
158
break;
159
default:
160
g_assert_not_reached();
161
@@ -XXX,XX +XXX,XX @@ static TCGv get_gpr(DisasContext *ctx, int reg_num, DisasExtend ext)
162
return cpu_gpr[reg_num];
163
}
164
165
+static TCGv get_gprh(DisasContext *ctx, int reg_num)
166
+{
167
+ assert(get_xl(ctx) == MXL_RV128);
168
+ if (reg_num == 0) {
169
+ return ctx->zero;
170
+ }
171
+ return cpu_gprh[reg_num];
172
+}
173
+
174
static TCGv dest_gpr(DisasContext *ctx, int reg_num)
175
{
176
if (reg_num == 0 || get_olen(ctx) < TARGET_LONG_BITS) {
177
@@ -XXX,XX +XXX,XX @@ static TCGv dest_gpr(DisasContext *ctx, int reg_num)
178
return cpu_gpr[reg_num];
179
}
180
181
+static TCGv dest_gprh(DisasContext *ctx, int reg_num)
182
+{
183
+ if (reg_num == 0) {
184
+ return temp_new(ctx);
185
+ }
186
+ return cpu_gprh[reg_num];
187
+}
188
+
189
static void gen_set_gpr(DisasContext *ctx, int reg_num, TCGv t)
190
{
191
if (reg_num != 0) {
192
@@ -XXX,XX +XXX,XX @@ static void gen_set_gpr(DisasContext *ctx, int reg_num, TCGv t)
193
tcg_gen_ext32s_tl(cpu_gpr[reg_num], t);
194
break;
195
case MXL_RV64:
196
+ case MXL_RV128:
197
tcg_gen_mov_tl(cpu_gpr[reg_num], t);
198
break;
199
default:
200
g_assert_not_reached();
201
}
202
+
203
+ if (get_xl_max(ctx) == MXL_RV128) {
204
+ tcg_gen_sari_tl(cpu_gprh[reg_num], cpu_gpr[reg_num], 63);
205
+ }
206
+ }
207
+}
208
+
209
+static void gen_set_gpr128(DisasContext *ctx, int reg_num, TCGv rl, TCGv rh)
210
+{
211
+ assert(get_ol(ctx) == MXL_RV128);
212
+ if (reg_num != 0) {
213
+ tcg_gen_mov_tl(cpu_gpr[reg_num], rl);
214
+ tcg_gen_mov_tl(cpu_gprh[reg_num], rh);
215
}
216
}
217
218
@@ -XXX,XX +XXX,XX @@ static void riscv_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
219
ctx->lmul = sextract32(FIELD_EX32(tb_flags, TB_FLAGS, LMUL), 0, 3);
220
ctx->vstart = env->vstart;
221
ctx->vl_eq_vlmax = FIELD_EX32(tb_flags, TB_FLAGS, VL_EQ_VLMAX);
222
+ ctx->misa_mxl_max = env->misa_mxl_max;
223
ctx->xl = FIELD_EX32(tb_flags, TB_FLAGS, XL);
224
ctx->cs = cs;
225
ctx->ntemp = 0;
226
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
227
index XXXXXXX..XXXXXXX 100644
228
--- a/target/riscv/insn_trans/trans_rvi.c.inc
229
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
230
@@ -XXX,XX +XXX,XX @@ static bool trans_bgeu(DisasContext *ctx, arg_bgeu *a)
231
return gen_branch(ctx, a, TCG_COND_GEU);
232
}
233
234
-static bool gen_load(DisasContext *ctx, arg_lb *a, MemOp memop)
235
+static bool gen_load_tl(DisasContext *ctx, arg_lb *a, MemOp memop)
236
{
237
TCGv dest = dest_gpr(ctx, a->rd);
238
TCGv addr = get_gpr(ctx, a->rs1, EXT_NONE);
239
@@ -XXX,XX +XXX,XX @@ static bool gen_load(DisasContext *ctx, arg_lb *a, MemOp memop)
240
return true;
241
}
242
243
+/* Compute only 64-bit addresses to use the address translation mechanism */
244
+static bool gen_load_i128(DisasContext *ctx, arg_lb *a, MemOp memop)
245
+{
246
+ TCGv src1l = get_gpr(ctx, a->rs1, EXT_NONE);
247
+ TCGv destl = dest_gpr(ctx, a->rd);
248
+ TCGv desth = dest_gprh(ctx, a->rd);
249
+ TCGv addrl = tcg_temp_new();
250
+
251
+ tcg_gen_addi_tl(addrl, src1l, a->imm);
252
+
253
+ if ((memop & MO_SIZE) <= MO_64) {
254
+ tcg_gen_qemu_ld_tl(destl, addrl, ctx->mem_idx, memop);
255
+ if (memop & MO_SIGN) {
256
+ tcg_gen_sari_tl(desth, destl, 63);
257
+ } else {
258
+ tcg_gen_movi_tl(desth, 0);
259
+ }
260
+ } else {
261
+ /* assume little-endian memory access for now */
262
+ tcg_gen_qemu_ld_tl(destl, addrl, ctx->mem_idx, MO_TEUQ);
263
+ tcg_gen_addi_tl(addrl, addrl, 8);
264
+ tcg_gen_qemu_ld_tl(desth, addrl, ctx->mem_idx, MO_TEUQ);
265
+ }
266
+
267
+ gen_set_gpr128(ctx, a->rd, destl, desth);
268
+
269
+ tcg_temp_free(addrl);
270
+ return true;
271
+}
272
+
273
+static bool gen_load(DisasContext *ctx, arg_lb *a, MemOp memop)
274
+{
275
+ if (get_xl(ctx) == MXL_RV128) {
276
+ return gen_load_i128(ctx, a, memop);
277
+ } else {
278
+ return gen_load_tl(ctx, a, memop);
279
+ }
280
+}
281
+
282
static bool trans_lb(DisasContext *ctx, arg_lb *a)
283
{
284
return gen_load(ctx, a, MO_SB);
285
@@ -XXX,XX +XXX,XX @@ static bool trans_lw(DisasContext *ctx, arg_lw *a)
286
return gen_load(ctx, a, MO_TESL);
287
}
288
289
+static bool trans_ld(DisasContext *ctx, arg_ld *a)
290
+{
291
+ REQUIRE_64_OR_128BIT(ctx);
292
+ return gen_load(ctx, a, MO_TESQ);
293
+}
294
+
295
+static bool trans_lq(DisasContext *ctx, arg_lq *a)
296
+{
297
+ REQUIRE_128BIT(ctx);
298
+ return gen_load(ctx, a, MO_TEUO);
299
+}
300
+
301
static bool trans_lbu(DisasContext *ctx, arg_lbu *a)
302
{
303
return gen_load(ctx, a, MO_UB);
304
@@ -XXX,XX +XXX,XX @@ static bool trans_lhu(DisasContext *ctx, arg_lhu *a)
305
306
static bool trans_lwu(DisasContext *ctx, arg_lwu *a)
307
{
308
- REQUIRE_64BIT(ctx);
309
+ REQUIRE_64_OR_128BIT(ctx);
310
return gen_load(ctx, a, MO_TEUL);
311
}
312
313
-static bool trans_ld(DisasContext *ctx, arg_ld *a)
314
+static bool trans_ldu(DisasContext *ctx, arg_ldu *a)
315
{
316
- REQUIRE_64BIT(ctx);
317
+ REQUIRE_128BIT(ctx);
318
return gen_load(ctx, a, MO_TEUQ);
319
}
320
321
-static bool gen_store(DisasContext *ctx, arg_sb *a, MemOp memop)
322
+static bool gen_store_tl(DisasContext *ctx, arg_sb *a, MemOp memop)
323
{
324
TCGv addr = get_gpr(ctx, a->rs1, EXT_NONE);
325
TCGv data = get_gpr(ctx, a->rs2, EXT_NONE);
326
@@ -XXX,XX +XXX,XX @@ static bool gen_store(DisasContext *ctx, arg_sb *a, MemOp memop)
327
return true;
328
}
329
330
+static bool gen_store_i128(DisasContext *ctx, arg_sb *a, MemOp memop)
331
+{
332
+ TCGv src1l = get_gpr(ctx, a->rs1, EXT_NONE);
333
+ TCGv src2l = get_gpr(ctx, a->rs2, EXT_NONE);
334
+ TCGv src2h = get_gprh(ctx, a->rs2);
335
+ TCGv addrl = tcg_temp_new();
336
+
337
+ tcg_gen_addi_tl(addrl, src1l, a->imm);
338
+
339
+ if ((memop & MO_SIZE) <= MO_64) {
340
+ tcg_gen_qemu_st_tl(src2l, addrl, ctx->mem_idx, memop);
341
+ } else {
342
+ /* little-endian memory access assumed for now */
343
+ tcg_gen_qemu_st_tl(src2l, addrl, ctx->mem_idx, MO_TEUQ);
344
+ tcg_gen_addi_tl(addrl, addrl, 8);
345
+ tcg_gen_qemu_st_tl(src2h, addrl, ctx->mem_idx, MO_TEUQ);
346
+ }
347
+
348
+ tcg_temp_free(addrl);
349
+ return true;
350
+}
351
+
352
+static bool gen_store(DisasContext *ctx, arg_sb *a, MemOp memop)
353
+{
354
+ if (get_xl(ctx) == MXL_RV128) {
355
+ return gen_store_i128(ctx, a, memop);
356
+ } else {
357
+ return gen_store_tl(ctx, a, memop);
358
+ }
359
+}
360
+
361
static bool trans_sb(DisasContext *ctx, arg_sb *a)
362
{
363
return gen_store(ctx, a, MO_SB);
364
@@ -XXX,XX +XXX,XX @@ static bool trans_sw(DisasContext *ctx, arg_sw *a)
365
366
static bool trans_sd(DisasContext *ctx, arg_sd *a)
367
{
368
- REQUIRE_64BIT(ctx);
369
+ REQUIRE_64_OR_128BIT(ctx);
370
return gen_store(ctx, a, MO_TEUQ);
371
}
372
373
+static bool trans_sq(DisasContext *ctx, arg_sq *a)
374
+{
375
+ REQUIRE_128BIT(ctx);
376
+ return gen_store(ctx, a, MO_TEUO);
377
+}
378
+
379
static bool trans_addi(DisasContext *ctx, arg_addi *a)
380
{
381
return gen_arith_imm_fn(ctx, a, EXT_NONE, tcg_gen_addi_tl);
382
--
383
2.31.1
384
385
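[Editor's note] The load path above has two cases: accesses of 64 bits or less go into the low half and are sign- or zero-extended into the high half, while lq performs two little-endian 64-bit loads at addr and addr+8. A Python sketch of that behavior (the function name and byte-level model are illustrative, not QEMU API):

```python
MASK64 = (1 << 64) - 1

def load_i128(mem, addr, size, signed):
    """Model of gen_load_i128: size in bytes (<= 8 for lb..ld/ldu, 16 for lq)."""
    if size <= 8:
        val = int.from_bytes(mem[addr:addr + size], "little", signed=signed)
        low = val & MASK64
        # mirrors tcg_gen_sari_tl(desth, destl, 63) vs. movi 0
        high = MASK64 if (signed and val < 0) else 0
    else:
        # lq: two little-endian 64-bit loads, low word first
        low = int.from_bytes(mem[addr:addr + 8], "little")
        high = int.from_bytes(mem[addr + 8:addr + 16], "little")
    return low, high

mem = bytes.fromhex("f0ffffffffffffff0100000000000000")
assert load_i128(mem, 0, 16, False) == (0xFFFFFFFFFFFFFFF0, 1)       # lq
assert load_i128(mem, 0, 8, True) == (0xFFFFFFFFFFFFFFF0, MASK64)    # ld sign-extends
assert load_i128(mem, 0, 8, False) == (0xFFFFFFFFFFFFFFF0, 0)        # ldu zero-extends
```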
1
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
1
2
3
The 128-bit bitwise instructions do not need any function prototype change
4
as the functions can be applied independently on the lower and upper part of
5
the registers.
6
7
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
8
Co-authored-by: Fabien Portas <fabien.portas@grenoble-inp.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
11
Message-id: 20220106210108.138226-11-frederic.petrot@univ-grenoble-alpes.fr
12
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
13
---
14
target/riscv/translate.c | 21 +++++++++++++++++++--
15
1 file changed, 19 insertions(+), 2 deletions(-)
16
17
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/riscv/translate.c
20
+++ b/target/riscv/translate.c
21
@@ -XXX,XX +XXX,XX @@ static bool gen_logic_imm_fn(DisasContext *ctx, arg_i *a,
22
23
func(dest, src1, a->imm);
24
25
- gen_set_gpr(ctx, a->rd, dest);
26
+ if (get_xl(ctx) == MXL_RV128) {
27
+ TCGv src1h = get_gprh(ctx, a->rs1);
28
+ TCGv desth = dest_gprh(ctx, a->rd);
29
+
30
+ func(desth, src1h, -(a->imm < 0));
31
+ gen_set_gpr128(ctx, a->rd, dest, desth);
32
+ } else {
33
+ gen_set_gpr(ctx, a->rd, dest);
34
+ }
35
36
return true;
37
}
38
@@ -XXX,XX +XXX,XX @@ static bool gen_logic(DisasContext *ctx, arg_r *a,
39
40
func(dest, src1, src2);
41
42
- gen_set_gpr(ctx, a->rd, dest);
43
+ if (get_xl(ctx) == MXL_RV128) {
44
+ TCGv src1h = get_gprh(ctx, a->rs1);
45
+ TCGv src2h = get_gprh(ctx, a->rs2);
46
+ TCGv desth = dest_gprh(ctx, a->rd);
47
+
48
+ func(desth, src1h, src2h);
49
+ gen_set_gpr128(ctx, a->rd, dest, desth);
50
+ } else {
51
+ gen_set_gpr(ctx, a->rd, dest);
52
+ }
53
54
return true;
55
}
56
--
57
2.31.1
58
59
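[Editor's note] For the immediate forms above, the high half of the operation uses -(a->imm < 0), i.e. the 64-bit sign extension of the immediate. A sketch of andi on a 128-bit register built that way (helper name is hypothetical):

```python
MASK64 = (1 << 64) - 1

def andi_128(low, high, imm):
    """Apply andi independently to each half; the high half sees the
    immediate's sign extension, -(imm < 0) as a 64-bit mask."""
    imm_low = imm & MASK64
    imm_high = MASK64 if imm < 0 else 0
    return low & imm_low, high & imm_high

# andi rd, rs, -16 clears only the low 4 bits of the 128-bit value
lo, hi = andi_128(0x1234567890ABCDEF, 0xF00F, -16)
assert lo == 0x1234567890ABCDE0
assert hi == 0xF00F               # high half untouched by a negative mask
```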
1
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
1
2
3
Adding the 128-bit version of lui and auipc, and introducing to that end
4
a "set register with immediate" function to handle sign extension to 128 bits.
5
6
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
7
Co-authored-by: Fabien Portas <fabien.portas@grenoble-inp.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
10
Message-id: 20220106210108.138226-12-frederic.petrot@univ-grenoble-alpes.fr
11
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
12
---
13
target/riscv/translate.c | 21 +++++++++++++++++++++
14
target/riscv/insn_trans/trans_rvi.c.inc | 8 ++++----
15
2 files changed, 25 insertions(+), 4 deletions(-)
16
17
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/riscv/translate.c
20
+++ b/target/riscv/translate.c
21
@@ -XXX,XX +XXX,XX @@ static void gen_set_gpr(DisasContext *ctx, int reg_num, TCGv t)
22
}
23
}
24
25
+static void gen_set_gpri(DisasContext *ctx, int reg_num, target_long imm)
26
+{
27
+ if (reg_num != 0) {
28
+ switch (get_ol(ctx)) {
29
+ case MXL_RV32:
30
+ tcg_gen_movi_tl(cpu_gpr[reg_num], (int32_t)imm);
31
+ break;
32
+ case MXL_RV64:
33
+ case MXL_RV128:
34
+ tcg_gen_movi_tl(cpu_gpr[reg_num], imm);
35
+ break;
36
+ default:
37
+ g_assert_not_reached();
38
+ }
39
+
40
+ if (get_xl_max(ctx) == MXL_RV128) {
41
+ tcg_gen_movi_tl(cpu_gprh[reg_num], -(imm < 0));
42
+ }
43
+ }
44
+}
45
+
46
static void gen_set_gpr128(DisasContext *ctx, int reg_num, TCGv rl, TCGv rh)
47
{
48
assert(get_ol(ctx) == MXL_RV128);
49
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/riscv/insn_trans/trans_rvi.c.inc
52
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
53
@@ -XXX,XX +XXX,XX @@ static bool trans_illegal(DisasContext *ctx, arg_empty *a)
54
55
static bool trans_c64_illegal(DisasContext *ctx, arg_empty *a)
56
{
57
- REQUIRE_64BIT(ctx);
58
- return trans_illegal(ctx, a);
59
+ REQUIRE_64_OR_128BIT(ctx);
60
+ return trans_illegal(ctx, a);
61
}
62
63
static bool trans_lui(DisasContext *ctx, arg_lui *a)
64
{
65
if (a->rd != 0) {
66
- tcg_gen_movi_tl(cpu_gpr[a->rd], a->imm);
67
+ gen_set_gpri(ctx, a->rd, a->imm);
68
}
69
return true;
70
}
71
@@ -XXX,XX +XXX,XX @@ static bool trans_lui(DisasContext *ctx, arg_lui *a)
72
static bool trans_auipc(DisasContext *ctx, arg_auipc *a)
73
{
74
if (a->rd != 0) {
75
- tcg_gen_movi_tl(cpu_gpr[a->rd], a->imm + ctx->base.pc_next);
76
+ gen_set_gpri(ctx, a->rd, a->imm + ctx->base.pc_next);
77
}
78
return true;
79
}
80
--
81
2.31.1
82
83
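[Editor's note] gen_set_gpri above writes the immediate into the low half and -(imm < 0) into the high half, sign-extending lui/auipc results across all 128 bits. A minimal model:

```python
MASK64 = (1 << 64) - 1

def set_gpri_128(imm):
    """Model of gen_set_gpri for RV128: immediate in the low half,
    its sign extension, -(imm < 0), in the high half."""
    return imm & MASK64, (MASK64 if imm < 0 else 0)

assert set_gpri_128(0x12345000) == (0x12345000, 0)
# a negative lui immediate sign-extends through the high half
assert set_gpri_128(-0x1000) == (0xFFFFFFFFFFFFF000, MASK64)
```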
1
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
1
2
3
Handling shifts for 32-, 64- and 128-bit operation lengths for RV128, following
4
the general framework for handling the various olens proposed by Richard.
5
6
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
7
Co-authored-by: Fabien Portas <fabien.portas@grenoble-inp.org>
8
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
9
Message-id: 20220106210108.138226-13-frederic.petrot@univ-grenoble-alpes.fr
10
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
11
---
12
target/riscv/insn32.decode | 10 ++
13
target/riscv/translate.c | 58 ++++--
14
target/riscv/insn_trans/trans_rvb.c.inc | 22 +--
15
target/riscv/insn_trans/trans_rvi.c.inc | 224 ++++++++++++++++++++++--
16
4 files changed, 270 insertions(+), 44 deletions(-)
17
18
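[Editor's note] The f128 callbacks added below implement 128-bit shifts on the two 64-bit halves, with the shift amount masked to olen - 1 (0..127). This sketch computes the result on the composed value for clarity; the TCG versions operate on the halves directly:

```python
MASK64 = (1 << 64) - 1

def srl_128(low, high, shamt):
    """128-bit logical right shift over (low, high) halves, shamt in 0..127."""
    value = (high << 64) | low
    value >>= shamt & 127          # shift amount masked to olen - 1
    return value & MASK64, (value >> 64) & MASK64

assert srl_128(0, 1, 64) == (1, 0)                    # bit 64 lands in the low half
assert srl_128(0, 1, 1) == (0x8000000000000000, 0)    # crosses the half boundary
assert srl_128(0xF0, 0, 4) == (0x0F, 0)
```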
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/riscv/insn32.decode
21
+++ b/target/riscv/insn32.decode
22
@@ -XXX,XX +XXX,XX @@
23
%rs1 15:5
24
%rd 7:5
25
%sh5 20:5
26
+%sh6 20:6
27
28
%sh7 20:7
29
%csr 20:12
30
@@ -XXX,XX +XXX,XX @@
31
# Formats 64:
32
@sh5 ....... ..... ..... ... ..... ....... &shift shamt=%sh5 %rs1 %rd
33
34
+# Formats 128:
35
+@sh6 ...... ...... ..... ... ..... ....... &shift shamt=%sh6 %rs1 %rd
36
+
37
# *** Privileged Instructions ***
38
ecall 000000000000 00000 000 00000 1110011
39
ebreak 000000000001 00000 000 00000 1110011
40
@@ -XXX,XX +XXX,XX @@ sraw 0100000 ..... ..... 101 ..... 0111011 @r
41
ldu ............ ..... 111 ..... 0000011 @i
42
lq ............ ..... 010 ..... 0001111 @i
43
sq ............ ..... 100 ..... 0100011 @s
44
+sllid 000000 ...... ..... 001 ..... 1011011 @sh6
45
+srlid 000000 ...... ..... 101 ..... 1011011 @sh6
46
+sraid 010000 ...... ..... 101 ..... 1011011 @sh6
47
+slld 0000000 ..... ..... 001 ..... 1111011 @r
48
+srld 0000000 ..... ..... 101 ..... 1111011 @r
49
+srad 0100000 ..... ..... 101 ..... 1111011 @r
50
51
# *** RV32M Standard Extension ***
52
mul 0000001 ..... ..... 000 ..... 0110011 @r
53
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@ static bool gen_arith_per_ol(DisasContext *ctx, arg_r *a, DisasExtend ext,
}

static bool gen_shift_imm_fn(DisasContext *ctx, arg_shift *a, DisasExtend ext,
-                             void (*func)(TCGv, TCGv, target_long))
+                             void (*func)(TCGv, TCGv, target_long),
+                             void (*f128)(TCGv, TCGv, TCGv, TCGv, target_long))
{
    TCGv dest, src1;
    int max_len = get_olen(ctx);
@@ -XXX,XX +XXX,XX @@ static bool gen_shift_imm_fn(DisasContext *ctx, arg_shift *a, DisasExtend ext,
    dest = dest_gpr(ctx, a->rd);
    src1 = get_gpr(ctx, a->rs1, ext);

-    func(dest, src1, a->shamt);
+    if (max_len < 128) {
+        func(dest, src1, a->shamt);
+        gen_set_gpr(ctx, a->rd, dest);
+    } else {
+        TCGv src1h = get_gprh(ctx, a->rs1);
+        TCGv desth = dest_gprh(ctx, a->rd);

-    gen_set_gpr(ctx, a->rd, dest);
+        if (f128 == NULL) {
+            return false;
+        }
+        f128(dest, desth, src1, src1h, a->shamt);
+        gen_set_gpr128(ctx, a->rd, dest, desth);
+    }
    return true;
}

static bool gen_shift_imm_fn_per_ol(DisasContext *ctx, arg_shift *a,
                                    DisasExtend ext,
                                    void (*f_tl)(TCGv, TCGv, target_long),
-                                    void (*f_32)(TCGv, TCGv, target_long))
+                                    void (*f_32)(TCGv, TCGv, target_long),
+                                    void (*f_128)(TCGv, TCGv, TCGv, TCGv,
+                                                  target_long))
{
    int olen = get_olen(ctx);
    if (olen != TARGET_LONG_BITS) {
        if (olen == 32) {
            f_tl = f_32;
-        } else {
+        } else if (olen != 128) {
            g_assert_not_reached();
        }
    }
-    return gen_shift_imm_fn(ctx, a, ext, f_tl);
+    return gen_shift_imm_fn(ctx, a, ext, f_tl, f_128);
}

static bool gen_shift_imm_tl(DisasContext *ctx, arg_shift *a, DisasExtend ext,
@@ -XXX,XX +XXX,XX @@ static bool gen_shift_imm_tl(DisasContext *ctx, arg_shift *a, DisasExtend ext,
}

static bool gen_shift(DisasContext *ctx, arg_r *a, DisasExtend ext,
-                      void (*func)(TCGv, TCGv, TCGv))
+                      void (*func)(TCGv, TCGv, TCGv),
+                      void (*f128)(TCGv, TCGv, TCGv, TCGv, TCGv))
{
-    TCGv dest = dest_gpr(ctx, a->rd);
-    TCGv src1 = get_gpr(ctx, a->rs1, ext);
    TCGv src2 = get_gpr(ctx, a->rs2, EXT_NONE);
    TCGv ext2 = tcg_temp_new();
+    int max_len = get_olen(ctx);

-    tcg_gen_andi_tl(ext2, src2, get_olen(ctx) - 1);
-    func(dest, src1, ext2);
+    tcg_gen_andi_tl(ext2, src2, max_len - 1);

-    gen_set_gpr(ctx, a->rd, dest);
+    TCGv dest = dest_gpr(ctx, a->rd);
+    TCGv src1 = get_gpr(ctx, a->rs1, ext);
+
+    if (max_len < 128) {
+        func(dest, src1, ext2);
+        gen_set_gpr(ctx, a->rd, dest);
+    } else {
+        TCGv src1h = get_gprh(ctx, a->rs1);
+        TCGv desth = dest_gprh(ctx, a->rd);
+
+        if (f128 == NULL) {
+            return false;
+        }
+        f128(dest, desth, src1, src1h, ext2);
+        gen_set_gpr128(ctx, a->rd, dest, desth);
+    }
    tcg_temp_free(ext2);
    return true;
}

static bool gen_shift_per_ol(DisasContext *ctx, arg_r *a, DisasExtend ext,
                             void (*f_tl)(TCGv, TCGv, TCGv),
-                             void (*f_32)(TCGv, TCGv, TCGv))
+                             void (*f_32)(TCGv, TCGv, TCGv),
+                             void (*f_128)(TCGv, TCGv, TCGv, TCGv, TCGv))
{
    int olen = get_olen(ctx);
    if (olen != TARGET_LONG_BITS) {
        if (olen == 32) {
            f_tl = f_32;
-        } else {
+        } else if (olen != 128) {
            g_assert_not_reached();
        }
    }
-    return gen_shift(ctx, a, ext, f_tl);
+    return gen_shift(ctx, a, ext, f_tl, f_128);
}

static bool gen_unary(DisasContext *ctx, arg_r2 *a, DisasExtend ext,
diff --git a/target/riscv/insn_trans/trans_rvb.c.inc b/target/riscv/insn_trans/trans_rvb.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvb.c.inc
+++ b/target/riscv/insn_trans/trans_rvb.c.inc
@@ -XXX,XX +XXX,XX @@ static void gen_bset(TCGv ret, TCGv arg1, TCGv shamt)
static bool trans_bset(DisasContext *ctx, arg_bset *a)
{
    REQUIRE_ZBS(ctx);
-    return gen_shift(ctx, a, EXT_NONE, gen_bset);
+    return gen_shift(ctx, a, EXT_NONE, gen_bset, NULL);
}

static bool trans_bseti(DisasContext *ctx, arg_bseti *a)
@@ -XXX,XX +XXX,XX @@ static void gen_bclr(TCGv ret, TCGv arg1, TCGv shamt)
static bool trans_bclr(DisasContext *ctx, arg_bclr *a)
{
    REQUIRE_ZBS(ctx);
-    return gen_shift(ctx, a, EXT_NONE, gen_bclr);
+    return gen_shift(ctx, a, EXT_NONE, gen_bclr, NULL);
}

static bool trans_bclri(DisasContext *ctx, arg_bclri *a)
@@ -XXX,XX +XXX,XX @@ static void gen_binv(TCGv ret, TCGv arg1, TCGv shamt)
static bool trans_binv(DisasContext *ctx, arg_binv *a)
{
    REQUIRE_ZBS(ctx);
-    return gen_shift(ctx, a, EXT_NONE, gen_binv);
+    return gen_shift(ctx, a, EXT_NONE, gen_binv, NULL);
}

static bool trans_binvi(DisasContext *ctx, arg_binvi *a)
@@ -XXX,XX +XXX,XX @@ static void gen_bext(TCGv ret, TCGv arg1, TCGv shamt)
static bool trans_bext(DisasContext *ctx, arg_bext *a)
{
    REQUIRE_ZBS(ctx);
-    return gen_shift(ctx, a, EXT_NONE, gen_bext);
+    return gen_shift(ctx, a, EXT_NONE, gen_bext, NULL);
}

static bool trans_bexti(DisasContext *ctx, arg_bexti *a)
@@ -XXX,XX +XXX,XX @@ static void gen_rorw(TCGv ret, TCGv arg1, TCGv arg2)
static bool trans_ror(DisasContext *ctx, arg_ror *a)
{
    REQUIRE_ZBB(ctx);
-    return gen_shift_per_ol(ctx, a, EXT_NONE, tcg_gen_rotr_tl, gen_rorw);
+    return gen_shift_per_ol(ctx, a, EXT_NONE, tcg_gen_rotr_tl, gen_rorw, NULL);
}

static void gen_roriw(TCGv ret, TCGv arg1, target_long shamt)
@@ -XXX,XX +XXX,XX @@ static bool trans_rori(DisasContext *ctx, arg_rori *a)
{
    REQUIRE_ZBB(ctx);
    return gen_shift_imm_fn_per_ol(ctx, a, EXT_NONE,
-                                   tcg_gen_rotri_tl, gen_roriw);
+                                   tcg_gen_rotri_tl, gen_roriw, NULL);
}

static void gen_rolw(TCGv ret, TCGv arg1, TCGv arg2)
@@ -XXX,XX +XXX,XX @@ static void gen_rolw(TCGv ret, TCGv arg1, TCGv arg2)
static bool trans_rol(DisasContext *ctx, arg_rol *a)
{
    REQUIRE_ZBB(ctx);
-    return gen_shift_per_ol(ctx, a, EXT_NONE, tcg_gen_rotl_tl, gen_rolw);
+    return gen_shift_per_ol(ctx, a, EXT_NONE, tcg_gen_rotl_tl, gen_rolw, NULL);
}

static void gen_rev8_32(TCGv ret, TCGv src1)
@@ -XXX,XX +XXX,XX @@ static bool trans_rorw(DisasContext *ctx, arg_rorw *a)
    REQUIRE_64BIT(ctx);
    REQUIRE_ZBB(ctx);
    ctx->ol = MXL_RV32;
-    return gen_shift(ctx, a, EXT_NONE, gen_rorw);
+    return gen_shift(ctx, a, EXT_NONE, gen_rorw, NULL);
}

static bool trans_roriw(DisasContext *ctx, arg_roriw *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_roriw(DisasContext *ctx, arg_roriw *a)
    REQUIRE_64BIT(ctx);
    REQUIRE_ZBB(ctx);
    ctx->ol = MXL_RV32;
-    return gen_shift_imm_fn(ctx, a, EXT_NONE, gen_roriw);
+    return gen_shift_imm_fn(ctx, a, EXT_NONE, gen_roriw, NULL);
}

static bool trans_rolw(DisasContext *ctx, arg_rolw *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_rolw(DisasContext *ctx, arg_rolw *a)
    REQUIRE_64BIT(ctx);
    REQUIRE_ZBB(ctx);
    ctx->ol = MXL_RV32;
-    return gen_shift(ctx, a, EXT_NONE, gen_rolw);
+    return gen_shift(ctx, a, EXT_NONE, gen_rolw, NULL);
}

#define GEN_SHADD_UW(SHAMT) \
@@ -XXX,XX +XXX,XX @@ static bool trans_slli_uw(DisasContext *ctx, arg_slli_uw *a)
{
    REQUIRE_64BIT(ctx);
    REQUIRE_ZBA(ctx);
-    return gen_shift_imm_fn(ctx, a, EXT_NONE, gen_slli_uw);
+    return gen_shift_imm_fn(ctx, a, EXT_NONE, gen_slli_uw, NULL);
}

static bool trans_clmul(DisasContext *ctx, arg_clmul *a)
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvi.c.inc
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_andi(DisasContext *ctx, arg_andi *a)
    return gen_logic_imm_fn(ctx, a, tcg_gen_andi_tl);
}

+static void gen_slli_i128(TCGv retl, TCGv reth,
+                          TCGv src1l, TCGv src1h,
+                          target_long shamt)
+{
+    if (shamt >= 64) {
+        tcg_gen_shli_tl(reth, src1l, shamt - 64);
+        tcg_gen_movi_tl(retl, 0);
+    } else {
+        tcg_gen_extract2_tl(reth, src1l, src1h, 64 - shamt);
+        tcg_gen_shli_tl(retl, src1l, shamt);
+    }
+}
+
static bool trans_slli(DisasContext *ctx, arg_slli *a)
{
-    return gen_shift_imm_fn(ctx, a, EXT_NONE, tcg_gen_shli_tl);
+    return gen_shift_imm_fn(ctx, a, EXT_NONE, tcg_gen_shli_tl, gen_slli_i128);
}

static void gen_srliw(TCGv dst, TCGv src, target_long shamt)
@@ -XXX,XX +XXX,XX @@ static void gen_srliw(TCGv dst, TCGv src, target_long shamt)
    tcg_gen_extract_tl(dst, src, shamt, 32 - shamt);
}

+static void gen_srli_i128(TCGv retl, TCGv reth,
+                          TCGv src1l, TCGv src1h,
+                          target_long shamt)
+{
+    if (shamt >= 64) {
+        tcg_gen_shri_tl(retl, src1h, shamt - 64);
+        tcg_gen_movi_tl(reth, 0);
+    } else {
+        tcg_gen_extract2_tl(retl, src1l, src1h, shamt);
+        tcg_gen_shri_tl(reth, src1h, shamt);
+    }
+}
+
static bool trans_srli(DisasContext *ctx, arg_srli *a)
{
    return gen_shift_imm_fn_per_ol(ctx, a, EXT_NONE,
-                                   tcg_gen_shri_tl, gen_srliw);
+                                   tcg_gen_shri_tl, gen_srliw, gen_srli_i128);
}

static void gen_sraiw(TCGv dst, TCGv src, target_long shamt)
@@ -XXX,XX +XXX,XX @@ static void gen_sraiw(TCGv dst, TCGv src, target_long shamt)
    tcg_gen_sextract_tl(dst, src, shamt, 32 - shamt);
}

+static void gen_srai_i128(TCGv retl, TCGv reth,
+                          TCGv src1l, TCGv src1h,
+                          target_long shamt)
+{
+    if (shamt >= 64) {
+        tcg_gen_sari_tl(retl, src1h, shamt - 64);
+        tcg_gen_sari_tl(reth, src1h, 63);
+    } else {
+        tcg_gen_extract2_tl(retl, src1l, src1h, shamt);
+        tcg_gen_sari_tl(reth, src1h, shamt);
+    }
+}
+
static bool trans_srai(DisasContext *ctx, arg_srai *a)
{
    return gen_shift_imm_fn_per_ol(ctx, a, EXT_NONE,
-                                   tcg_gen_sari_tl, gen_sraiw);
+                                   tcg_gen_sari_tl, gen_sraiw, gen_srai_i128);
}

static bool trans_add(DisasContext *ctx, arg_add *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_sub(DisasContext *ctx, arg_sub *a)
    return gen_arith(ctx, a, EXT_NONE, tcg_gen_sub_tl);
}

+static void gen_sll_i128(TCGv destl, TCGv desth,
+                         TCGv src1l, TCGv src1h, TCGv shamt)
+{
+    TCGv ls = tcg_temp_new();
+    TCGv rs = tcg_temp_new();
+    TCGv hs = tcg_temp_new();
+    TCGv ll = tcg_temp_new();
+    TCGv lr = tcg_temp_new();
+    TCGv h0 = tcg_temp_new();
+    TCGv h1 = tcg_temp_new();
+    TCGv zero = tcg_constant_tl(0);
+
+    tcg_gen_andi_tl(hs, shamt, 64);
+    tcg_gen_andi_tl(ls, shamt, 63);
+    tcg_gen_neg_tl(shamt, shamt);
+    tcg_gen_andi_tl(rs, shamt, 63);
+
+    tcg_gen_shl_tl(ll, src1l, ls);
+    tcg_gen_shl_tl(h0, src1h, ls);
+    tcg_gen_shr_tl(lr, src1l, rs);
+    tcg_gen_movcond_tl(TCG_COND_NE, lr, shamt, zero, lr, zero);
+    tcg_gen_or_tl(h1, h0, lr);
+
+    tcg_gen_movcond_tl(TCG_COND_NE, destl, hs, zero, zero, ll);
+    tcg_gen_movcond_tl(TCG_COND_NE, desth, hs, zero, ll, h1);
+
+    tcg_temp_free(ls);
+    tcg_temp_free(rs);
+    tcg_temp_free(hs);
+    tcg_temp_free(ll);
+    tcg_temp_free(lr);
+    tcg_temp_free(h0);
+    tcg_temp_free(h1);
+}
+
static bool trans_sll(DisasContext *ctx, arg_sll *a)
{
-    return gen_shift(ctx, a, EXT_NONE, tcg_gen_shl_tl);
+    return gen_shift(ctx, a, EXT_NONE, tcg_gen_shl_tl, gen_sll_i128);
}

static bool trans_slt(DisasContext *ctx, arg_slt *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_sltu(DisasContext *ctx, arg_sltu *a)
    return gen_arith(ctx, a, EXT_SIGN, gen_sltu);
}

+static void gen_srl_i128(TCGv destl, TCGv desth,
+                         TCGv src1l, TCGv src1h, TCGv shamt)
+{
+    TCGv ls = tcg_temp_new();
+    TCGv rs = tcg_temp_new();
+    TCGv hs = tcg_temp_new();
+    TCGv ll = tcg_temp_new();
+    TCGv lr = tcg_temp_new();
+    TCGv h0 = tcg_temp_new();
+    TCGv h1 = tcg_temp_new();
+    TCGv zero = tcg_constant_tl(0);
+
+    tcg_gen_andi_tl(hs, shamt, 64);
+    tcg_gen_andi_tl(rs, shamt, 63);
+    tcg_gen_neg_tl(shamt, shamt);
+    tcg_gen_andi_tl(ls, shamt, 63);
+
+    tcg_gen_shr_tl(lr, src1l, rs);
+    tcg_gen_shr_tl(h1, src1h, rs);
+    tcg_gen_shl_tl(ll, src1h, ls);
+    tcg_gen_movcond_tl(TCG_COND_NE, ll, shamt, zero, ll, zero);
+    tcg_gen_or_tl(h0, ll, lr);
+
+    tcg_gen_movcond_tl(TCG_COND_NE, destl, hs, zero, h1, h0);
+    tcg_gen_movcond_tl(TCG_COND_NE, desth, hs, zero, zero, h1);
+
+    tcg_temp_free(ls);
+    tcg_temp_free(rs);
+    tcg_temp_free(hs);
+    tcg_temp_free(ll);
+    tcg_temp_free(lr);
+    tcg_temp_free(h0);
+    tcg_temp_free(h1);
+}
+
static bool trans_srl(DisasContext *ctx, arg_srl *a)
{
-    return gen_shift(ctx, a, EXT_ZERO, tcg_gen_shr_tl);
+    return gen_shift(ctx, a, EXT_ZERO, tcg_gen_shr_tl, gen_srl_i128);
+}
+
+static void gen_sra_i128(TCGv destl, TCGv desth,
+                         TCGv src1l, TCGv src1h, TCGv shamt)
+{
+    TCGv ls = tcg_temp_new();
+    TCGv rs = tcg_temp_new();
+    TCGv hs = tcg_temp_new();
+    TCGv ll = tcg_temp_new();
+    TCGv lr = tcg_temp_new();
+    TCGv h0 = tcg_temp_new();
+    TCGv h1 = tcg_temp_new();
+    TCGv zero = tcg_constant_tl(0);
+
+    tcg_gen_andi_tl(hs, shamt, 64);
+    tcg_gen_andi_tl(rs, shamt, 63);
+    tcg_gen_neg_tl(shamt, shamt);
+    tcg_gen_andi_tl(ls, shamt, 63);
+
+    tcg_gen_shr_tl(lr, src1l, rs);
+    tcg_gen_sar_tl(h1, src1h, rs);
+    tcg_gen_shl_tl(ll, src1h, ls);
+    tcg_gen_movcond_tl(TCG_COND_NE, ll, shamt, zero, ll, zero);
+    tcg_gen_or_tl(h0, ll, lr);
+    tcg_gen_sari_tl(lr, src1h, 63);
+
+    tcg_gen_movcond_tl(TCG_COND_NE, destl, hs, zero, h1, h0);
+    tcg_gen_movcond_tl(TCG_COND_NE, desth, hs, zero, lr, h1);
+
+    tcg_temp_free(ls);
+    tcg_temp_free(rs);
+    tcg_temp_free(hs);
+    tcg_temp_free(ll);
+    tcg_temp_free(lr);
+    tcg_temp_free(h0);
+    tcg_temp_free(h1);
}

static bool trans_sra(DisasContext *ctx, arg_sra *a)
{
-    return gen_shift(ctx, a, EXT_SIGN, tcg_gen_sar_tl);
+    return gen_shift(ctx, a, EXT_SIGN, tcg_gen_sar_tl, gen_sra_i128);
}

static bool trans_xor(DisasContext *ctx, arg_xor *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_addiw(DisasContext *ctx, arg_addiw *a)

static bool trans_slliw(DisasContext *ctx, arg_slliw *a)
{
-    REQUIRE_64BIT(ctx);
+    REQUIRE_64_OR_128BIT(ctx);
    ctx->ol = MXL_RV32;
-    return gen_shift_imm_fn(ctx, a, EXT_NONE, tcg_gen_shli_tl);
+    return gen_shift_imm_fn(ctx, a, EXT_NONE, tcg_gen_shli_tl, NULL);
}

static bool trans_srliw(DisasContext *ctx, arg_srliw *a)
{
-    REQUIRE_64BIT(ctx);
+    REQUIRE_64_OR_128BIT(ctx);
    ctx->ol = MXL_RV32;
-    return gen_shift_imm_fn(ctx, a, EXT_NONE, gen_srliw);
+    return gen_shift_imm_fn(ctx, a, EXT_NONE, gen_srliw, NULL);
}

static bool trans_sraiw(DisasContext *ctx, arg_sraiw *a)
{
-    REQUIRE_64BIT(ctx);
+    REQUIRE_64_OR_128BIT(ctx);
    ctx->ol = MXL_RV32;
-    return gen_shift_imm_fn(ctx, a, EXT_NONE, gen_sraiw);
+    return gen_shift_imm_fn(ctx, a, EXT_NONE, gen_sraiw, NULL);
+}
+
+static bool trans_sllid(DisasContext *ctx, arg_sllid *a)
+{
+    REQUIRE_128BIT(ctx);
+    ctx->ol = MXL_RV64;
+    return gen_shift_imm_fn(ctx, a, EXT_NONE, tcg_gen_shli_tl, NULL);
+}
+
+static bool trans_srlid(DisasContext *ctx, arg_srlid *a)
+{
+    REQUIRE_128BIT(ctx);
+    ctx->ol = MXL_RV64;
+    return gen_shift_imm_fn(ctx, a, EXT_NONE, tcg_gen_shri_tl, NULL);
+}
+
+static bool trans_sraid(DisasContext *ctx, arg_sraid *a)
+{
+    REQUIRE_128BIT(ctx);
+    ctx->ol = MXL_RV64;
+    return gen_shift_imm_fn(ctx, a, EXT_NONE, tcg_gen_sari_tl, NULL);
}

static bool trans_addw(DisasContext *ctx, arg_addw *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_subw(DisasContext *ctx, arg_subw *a)

static bool trans_sllw(DisasContext *ctx, arg_sllw *a)
{
-    REQUIRE_64BIT(ctx);
+    REQUIRE_64_OR_128BIT(ctx);
    ctx->ol = MXL_RV32;
-    return gen_shift(ctx, a, EXT_NONE, tcg_gen_shl_tl);
+    return gen_shift(ctx, a, EXT_NONE, tcg_gen_shl_tl, NULL);
}

static bool trans_srlw(DisasContext *ctx, arg_srlw *a)
{
-    REQUIRE_64BIT(ctx);
+    REQUIRE_64_OR_128BIT(ctx);
    ctx->ol = MXL_RV32;
-    return gen_shift(ctx, a, EXT_ZERO, tcg_gen_shr_tl);
+    return gen_shift(ctx, a, EXT_ZERO, tcg_gen_shr_tl, NULL);
}

static bool trans_sraw(DisasContext *ctx, arg_sraw *a)
{
-    REQUIRE_64BIT(ctx);
+    REQUIRE_64_OR_128BIT(ctx);
    ctx->ol = MXL_RV32;
-    return gen_shift(ctx, a, EXT_SIGN, tcg_gen_sar_tl);
+    return gen_shift(ctx, a, EXT_SIGN, tcg_gen_sar_tl, NULL);
+}
+
+static bool trans_slld(DisasContext *ctx, arg_slld *a)
+{
+    REQUIRE_128BIT(ctx);
+    ctx->ol = MXL_RV64;
+    return gen_shift(ctx, a, EXT_NONE, tcg_gen_shl_tl, NULL);
}

+static bool trans_srld(DisasContext *ctx, arg_srld *a)
+{
+    REQUIRE_128BIT(ctx);
+    ctx->ol = MXL_RV64;
+    return gen_shift(ctx, a, EXT_ZERO, tcg_gen_shr_tl, NULL);
+}
+
+static bool trans_srad(DisasContext *ctx, arg_srad *a)
+{
+    REQUIRE_128BIT(ctx);
+    ctx->ol = MXL_RV64;
+    return gen_shift(ctx, a, EXT_SIGN, tcg_gen_sar_tl, NULL);
+}
+
+
static bool trans_fence(DisasContext *ctx, arg_fence *a)
{
    /* FENCE is a full memory barrier. */
--
2.31.1
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>

Addition of 128-bit adds and subs in their various sizes,
"set if less than"s and branches.
Refactored the code to have a comparison function used for both slts and
branches.
7
8
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
9
Co-authored-by: Fabien Portas <fabien.portas@grenoble-inp.org>
10
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
11
Message-id: 20220106210108.138226-14-frederic.petrot@univ-grenoble-alpes.fr
12
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
13
---
14
target/riscv/insn32.decode | 3 +
15
target/riscv/translate.c | 63 ++++++++--
16
target/riscv/insn_trans/trans_rvb.c.inc | 20 +--
17
target/riscv/insn_trans/trans_rvi.c.inc | 159 +++++++++++++++++++++---
18
target/riscv/insn_trans/trans_rvm.c.inc | 26 ++--
19
5 files changed, 222 insertions(+), 49 deletions(-)
20
21
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
22
index XXXXXXX..XXXXXXX 100644
23
--- a/target/riscv/insn32.decode
24
+++ b/target/riscv/insn32.decode
25
@@ -XXX,XX +XXX,XX @@ sraw 0100000 ..... ..... 101 ..... 0111011 @r
26
ldu ............ ..... 111 ..... 0000011 @i
27
lq ............ ..... 010 ..... 0001111 @i
28
sq ............ ..... 100 ..... 0100011 @s
29
+addid ............ ..... 000 ..... 1011011 @i
30
sllid 000000 ...... ..... 001 ..... 1011011 @sh6
31
srlid 000000 ...... ..... 101 ..... 1011011 @sh6
32
sraid 010000 ...... ..... 101 ..... 1011011 @sh6
33
+addd 0000000 ..... ..... 000 ..... 1111011 @r
34
+subd 0100000 ..... ..... 000 ..... 1111011 @r
35
slld 0000000 ..... ..... 001 ..... 1111011 @r
36
srld 0000000 ..... ..... 101 ..... 1111011 @r
37
srad 0100000 ..... ..... 101 ..... 1111011 @r
38
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/riscv/translate.c
41
+++ b/target/riscv/translate.c
42
@@ -XXX,XX +XXX,XX @@ static bool gen_logic(DisasContext *ctx, arg_r *a,
43
}
44
45
static bool gen_arith_imm_fn(DisasContext *ctx, arg_i *a, DisasExtend ext,
46
- void (*func)(TCGv, TCGv, target_long))
47
+ void (*func)(TCGv, TCGv, target_long),
48
+ void (*f128)(TCGv, TCGv, TCGv, TCGv, target_long))
49
{
50
TCGv dest = dest_gpr(ctx, a->rd);
51
TCGv src1 = get_gpr(ctx, a->rs1, ext);
52
53
- func(dest, src1, a->imm);
54
+ if (get_ol(ctx) < MXL_RV128) {
55
+ func(dest, src1, a->imm);
56
+ gen_set_gpr(ctx, a->rd, dest);
57
+ } else {
58
+ if (f128 == NULL) {
59
+ return false;
60
+ }
61
62
- gen_set_gpr(ctx, a->rd, dest);
63
+ TCGv src1h = get_gprh(ctx, a->rs1);
64
+ TCGv desth = dest_gprh(ctx, a->rd);
65
+
66
+ f128(dest, desth, src1, src1h, a->imm);
67
+ gen_set_gpr128(ctx, a->rd, dest, desth);
68
+ }
69
return true;
70
}
71
72
static bool gen_arith_imm_tl(DisasContext *ctx, arg_i *a, DisasExtend ext,
73
- void (*func)(TCGv, TCGv, TCGv))
74
+ void (*func)(TCGv, TCGv, TCGv),
75
+ void (*f128)(TCGv, TCGv, TCGv, TCGv, TCGv, TCGv))
76
{
77
TCGv dest = dest_gpr(ctx, a->rd);
78
TCGv src1 = get_gpr(ctx, a->rs1, ext);
79
TCGv src2 = tcg_constant_tl(a->imm);
80
81
- func(dest, src1, src2);
82
+ if (get_ol(ctx) < MXL_RV128) {
83
+ func(dest, src1, src2);
84
+ gen_set_gpr(ctx, a->rd, dest);
85
+ } else {
86
+ if (f128 == NULL) {
87
+ return false;
88
+ }
89
90
- gen_set_gpr(ctx, a->rd, dest);
91
+ TCGv src1h = get_gprh(ctx, a->rs1);
92
+ TCGv src2h = tcg_constant_tl(-(a->imm < 0));
93
+ TCGv desth = dest_gprh(ctx, a->rd);
94
+
95
+ f128(dest, desth, src1, src1h, src2, src2h);
96
+ gen_set_gpr128(ctx, a->rd, dest, desth);
97
+ }
98
return true;
99
}
100
101
static bool gen_arith(DisasContext *ctx, arg_r *a, DisasExtend ext,
102
- void (*func)(TCGv, TCGv, TCGv))
103
+ void (*func)(TCGv, TCGv, TCGv),
104
+ void (*f128)(TCGv, TCGv, TCGv, TCGv, TCGv, TCGv))
105
{
106
TCGv dest = dest_gpr(ctx, a->rd);
107
TCGv src1 = get_gpr(ctx, a->rs1, ext);
108
TCGv src2 = get_gpr(ctx, a->rs2, ext);
109
110
- func(dest, src1, src2);
111
+ if (get_ol(ctx) < MXL_RV128) {
112
+ func(dest, src1, src2);
113
+ gen_set_gpr(ctx, a->rd, dest);
114
+ } else {
115
+ if (f128 == NULL) {
116
+ return false;
117
+ }
118
119
- gen_set_gpr(ctx, a->rd, dest);
120
+ TCGv src1h = get_gprh(ctx, a->rs1);
121
+ TCGv src2h = get_gprh(ctx, a->rs2);
122
+ TCGv desth = dest_gprh(ctx, a->rd);
123
+
124
+ f128(dest, desth, src1, src1h, src2, src2h);
125
+ gen_set_gpr128(ctx, a->rd, dest, desth);
126
+ }
127
return true;
128
}
129
130
static bool gen_arith_per_ol(DisasContext *ctx, arg_r *a, DisasExtend ext,
131
void (*f_tl)(TCGv, TCGv, TCGv),
132
- void (*f_32)(TCGv, TCGv, TCGv))
133
+ void (*f_32)(TCGv, TCGv, TCGv),
134
+ void (*f_128)(TCGv, TCGv, TCGv, TCGv, TCGv, TCGv))
135
{
136
int olen = get_olen(ctx);
137
138
if (olen != TARGET_LONG_BITS) {
139
if (olen == 32) {
140
f_tl = f_32;
141
- } else {
142
+ } else if (olen != 128) {
143
g_assert_not_reached();
144
}
145
}
146
- return gen_arith(ctx, a, ext, f_tl);
147
+ return gen_arith(ctx, a, ext, f_tl, f_128);
148
}
149
150
static bool gen_shift_imm_fn(DisasContext *ctx, arg_shift *a, DisasExtend ext,
151
diff --git a/target/riscv/insn_trans/trans_rvb.c.inc b/target/riscv/insn_trans/trans_rvb.c.inc
152
index XXXXXXX..XXXXXXX 100644
153
--- a/target/riscv/insn_trans/trans_rvb.c.inc
154
+++ b/target/riscv/insn_trans/trans_rvb.c.inc
155
@@ -XXX,XX +XXX,XX @@ static bool trans_xnor(DisasContext *ctx, arg_xnor *a)
156
static bool trans_min(DisasContext *ctx, arg_min *a)
157
{
158
REQUIRE_ZBB(ctx);
159
- return gen_arith(ctx, a, EXT_SIGN, tcg_gen_smin_tl);
160
+ return gen_arith(ctx, a, EXT_SIGN, tcg_gen_smin_tl, NULL);
161
}
162
163
static bool trans_max(DisasContext *ctx, arg_max *a)
164
{
165
REQUIRE_ZBB(ctx);
166
- return gen_arith(ctx, a, EXT_SIGN, tcg_gen_smax_tl);
167
+ return gen_arith(ctx, a, EXT_SIGN, tcg_gen_smax_tl, NULL);
168
}
169
170
static bool trans_minu(DisasContext *ctx, arg_minu *a)
171
{
172
REQUIRE_ZBB(ctx);
173
- return gen_arith(ctx, a, EXT_SIGN, tcg_gen_umin_tl);
174
+ return gen_arith(ctx, a, EXT_SIGN, tcg_gen_umin_tl, NULL);
175
}
176
177
static bool trans_maxu(DisasContext *ctx, arg_maxu *a)
178
{
179
REQUIRE_ZBB(ctx);
180
- return gen_arith(ctx, a, EXT_SIGN, tcg_gen_umax_tl);
181
+ return gen_arith(ctx, a, EXT_SIGN, tcg_gen_umax_tl, NULL);
182
}
183
184
static bool trans_sext_b(DisasContext *ctx, arg_sext_b *a)
185
@@ -XXX,XX +XXX,XX @@ GEN_SHADD(3)
186
static bool trans_sh##SHAMT##add(DisasContext *ctx, arg_sh##SHAMT##add *a) \
187
{ \
188
REQUIRE_ZBA(ctx); \
189
- return gen_arith(ctx, a, EXT_NONE, gen_sh##SHAMT##add); \
190
+ return gen_arith(ctx, a, EXT_NONE, gen_sh##SHAMT##add, NULL); \
191
}
192
193
GEN_TRANS_SHADD(1)
194
@@ -XXX,XX +XXX,XX @@ static bool trans_sh##SHAMT##add_uw(DisasContext *ctx, \
195
{ \
196
REQUIRE_64BIT(ctx); \
197
REQUIRE_ZBA(ctx); \
198
- return gen_arith(ctx, a, EXT_NONE, gen_sh##SHAMT##add_uw); \
199
+ return gen_arith(ctx, a, EXT_NONE, gen_sh##SHAMT##add_uw, NULL); \
200
}
201
202
GEN_TRANS_SHADD_UW(1)
203
@@ -XXX,XX +XXX,XX @@ static bool trans_add_uw(DisasContext *ctx, arg_add_uw *a)
204
{
205
REQUIRE_64BIT(ctx);
206
REQUIRE_ZBA(ctx);
207
- return gen_arith(ctx, a, EXT_NONE, gen_add_uw);
208
+ return gen_arith(ctx, a, EXT_NONE, gen_add_uw, NULL);
209
}
210
211
static void gen_slli_uw(TCGv dest, TCGv src, target_long shamt)
212
@@ -XXX,XX +XXX,XX @@ static bool trans_slli_uw(DisasContext *ctx, arg_slli_uw *a)
213
static bool trans_clmul(DisasContext *ctx, arg_clmul *a)
214
{
215
REQUIRE_ZBC(ctx);
216
- return gen_arith(ctx, a, EXT_NONE, gen_helper_clmul);
217
+ return gen_arith(ctx, a, EXT_NONE, gen_helper_clmul, NULL);
218
}
219
220
static void gen_clmulh(TCGv dst, TCGv src1, TCGv src2)
221
@@ -XXX,XX +XXX,XX @@ static void gen_clmulh(TCGv dst, TCGv src1, TCGv src2)
222
static bool trans_clmulh(DisasContext *ctx, arg_clmulr *a)
223
{
224
REQUIRE_ZBC(ctx);
225
- return gen_arith(ctx, a, EXT_NONE, gen_clmulh);
226
+ return gen_arith(ctx, a, EXT_NONE, gen_clmulh, NULL);
227
}
228
229
static bool trans_clmulr(DisasContext *ctx, arg_clmulh *a)
230
{
231
REQUIRE_ZBC(ctx);
232
- return gen_arith(ctx, a, EXT_NONE, gen_helper_clmulr);
233
+ return gen_arith(ctx, a, EXT_NONE, gen_helper_clmulr, NULL);
234
}
235
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
236
index XXXXXXX..XXXXXXX 100644
237
--- a/target/riscv/insn_trans/trans_rvi.c.inc
238
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
239
@@ -XXX,XX +XXX,XX @@ static bool trans_jalr(DisasContext *ctx, arg_jalr *a)
240
return true;
241
}
242
243
+static TCGCond gen_compare_i128(bool bz, TCGv rl,
244
+ TCGv al, TCGv ah, TCGv bl, TCGv bh,
245
+ TCGCond cond)
246
+{
247
+ TCGv rh = tcg_temp_new();
248
+ bool invert = false;
249
+
250
+ switch (cond) {
251
+ case TCG_COND_EQ:
252
+ case TCG_COND_NE:
253
+ if (bz) {
254
+ tcg_gen_or_tl(rl, al, ah);
255
+ } else {
256
+ tcg_gen_xor_tl(rl, al, bl);
257
+ tcg_gen_xor_tl(rh, ah, bh);
258
+ tcg_gen_or_tl(rl, rl, rh);
259
+ }
260
+ break;
261
+
262
+ case TCG_COND_GE:
263
+ case TCG_COND_LT:
264
+ if (bz) {
265
+ tcg_gen_mov_tl(rl, ah);
266
+ } else {
267
+ TCGv tmp = tcg_temp_new();
268
+
269
+ tcg_gen_sub2_tl(rl, rh, al, ah, bl, bh);
270
+ tcg_gen_xor_tl(rl, rh, ah);
271
+ tcg_gen_xor_tl(tmp, ah, bh);
272
+ tcg_gen_and_tl(rl, rl, tmp);
273
+ tcg_gen_xor_tl(rl, rh, rl);
274
+
275
+ tcg_temp_free(tmp);
276
+ }
277
+ break;
278
+
279
+ case TCG_COND_LTU:
280
+ invert = true;
281
+ /* fallthrough */
282
+ case TCG_COND_GEU:
283
+ {
284
+ TCGv tmp = tcg_temp_new();
285
+ TCGv zero = tcg_constant_tl(0);
286
+ TCGv one = tcg_constant_tl(1);
287
+
288
+ cond = TCG_COND_NE;
289
+ /* borrow in to second word */
290
+ tcg_gen_setcond_tl(TCG_COND_LTU, tmp, al, bl);
291
+ /* seed third word with 1, which will be result */
292
+ tcg_gen_sub2_tl(tmp, rh, ah, one, tmp, zero);
293
+ tcg_gen_sub2_tl(tmp, rl, tmp, rh, bh, zero);
294
+
295
+ tcg_temp_free(tmp);
296
+ }
297
+ break;
298
+
299
+ default:
300
+ g_assert_not_reached();
301
+ }
302
+
303
+ if (invert) {
304
+ cond = tcg_invert_cond(cond);
305
+ }
306
+
307
+ tcg_temp_free(rh);
308
+ return cond;
309
+}
310
+
311
+static void gen_setcond_i128(TCGv rl, TCGv rh,
312
+ TCGv src1l, TCGv src1h,
313
+ TCGv src2l, TCGv src2h,
314
+ TCGCond cond)
315
+{
316
+ cond = gen_compare_i128(false, rl, src1l, src1h, src2l, src2h, cond);
317
+ tcg_gen_setcondi_tl(cond, rl, rl, 0);
318
+ tcg_gen_movi_tl(rh, 0);
319
+}
320
+
321
static bool gen_branch(DisasContext *ctx, arg_b *a, TCGCond cond)
322
{
323
TCGLabel *l = gen_new_label();
324
TCGv src1 = get_gpr(ctx, a->rs1, EXT_SIGN);
325
TCGv src2 = get_gpr(ctx, a->rs2, EXT_SIGN);
326
327
- tcg_gen_brcond_tl(cond, src1, src2, l);
328
+ if (get_xl(ctx) == MXL_RV128) {
329
+ TCGv src1h = get_gprh(ctx, a->rs1);
330
+ TCGv src2h = get_gprh(ctx, a->rs2);
331
+ TCGv tmp = tcg_temp_new();
332
+
333
+ cond = gen_compare_i128(a->rs2 == 0,
334
+ tmp, src1, src1h, src2, src2h, cond);
335
+ tcg_gen_brcondi_tl(cond, tmp, 0, l);
336
+
337
+ tcg_temp_free(tmp);
338
+ } else {
339
+ tcg_gen_brcond_tl(cond, src1, src2, l);
340
+ }
341
gen_goto_tb(ctx, 1, ctx->pc_succ_insn);
342
343
gen_set_label(l); /* branch taken */
344
@@ -XXX,XX +XXX,XX @@ static bool trans_sq(DisasContext *ctx, arg_sq *a)
345
return gen_store(ctx, a, MO_TEUO);
346
}
347
348
+static bool trans_addd(DisasContext *ctx, arg_addd *a)
349
+{
350
+ REQUIRE_128BIT(ctx);
351
+ ctx->ol = MXL_RV64;
352
+ return gen_arith(ctx, a, EXT_NONE, tcg_gen_add_tl, NULL);
353
+}
354
+
355
+static bool trans_addid(DisasContext *ctx, arg_addid *a)
356
+{
357
+ REQUIRE_128BIT(ctx);
358
+ ctx->ol = MXL_RV64;
359
+ return gen_arith_imm_fn(ctx, a, EXT_NONE, tcg_gen_addi_tl, NULL);
360
+}
361
+
362
+static bool trans_subd(DisasContext *ctx, arg_subd *a)
363
+{
364
+ REQUIRE_128BIT(ctx);
365
+ ctx->ol = MXL_RV64;
366
+ return gen_arith(ctx, a, EXT_NONE, tcg_gen_sub_tl, NULL);
367
+}
368
+
369
+static void gen_addi2_i128(TCGv retl, TCGv reth,
370
+ TCGv srcl, TCGv srch, target_long imm)
371
+{
372
+ TCGv imml = tcg_constant_tl(imm);
373
+ TCGv immh = tcg_constant_tl(-(imm < 0));
374
+ tcg_gen_add2_tl(retl, reth, srcl, srch, imml, immh);
375
+}
376
+
377
static bool trans_addi(DisasContext *ctx, arg_addi *a)
378
{
379
- return gen_arith_imm_fn(ctx, a, EXT_NONE, tcg_gen_addi_tl);
380
+ return gen_arith_imm_fn(ctx, a, EXT_NONE, tcg_gen_addi_tl, gen_addi2_i128);
381
}
382
383
static void gen_slt(TCGv ret, TCGv s1, TCGv s2)
384
@@ -XXX,XX +XXX,XX @@ static void gen_slt(TCGv ret, TCGv s1, TCGv s2)
385
tcg_gen_setcond_tl(TCG_COND_LT, ret, s1, s2);
386
}
387
388
+static void gen_slt_i128(TCGv retl, TCGv reth,
389
+ TCGv s1l, TCGv s1h, TCGv s2l, TCGv s2h)
390
+{
391
+ gen_setcond_i128(retl, reth, s1l, s1h, s2l, s2h, TCG_COND_LT);
392
+}
393
+
394
static void gen_sltu(TCGv ret, TCGv s1, TCGv s2)
395
{
396
tcg_gen_setcond_tl(TCG_COND_LTU, ret, s1, s2);
397
}
398
399
+static void gen_sltu_i128(TCGv retl, TCGv reth,
400
+ TCGv s1l, TCGv s1h, TCGv s2l, TCGv s2h)
401
+{
402
+ gen_setcond_i128(retl, reth, s1l, s1h, s2l, s2h, TCG_COND_LTU);
403
+}
404
+
405
static bool trans_slti(DisasContext *ctx, arg_slti *a)
406
{
407
- return gen_arith_imm_tl(ctx, a, EXT_SIGN, gen_slt);
408
+ return gen_arith_imm_tl(ctx, a, EXT_SIGN, gen_slt, gen_slt_i128);
409
}
410
411
static bool trans_sltiu(DisasContext *ctx, arg_sltiu *a)
412
{
413
- return gen_arith_imm_tl(ctx, a, EXT_SIGN, gen_sltu);
414
+ return gen_arith_imm_tl(ctx, a, EXT_SIGN, gen_sltu, gen_sltu_i128);
415
}
416
417
static bool trans_xori(DisasContext *ctx, arg_xori *a)
418
@@ -XXX,XX +XXX,XX @@ static bool trans_srai(DisasContext *ctx, arg_srai *a)
419
420
static bool trans_add(DisasContext *ctx, arg_add *a)
421
{
422
- return gen_arith(ctx, a, EXT_NONE, tcg_gen_add_tl);
423
+ return gen_arith(ctx, a, EXT_NONE, tcg_gen_add_tl, tcg_gen_add2_tl);
424
}
425
426
static bool trans_sub(DisasContext *ctx, arg_sub *a)
427
{
428
- return gen_arith(ctx, a, EXT_NONE, tcg_gen_sub_tl);
429
+ return gen_arith(ctx, a, EXT_NONE, tcg_gen_sub_tl, tcg_gen_sub2_tl);
430
}
431
432
static void gen_sll_i128(TCGv destl, TCGv desth,
433
@@ -XXX,XX +XXX,XX @@ static bool trans_sll(DisasContext *ctx, arg_sll *a)
434
435
static bool trans_slt(DisasContext *ctx, arg_slt *a)
436
{
437
- return gen_arith(ctx, a, EXT_SIGN, gen_slt);
438
+ return gen_arith(ctx, a, EXT_SIGN, gen_slt, gen_slt_i128);
439
}
440
441
static bool trans_sltu(DisasContext *ctx, arg_sltu *a)
442
{
443
- return gen_arith(ctx, a, EXT_SIGN, gen_sltu);
444
+ return gen_arith(ctx, a, EXT_SIGN, gen_sltu, gen_sltu_i128);
445
}
446
447
static void gen_srl_i128(TCGv destl, TCGv desth,
448
@@ -XXX,XX +XXX,XX @@ static bool trans_and(DisasContext *ctx, arg_and *a)
449
450
static bool trans_addiw(DisasContext *ctx, arg_addiw *a)
451
{
452
- REQUIRE_64BIT(ctx);
453
+ REQUIRE_64_OR_128BIT(ctx);
454
ctx->ol = MXL_RV32;
455
- return gen_arith_imm_fn(ctx, a, EXT_NONE, tcg_gen_addi_tl);
456
+ return gen_arith_imm_fn(ctx, a, EXT_NONE, tcg_gen_addi_tl, NULL);
457
}
458
459
static bool trans_slliw(DisasContext *ctx, arg_slliw *a)
460
@@ -XXX,XX +XXX,XX @@ static bool trans_sraid(DisasContext *ctx, arg_sraid *a)
461
462
static bool trans_addw(DisasContext *ctx, arg_addw *a)
463
{
464
- REQUIRE_64BIT(ctx);
465
+ REQUIRE_64_OR_128BIT(ctx);
466
ctx->ol = MXL_RV32;
467
- return gen_arith(ctx, a, EXT_NONE, tcg_gen_add_tl);
468
+ return gen_arith(ctx, a, EXT_NONE, tcg_gen_add_tl, NULL);
469
}
470
471
static bool trans_subw(DisasContext *ctx, arg_subw *a)
472
{
473
- REQUIRE_64BIT(ctx);
474
+ REQUIRE_64_OR_128BIT(ctx);
475
ctx->ol = MXL_RV32;
476
- return gen_arith(ctx, a, EXT_NONE, tcg_gen_sub_tl);
477
+ return gen_arith(ctx, a, EXT_NONE, tcg_gen_sub_tl, NULL);
478
}
479
480
static bool trans_sllw(DisasContext *ctx, arg_sllw *a)
481
diff --git a/target/riscv/insn_trans/trans_rvm.c.inc b/target/riscv/insn_trans/trans_rvm.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvm.c.inc
+++ b/target/riscv/insn_trans/trans_rvm.c.inc
@@ -XXX,XX +XXX,XX @@
 static bool trans_mul(DisasContext *ctx, arg_mul *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith(ctx, a, EXT_NONE, tcg_gen_mul_tl);
+    return gen_arith(ctx, a, EXT_NONE, tcg_gen_mul_tl, NULL);
 }
 
 static void gen_mulh(TCGv ret, TCGv s1, TCGv s2)
@@ -XXX,XX +XXX,XX @@ static void gen_mulh_w(TCGv ret, TCGv s1, TCGv s2)
 static bool trans_mulh(DisasContext *ctx, arg_mulh *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith_per_ol(ctx, a, EXT_SIGN, gen_mulh, gen_mulh_w);
+    return gen_arith_per_ol(ctx, a, EXT_SIGN, gen_mulh, gen_mulh_w, NULL);
 }
 
 static void gen_mulhsu(TCGv ret, TCGv arg1, TCGv arg2)
@@ -XXX,XX +XXX,XX @@ static void gen_mulhsu_w(TCGv ret, TCGv arg1, TCGv arg2)
 static bool trans_mulhsu(DisasContext *ctx, arg_mulhsu *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith_per_ol(ctx, a, EXT_NONE, gen_mulhsu, gen_mulhsu_w);
+    return gen_arith_per_ol(ctx, a, EXT_NONE, gen_mulhsu, gen_mulhsu_w, NULL);
 }
 
 static void gen_mulhu(TCGv ret, TCGv s1, TCGv s2)
@@ -XXX,XX +XXX,XX @@ static bool trans_mulhu(DisasContext *ctx, arg_mulhu *a)
 {
     REQUIRE_EXT(ctx, RVM);
     /* gen_mulh_w works for either sign as input. */
-    return gen_arith_per_ol(ctx, a, EXT_ZERO, gen_mulhu, gen_mulh_w);
+    return gen_arith_per_ol(ctx, a, EXT_ZERO, gen_mulhu, gen_mulh_w, NULL);
 }
 
 static void gen_div(TCGv ret, TCGv source1, TCGv source2)
@@ -XXX,XX +XXX,XX @@ static void gen_div(TCGv ret, TCGv source1, TCGv source2)
 static bool trans_div(DisasContext *ctx, arg_div *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith(ctx, a, EXT_SIGN, gen_div);
+    return gen_arith(ctx, a, EXT_SIGN, gen_div, NULL);
 }
 
 static void gen_divu(TCGv ret, TCGv source1, TCGv source2)
@@ -XXX,XX +XXX,XX @@ static void gen_divu(TCGv ret, TCGv source1, TCGv source2)
 static bool trans_divu(DisasContext *ctx, arg_divu *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith(ctx, a, EXT_ZERO, gen_divu);
+    return gen_arith(ctx, a, EXT_ZERO, gen_divu, NULL);
 }
 
 static void gen_rem(TCGv ret, TCGv source1, TCGv source2)
@@ -XXX,XX +XXX,XX @@ static void gen_rem(TCGv ret, TCGv source1, TCGv source2)
 static bool trans_rem(DisasContext *ctx, arg_rem *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith(ctx, a, EXT_SIGN, gen_rem);
+    return gen_arith(ctx, a, EXT_SIGN, gen_rem, NULL);
 }
 
 static void gen_remu(TCGv ret, TCGv source1, TCGv source2)
@@ -XXX,XX +XXX,XX @@ static void gen_remu(TCGv ret, TCGv source1, TCGv source2)
 static bool trans_remu(DisasContext *ctx, arg_remu *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith(ctx, a, EXT_ZERO, gen_remu);
+    return gen_arith(ctx, a, EXT_ZERO, gen_remu, NULL);
 }
 
 static bool trans_mulw(DisasContext *ctx, arg_mulw *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_mulw(DisasContext *ctx, arg_mulw *a)
     REQUIRE_64BIT(ctx);
     REQUIRE_EXT(ctx, RVM);
     ctx->ol = MXL_RV32;
-    return gen_arith(ctx, a, EXT_NONE, tcg_gen_mul_tl);
+    return gen_arith(ctx, a, EXT_NONE, tcg_gen_mul_tl, NULL);
 }
 
 static bool trans_divw(DisasContext *ctx, arg_divw *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_divw(DisasContext *ctx, arg_divw *a)
     REQUIRE_64BIT(ctx);
     REQUIRE_EXT(ctx, RVM);
     ctx->ol = MXL_RV32;
-    return gen_arith(ctx, a, EXT_SIGN, gen_div);
+    return gen_arith(ctx, a, EXT_SIGN, gen_div, NULL);
 }
 
 static bool trans_divuw(DisasContext *ctx, arg_divuw *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_divuw(DisasContext *ctx, arg_divuw *a)
     REQUIRE_64BIT(ctx);
     REQUIRE_EXT(ctx, RVM);
     ctx->ol = MXL_RV32;
-    return gen_arith(ctx, a, EXT_ZERO, gen_divu);
+    return gen_arith(ctx, a, EXT_ZERO, gen_divu, NULL);
 }
 
 static bool trans_remw(DisasContext *ctx, arg_remw *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_remw(DisasContext *ctx, arg_remw *a)
     REQUIRE_64BIT(ctx);
     REQUIRE_EXT(ctx, RVM);
     ctx->ol = MXL_RV32;
-    return gen_arith(ctx, a, EXT_SIGN, gen_rem);
+    return gen_arith(ctx, a, EXT_SIGN, gen_rem, NULL);
 }
 
 static bool trans_remuw(DisasContext *ctx, arg_remuw *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_remuw(DisasContext *ctx, arg_remuw *a)
     REQUIRE_64BIT(ctx);
     REQUIRE_EXT(ctx, RVM);
     ctx->ol = MXL_RV32;
-    return gen_arith(ctx, a, EXT_ZERO, gen_remu);
+    return gen_arith(ctx, a, EXT_ZERO, gen_remu, NULL);
 }
-- 
2.31.1
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>

Multiplications are generated inline (using a cool trick pointed out by
Richard), but for div and rem, given the complexity of the implementation of
these instructions, we call helpers to produce their behavior. From an
implementation standpoint, the helpers return the low part of the results,
while the high part is temporarily stored in a dedicated field of cpu_env
that is used to update the architectural register in the generation wrapper.

Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Co-authored-by: Fabien Portas <fabien.portas@grenoble-inp.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106210108.138226-15-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.h                      |   3 +
 target/riscv/helper.h                   |   6 +
 target/riscv/insn32.decode              |   7 +
 target/riscv/m128_helper.c              | 109 ++++++++++++++
 target/riscv/insn_trans/trans_rvm.c.inc | 182 ++++++++++++++++++++++--
 target/riscv/meson.build                |   1 +
 6 files changed, 295 insertions(+), 13 deletions(-)
 create mode 100644 target/riscv/m128_helper.c
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@ struct CPURISCVState {
     uint32_t misa_ext;      /* current extensions */
     uint32_t misa_ext_mask; /* max ext for this cpu */
 
+    /* 128-bit helpers upper part return value */
+    target_ulong retxh;
+
     uint32_t features;
 
 #ifdef CONFIG_USER_ONLY
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_5(vsext_vf2_d, void, ptr, ptr, ptr, env, i32)
 DEF_HELPER_5(vsext_vf4_w, void, ptr, ptr, ptr, env, i32)
 DEF_HELPER_5(vsext_vf4_d, void, ptr, ptr, ptr, env, i32)
 DEF_HELPER_5(vsext_vf8_d, void, ptr, ptr, ptr, env, i32)
+
+/* 128-bit integer multiplication and division */
+DEF_HELPER_5(divu_i128, tl, env, tl, tl, tl, tl)
+DEF_HELPER_5(divs_i128, tl, env, tl, tl, tl, tl)
+DEF_HELPER_5(remu_i128, tl, env, tl, tl, tl, tl)
+DEF_HELPER_5(rems_i128, tl, env, tl, tl, tl, tl)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -XXX,XX +XXX,XX @@ divuw    0000001 ..... ..... 101 ..... 0111011 @r
 remw     0000001 ..... ..... 110 ..... 0111011 @r
 remuw    0000001 ..... ..... 111 ..... 0111011 @r
 
+# *** RV128M Standard Extension (in addition to RV64M) ***
+muld     0000001 ..... ..... 000 ..... 1111011 @r
+divd     0000001 ..... ..... 100 ..... 1111011 @r
+divud    0000001 ..... ..... 101 ..... 1111011 @r
+remd     0000001 ..... ..... 110 ..... 1111011 @r
+remud    0000001 ..... ..... 111 ..... 1111011 @r
+
 # *** RV32A Standard Extension ***
 lr_w       00010 . . 00000 ..... 010 ..... 0101111 @atom_ld
 sc_w       00011 . . ..... ..... 010 ..... 0101111 @atom_st
diff --git a/target/riscv/m128_helper.c b/target/riscv/m128_helper.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/riscv/m128_helper.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * RISC-V Emulation Helpers for QEMU.
+ *
+ * Copyright (c) 2016-2017 Sagar Karandikar, sagark@eecs.berkeley.edu
+ * Copyright (c) 2017-2018 SiFive, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "qemu/main-loop.h"
+#include "exec/exec-all.h"
+#include "exec/helper-proto.h"
+
+target_ulong HELPER(divu_i128)(CPURISCVState *env,
+                               target_ulong ul, target_ulong uh,
+                               target_ulong vl, target_ulong vh)
+{
+    target_ulong ql, qh;
+    Int128 q;
+
+    if (vl == 0 && vh == 0) { /* Handle special behavior on div by zero */
+        ql = ~0x0;
+        qh = ~0x0;
+    } else {
+        q = int128_divu(int128_make128(ul, uh), int128_make128(vl, vh));
+        ql = int128_getlo(q);
+        qh = int128_gethi(q);
+    }
+
+    env->retxh = qh;
+    return ql;
+}
+
+target_ulong HELPER(remu_i128)(CPURISCVState *env,
+                               target_ulong ul, target_ulong uh,
+                               target_ulong vl, target_ulong vh)
+{
+    target_ulong rl, rh;
+    Int128 r;
+
+    if (vl == 0 && vh == 0) {
+        rl = ul;
+        rh = uh;
+    } else {
+        r = int128_remu(int128_make128(ul, uh), int128_make128(vl, vh));
+        rl = int128_getlo(r);
+        rh = int128_gethi(r);
+    }
+
+    env->retxh = rh;
+    return rl;
+}
+
+target_ulong HELPER(divs_i128)(CPURISCVState *env,
+                               target_ulong ul, target_ulong uh,
+                               target_ulong vl, target_ulong vh)
+{
+    target_ulong qh, ql;
+    Int128 q;
+
+    if (vl == 0 && vh == 0) { /* Div by zero check */
+        ql = ~0x0;
+        qh = ~0x0;
+    } else if (uh == (1ULL << (TARGET_LONG_BITS - 1)) && ul == 0 &&
+               vh == ~0x0 && vl == ~0x0) {
+        /* Signed div overflow check (-2**127 / -1) */
+        ql = ul;
+        qh = uh;
+    } else {
+        q = int128_divs(int128_make128(ul, uh), int128_make128(vl, vh));
+        ql = int128_getlo(q);
+        qh = int128_gethi(q);
+    }
+
+    env->retxh = qh;
+    return ql;
+}
+
+target_ulong HELPER(rems_i128)(CPURISCVState *env,
+                               target_ulong ul, target_ulong uh,
+                               target_ulong vl, target_ulong vh)
+{
+    target_ulong rh, rl;
+    Int128 r;
+
+    if (vl == 0 && vh == 0) {
+        rl = ul;
+        rh = uh;
+    } else {
+        r = int128_rems(int128_make128(ul, uh), int128_make128(vl, vh));
+        rl = int128_getlo(r);
+        rh = int128_gethi(r);
+    }
+
+    env->retxh = rh;
+    return rl;
+}
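[Editor's note: the helpers above implement the RISC-V M-extension corner cases before delegating to the Int128 routines: division by zero yields an all-ones quotient, and the one signed overflow case (-2**127 / -1) returns the dividend unchanged. A Python model of divs_i128's decision tree, operating on 128-bit patterns (illustrative sketch, not QEMU code):]

```python
MASK128 = (1 << 128) - 1
INT128_MIN = -(1 << 127)

def to_signed(x: int) -> int:
    """Interpret a 128-bit pattern as a two's-complement integer."""
    x &= MASK128
    return x - (1 << 128) if x >> 127 else x

def divs_i128(u: int, v: int) -> int:
    """Signed 128-bit division with the RISC-V edge cases; returns a bit pattern."""
    u, v = to_signed(u), to_signed(v)
    if v == 0:
        return MASK128              # div by zero -> quotient of all ones (-1)
    if u == INT128_MIN and v == -1:
        return u & MASK128          # signed overflow -> dividend unchanged
    q = abs(u) // abs(v)            # truncate toward zero, as in C
    return (-q if (u < 0) != (v < 0) else q) & MASK128
```

[The truncation toward zero matters: Python's `//` floors, so the sign is applied after an unsigned divide to match the C semantics the helper relies on.]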
diff --git a/target/riscv/insn_trans/trans_rvm.c.inc b/target/riscv/insn_trans/trans_rvm.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvm.c.inc
+++ b/target/riscv/insn_trans/trans_rvm.c.inc
@@ -XXX,XX +XXX,XX @@
  * this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+static void gen_mulhu_i128(TCGv r2, TCGv r3, TCGv al, TCGv ah, TCGv bl, TCGv bh)
+{
+    TCGv tmpl = tcg_temp_new();
+    TCGv tmph = tcg_temp_new();
+    TCGv r0 = tcg_temp_new();
+    TCGv r1 = tcg_temp_new();
+    TCGv zero = tcg_constant_tl(0);
+
+    tcg_gen_mulu2_tl(r0, r1, al, bl);
+
+    tcg_gen_mulu2_tl(tmpl, tmph, al, bh);
+    tcg_gen_add2_tl(r1, r2, r1, zero, tmpl, tmph);
+    tcg_gen_mulu2_tl(tmpl, tmph, ah, bl);
+    tcg_gen_add2_tl(r1, tmph, r1, r2, tmpl, tmph);
+    /* Overflow detection into r3 */
+    tcg_gen_setcond_tl(TCG_COND_LTU, r3, tmph, r2);
+
+    tcg_gen_mov_tl(r2, tmph);
+
+    tcg_gen_mulu2_tl(tmpl, tmph, ah, bh);
+    tcg_gen_add2_tl(r2, r3, r2, r3, tmpl, tmph);
+
+    tcg_temp_free(tmpl);
+    tcg_temp_free(tmph);
+}
+
+static void gen_mul_i128(TCGv rl, TCGv rh,
+                         TCGv rs1l, TCGv rs1h, TCGv rs2l, TCGv rs2h)
+{
+    TCGv tmpl = tcg_temp_new();
+    TCGv tmph = tcg_temp_new();
+    TCGv tmpx = tcg_temp_new();
+    TCGv zero = tcg_constant_tl(0);
+
+    tcg_gen_mulu2_tl(rl, rh, rs1l, rs2l);
+    tcg_gen_mulu2_tl(tmpl, tmph, rs1l, rs2h);
+    tcg_gen_add2_tl(rh, tmpx, rh, zero, tmpl, tmph);
+    tcg_gen_mulu2_tl(tmpl, tmph, rs1h, rs2l);
+    tcg_gen_add2_tl(rh, tmph, rh, tmpx, tmpl, tmph);
+
+    tcg_temp_free(tmpl);
+    tcg_temp_free(tmph);
+    tcg_temp_free(tmpx);
+}
 
 static bool trans_mul(DisasContext *ctx, arg_mul *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith(ctx, a, EXT_NONE, tcg_gen_mul_tl, NULL);
+    return gen_arith(ctx, a, EXT_NONE, tcg_gen_mul_tl, gen_mul_i128);
+}
+
+static void gen_mulh_i128(TCGv rl, TCGv rh,
+                          TCGv rs1l, TCGv rs1h, TCGv rs2l, TCGv rs2h)
+{
+    TCGv t0l = tcg_temp_new();
+    TCGv t0h = tcg_temp_new();
+    TCGv t1l = tcg_temp_new();
+    TCGv t1h = tcg_temp_new();
+
+    gen_mulhu_i128(rl, rh, rs1l, rs1h, rs2l, rs2h);
+    tcg_gen_sari_tl(t0h, rs1h, 63);
+    tcg_gen_and_tl(t0l, t0h, rs2l);
+    tcg_gen_and_tl(t0h, t0h, rs2h);
+    tcg_gen_sari_tl(t1h, rs2h, 63);
+    tcg_gen_and_tl(t1l, t1h, rs1l);
+    tcg_gen_and_tl(t1h, t1h, rs1h);
+    tcg_gen_sub2_tl(t0l, t0h, rl, rh, t0l, t0h);
+    tcg_gen_sub2_tl(rl, rh, t0l, t0h, t1l, t1h);
+
+    tcg_temp_free(t0l);
+    tcg_temp_free(t0h);
+    tcg_temp_free(t1l);
+    tcg_temp_free(t1h);
 }
 
 static void gen_mulh(TCGv ret, TCGv s1, TCGv s2)
@@ -XXX,XX +XXX,XX @@ static void gen_mulh_w(TCGv ret, TCGv s1, TCGv s2)
 static bool trans_mulh(DisasContext *ctx, arg_mulh *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith_per_ol(ctx, a, EXT_SIGN, gen_mulh, gen_mulh_w, NULL);
+    return gen_arith_per_ol(ctx, a, EXT_SIGN, gen_mulh, gen_mulh_w,
+                            gen_mulh_i128);
+}
+
+static void gen_mulhsu_i128(TCGv rl, TCGv rh,
+                            TCGv rs1l, TCGv rs1h, TCGv rs2l, TCGv rs2h)
+{
+
+    TCGv t0l = tcg_temp_new();
+    TCGv t0h = tcg_temp_new();
+
+    gen_mulhu_i128(rl, rh, rs1l, rs1h, rs2l, rs2h);
+    tcg_gen_sari_tl(t0h, rs1h, 63);
+    tcg_gen_and_tl(t0l, t0h, rs2l);
+    tcg_gen_and_tl(t0h, t0h, rs2h);
+    tcg_gen_sub2_tl(rl, rh, rl, rh, t0l, t0h);
+
+    tcg_temp_free(t0l);
+    tcg_temp_free(t0h);
 }
 
 static void gen_mulhsu(TCGv ret, TCGv arg1, TCGv arg2)
@@ -XXX,XX +XXX,XX @@ static void gen_mulhsu_w(TCGv ret, TCGv arg1, TCGv arg2)
 static bool trans_mulhsu(DisasContext *ctx, arg_mulhsu *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith_per_ol(ctx, a, EXT_NONE, gen_mulhsu, gen_mulhsu_w, NULL);
+    return gen_arith_per_ol(ctx, a, EXT_NONE, gen_mulhsu, gen_mulhsu_w,
+                            gen_mulhsu_i128);
 }
 
 static void gen_mulhu(TCGv ret, TCGv s1, TCGv s2)
@@ -XXX,XX +XXX,XX @@ static bool trans_mulhu(DisasContext *ctx, arg_mulhu *a)
 {
     REQUIRE_EXT(ctx, RVM);
     /* gen_mulh_w works for either sign as input. */
-    return gen_arith_per_ol(ctx, a, EXT_ZERO, gen_mulhu, gen_mulh_w, NULL);
+    return gen_arith_per_ol(ctx, a, EXT_ZERO, gen_mulhu, gen_mulh_w,
+                            gen_mulhu_i128);
+}
+
+static void gen_div_i128(TCGv rdl, TCGv rdh,
+                         TCGv rs1l, TCGv rs1h, TCGv rs2l, TCGv rs2h)
+{
+    gen_helper_divs_i128(rdl, cpu_env, rs1l, rs1h, rs2l, rs2h);
+    tcg_gen_ld_tl(rdh, cpu_env, offsetof(CPURISCVState, retxh));
 }
 
 static void gen_div(TCGv ret, TCGv source1, TCGv source2)
@@ -XXX,XX +XXX,XX @@ static void gen_div(TCGv ret, TCGv source1, TCGv source2)
 static bool trans_div(DisasContext *ctx, arg_div *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith(ctx, a, EXT_SIGN, gen_div, NULL);
+    return gen_arith(ctx, a, EXT_SIGN, gen_div, gen_div_i128);
+}
+
+static void gen_divu_i128(TCGv rdl, TCGv rdh,
+                          TCGv rs1l, TCGv rs1h, TCGv rs2l, TCGv rs2h)
+{
+    gen_helper_divu_i128(rdl, cpu_env, rs1l, rs1h, rs2l, rs2h);
+    tcg_gen_ld_tl(rdh, cpu_env, offsetof(CPURISCVState, retxh));
 }
 
 static void gen_divu(TCGv ret, TCGv source1, TCGv source2)
@@ -XXX,XX +XXX,XX @@ static void gen_divu(TCGv ret, TCGv source1, TCGv source2)
 static bool trans_divu(DisasContext *ctx, arg_divu *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith(ctx, a, EXT_ZERO, gen_divu, NULL);
+    return gen_arith(ctx, a, EXT_ZERO, gen_divu, gen_divu_i128);
+}
+
+static void gen_rem_i128(TCGv rdl, TCGv rdh,
+                         TCGv rs1l, TCGv rs1h, TCGv rs2l, TCGv rs2h)
+{
+    gen_helper_rems_i128(rdl, cpu_env, rs1l, rs1h, rs2l, rs2h);
+    tcg_gen_ld_tl(rdh, cpu_env, offsetof(CPURISCVState, retxh));
 }
 
 static void gen_rem(TCGv ret, TCGv source1, TCGv source2)
@@ -XXX,XX +XXX,XX @@ static void gen_rem(TCGv ret, TCGv source1, TCGv source2)
 static bool trans_rem(DisasContext *ctx, arg_rem *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith(ctx, a, EXT_SIGN, gen_rem, NULL);
+    return gen_arith(ctx, a, EXT_SIGN, gen_rem, gen_rem_i128);
+}
+
+static void gen_remu_i128(TCGv rdl, TCGv rdh,
+                          TCGv rs1l, TCGv rs1h, TCGv rs2l, TCGv rs2h)
+{
+    gen_helper_remu_i128(rdl, cpu_env, rs1l, rs1h, rs2l, rs2h);
+    tcg_gen_ld_tl(rdh, cpu_env, offsetof(CPURISCVState, retxh));
 }
 
 static void gen_remu(TCGv ret, TCGv source1, TCGv source2)
@@ -XXX,XX +XXX,XX @@ static void gen_remu(TCGv ret, TCGv source1, TCGv source2)
 static bool trans_remu(DisasContext *ctx, arg_remu *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith(ctx, a, EXT_ZERO, gen_remu, NULL);
+    return gen_arith(ctx, a, EXT_ZERO, gen_remu, gen_remu_i128);
 }
 
 static bool trans_mulw(DisasContext *ctx, arg_mulw *a)
 {
-    REQUIRE_64BIT(ctx);
+    REQUIRE_64_OR_128BIT(ctx);
     REQUIRE_EXT(ctx, RVM);
     ctx->ol = MXL_RV32;
     return gen_arith(ctx, a, EXT_NONE, tcg_gen_mul_tl, NULL);
@@ -XXX,XX +XXX,XX @@ static bool trans_mulw(DisasContext *ctx, arg_mulw *a)
 
 static bool trans_divw(DisasContext *ctx, arg_divw *a)
 {
-    REQUIRE_64BIT(ctx);
+    REQUIRE_64_OR_128BIT(ctx);
     REQUIRE_EXT(ctx, RVM);
     ctx->ol = MXL_RV32;
     return gen_arith(ctx, a, EXT_SIGN, gen_div, NULL);
@@ -XXX,XX +XXX,XX @@ static bool trans_divw(DisasContext *ctx, arg_divw *a)
 
 static bool trans_divuw(DisasContext *ctx, arg_divuw *a)
 {
-    REQUIRE_64BIT(ctx);
+    REQUIRE_64_OR_128BIT(ctx);
     REQUIRE_EXT(ctx, RVM);
     ctx->ol = MXL_RV32;
     return gen_arith(ctx, a, EXT_ZERO, gen_divu, NULL);
@@ -XXX,XX +XXX,XX @@ static bool trans_divuw(DisasContext *ctx, arg_divuw *a)
 
 static bool trans_remw(DisasContext *ctx, arg_remw *a)
 {
-    REQUIRE_64BIT(ctx);
+    REQUIRE_64_OR_128BIT(ctx);
     REQUIRE_EXT(ctx, RVM);
     ctx->ol = MXL_RV32;
     return gen_arith(ctx, a, EXT_SIGN, gen_rem, NULL);
@@ -XXX,XX +XXX,XX @@ static bool trans_remw(DisasContext *ctx, arg_remw *a)
 
 static bool trans_remuw(DisasContext *ctx, arg_remuw *a)
 {
-    REQUIRE_64BIT(ctx);
+    REQUIRE_64_OR_128BIT(ctx);
     REQUIRE_EXT(ctx, RVM);
     ctx->ol = MXL_RV32;
     return gen_arith(ctx, a, EXT_ZERO, gen_remu, NULL);
 }
+
+static bool trans_muld(DisasContext *ctx, arg_muld *a)
+{
+    REQUIRE_128BIT(ctx);
+    REQUIRE_EXT(ctx, RVM);
+    ctx->ol = MXL_RV64;
+    return gen_arith(ctx, a, EXT_SIGN, tcg_gen_mul_tl, NULL);
+}
+
+static bool trans_divd(DisasContext *ctx, arg_divd *a)
+{
+    REQUIRE_128BIT(ctx);
+    REQUIRE_EXT(ctx, RVM);
+    ctx->ol = MXL_RV64;
+    return gen_arith(ctx, a, EXT_SIGN, gen_div, NULL);
+}
+
+static bool trans_divud(DisasContext *ctx, arg_divud *a)
+{
+    REQUIRE_128BIT(ctx);
+    REQUIRE_EXT(ctx, RVM);
+    ctx->ol = MXL_RV64;
+    return gen_arith(ctx, a, EXT_ZERO, gen_divu, NULL);
+}
+
+static bool trans_remd(DisasContext *ctx, arg_remd *a)
+{
+    REQUIRE_128BIT(ctx);
+    REQUIRE_EXT(ctx, RVM);
+    ctx->ol = MXL_RV64;
+    return gen_arith(ctx, a, EXT_SIGN, gen_rem, NULL);
+}
+
+static bool trans_remud(DisasContext *ctx, arg_remud *a)
+{
+    REQUIRE_128BIT(ctx);
+    REQUIRE_EXT(ctx, RVM);
+    ctx->ol = MXL_RV64;
+    return gen_arith(ctx, a, EXT_ZERO, gen_remu, NULL);
+}
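[Editor's note: gen_mulhu_i128 above assembles the high 128 bits of a 128x128-bit unsigned multiply from four 64-bit partial products with explicit carry propagation. The same limb decomposition in Python, checked against exact arithmetic (illustrative sketch; the function name is not QEMU's):]

```python
M64 = (1 << 64) - 1

def mulhu_i128(a: int, b: int) -> int:
    """High 128 bits of an unsigned 128x128-bit product, via 64-bit limbs."""
    al, ah = a & M64, (a >> 64) & M64
    bl, bh = b & M64, (b >> 64) & M64
    lo = al * bl            # partial products, each up to 128 bits wide
    mid1 = al * bh
    mid2 = ah * bl
    hi = ah * bh
    # Carry out of summing the middle limbs with the upper half of `lo`.
    carry = ((lo >> 64) + (mid1 & M64) + (mid2 & M64)) >> 64
    return (hi + (mid1 >> 64) + (mid2 >> 64) + carry) & ((1 << 128) - 1)
```

[This is the classic schoolbook decomposition a*b = hi*2^128 + (mid1+mid2)*2^64 + lo; the TCG version expresses the same carries with add2/setcond pairs.]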
diff --git a/target/riscv/meson.build b/target/riscv/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/meson.build
+++ b/target/riscv/meson.build
@@ -XXX,XX +XXX,XX @@ riscv_ss.add(files(
   'vector_helper.c',
   'bitmanip_helper.c',
   'translate.c',
+  'm128_helper.c'
 ))
 
 riscv_softmmu_ss = ss.source_set()
-- 
2.31.1
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>

Add the high parts of a very minimal set of CSRs.

Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Co-authored-by: Fabien Portas <fabien.portas@grenoble-inp.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106210108.138226-16-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.h     | 4 ++++
 target/riscv/machine.c | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@ struct CPURISCVState {
     target_ulong hgatp;
     uint64_t htimedelta;
 
+    /* Upper 64-bits of 128-bit CSRs */
+    uint64_t mscratchh;
+    uint64_t sscratchh;
+
     /* Virtual CSRs */
     /*
      * For RV32 this is 32-bit vsstatus and 32-bit vsstatush.
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/machine.c
+++ b/target/riscv/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_rv128 = {
     .needed = rv128_needed,
     .fields = (VMStateField[]) {
         VMSTATE_UINTTL_ARRAY(env.gprh, RISCVCPU, 32),
+        VMSTATE_UINT64(env.mscratchh, RISCVCPU),
+        VMSTATE_UINT64(env.sscratchh, RISCVCPU),
         VMSTATE_END_OF_LIST()
     }
 };
-- 
2.31.1
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>

Given the side effects they have, the csr instructions are realized as
helpers. We extend this existing infrastructure for 128-bit sized CSRs.
We return 128-bit values using the same approach as for div/rem.
These helpers all call a unique function that currently falls back
to the 64-bit version.
The trans_csrxx functions supporting 128-bit are yet to be implemented.

Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Co-authored-by: Fabien Portas <fabien.portas@grenoble-inp.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106210108.138226-17-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.h       |  5 +++++
 target/riscv/helper.h    |  3 +++
 target/riscv/csr.c       | 17 ++++++++++++++++
 target/riscv/op_helper.c | 44 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 69 insertions(+)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@
 #include "exec/cpu-defs.h"
 #include "fpu/softfloat-types.h"
 #include "qom/object.h"
+#include "qemu/int128.h"
 #include "cpu_bits.h"
 
 #define TCG_GUEST_DEFAULT_MO 0
@@ -XXX,XX +XXX,XX @@ typedef RISCVException (*riscv_csr_op_fn)(CPURISCVState *env, int csrno,
                                           target_ulong new_value,
                                           target_ulong write_mask);
 
+RISCVException riscv_csrrw_i128(CPURISCVState *env, int csrno,
+                                Int128 *ret_value,
+                                Int128 new_value, Int128 write_mask);
+
 typedef struct {
     const char *name;
     riscv_csr_predicate_fn predicate;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_1(fclass_h, TCG_CALL_NO_RWG_SE, tl, i64)
 DEF_HELPER_2(csrr, tl, env, int)
 DEF_HELPER_3(csrw, void, env, int, tl)
 DEF_HELPER_4(csrrw, tl, env, int, tl, tl)
+DEF_HELPER_2(csrr_i128, tl, env, int)
+DEF_HELPER_4(csrw_i128, void, env, int, tl, tl)
+DEF_HELPER_6(csrrw_i128, tl, env, int, tl, tl, tl, tl)
 #ifndef CONFIG_USER_ONLY
 DEF_HELPER_2(sret, tl, env, tl)
 DEF_HELPER_2(mret, tl, env, tl)
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ RISCVException riscv_csrrw(CPURISCVState *env, int csrno,
     return RISCV_EXCP_NONE;
 }
 
+RISCVException riscv_csrrw_i128(CPURISCVState *env, int csrno,
+                                Int128 *ret_value,
+                                Int128 new_value, Int128 write_mask)
+{
+    /* fall back to 64-bit version for now */
+    target_ulong ret_64;
+    RISCVException ret = riscv_csrrw(env, csrno, &ret_64,
+                                     int128_getlo(new_value),
+                                     int128_getlo(write_mask));
+
+    if (ret_value) {
+        *ret_value = int128_make64(ret_64);
+    }
+
+    return ret;
+}
+
 /*
  * Debugger support. If not in user mode, set env->debugger before the
  * riscv_csrrw call and clear it after the call.
diff --git a/target/riscv/op_helper.c b/target/riscv/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/op_helper.c
+++ b/target/riscv/op_helper.c
@@ -XXX,XX +XXX,XX @@ target_ulong helper_csrrw(CPURISCVState *env, int csr,
     return val;
 }
 
+target_ulong helper_csrr_i128(CPURISCVState *env, int csr)
+{
+    Int128 rv = int128_zero();
+    RISCVException ret = riscv_csrrw_i128(env, csr, &rv,
+                                          int128_zero(),
+                                          int128_zero());
+
+    if (ret != RISCV_EXCP_NONE) {
+        riscv_raise_exception(env, ret, GETPC());
+    }
+
+    env->retxh = int128_gethi(rv);
+    return int128_getlo(rv);
+}
+
+void helper_csrw_i128(CPURISCVState *env, int csr,
+                      target_ulong srcl, target_ulong srch)
+{
+    RISCVException ret = riscv_csrrw_i128(env, csr, NULL,
+                                          int128_make128(srcl, srch),
+                                          UINT128_MAX);
+
+    if (ret != RISCV_EXCP_NONE) {
+        riscv_raise_exception(env, ret, GETPC());
+    }
+}
+
+target_ulong helper_csrrw_i128(CPURISCVState *env, int csr,
+                               target_ulong srcl, target_ulong srch,
+                               target_ulong maskl, target_ulong maskh)
+{
+    Int128 rv = int128_zero();
+    RISCVException ret = riscv_csrrw_i128(env, csr, &rv,
+                                          int128_make128(srcl, srch),
+                                          int128_make128(maskl, maskh));
+
+    if (ret != RISCV_EXCP_NONE) {
+        riscv_raise_exception(env, ret, GETPC());
+    }
+
+    env->retxh = int128_gethi(rv);
+    return int128_getlo(rv);
+}
+
 #ifndef CONFIG_USER_ONLY
 
 target_ulong helper_sret(CPURISCVState *env, target_ulong cpu_pc_deb)
-- 
2.31.1
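[Editor's note: riscv_csrrw_i128 above follows the same read-modify-write contract as riscv_csrrw: bits selected by write_mask are taken from new_value, the remaining bits are preserved, and the old value is returned for rd. A Python sketch of that contract (illustrative only, not the QEMU function):]

```python
def csr_read_modify_write(old: int, new_value: int, write_mask: int,
                          width: int = 128):
    """Read-modify-write a CSR: returns (old_value, updated_csr).
    Masked bits come from new_value; unmasked bits are preserved."""
    full = (1 << width) - 1
    updated = ((old & ~write_mask) | (new_value & write_mask)) & full
    return old & full, updated
```

[A write mask of all ones replaces the whole CSR (plain csrrw), while a zero mask degenerates into a pure read, which is why helper_csrr_i128 passes two zero Int128 arguments.]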
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>

As opposed to the gen_arith and gen_shift generation helpers, the csr insns
do not have a common prototype, so the choice between generating 32/64-bit
and 128-bit helper calls is made in the trans_csrxx functions.

Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Co-authored-by: Fabien Portas <fabien.portas@grenoble-inp.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106210108.138226-18-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvi.c.inc | 201 +++++++++++++++++++-----
 1 file changed, 158 insertions(+), 43 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvi.c.inc
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
@@ -XXX,XX +XXX,XX @@ static bool do_csrrw(DisasContext *ctx, int rd, int rc, TCGv src, TCGv mask)
     return do_csr_post(ctx);
 }
 
-static bool trans_csrrw(DisasContext *ctx, arg_csrrw *a)
+static bool do_csrr_i128(DisasContext *ctx, int rd, int rc)
 {
-    TCGv src = get_gpr(ctx, a->rs1, EXT_NONE);
+    TCGv destl = dest_gpr(ctx, rd);
+    TCGv desth = dest_gprh(ctx, rd);
+    TCGv_i32 csr = tcg_constant_i32(rc);
 
-    /*
-     * If rd == 0, the insn shall not read the csr, nor cause any of the
-     * side effects that might occur on a csr read.
-     */
-    if (a->rd == 0) {
-        return do_csrw(ctx, a->csr, src);
+    if (tb_cflags(ctx->base.tb) & CF_USE_ICOUNT) {
+        gen_io_start();
     }
+    gen_helper_csrr_i128(destl, cpu_env, csr);
+    tcg_gen_ld_tl(desth, cpu_env, offsetof(CPURISCVState, retxh));
+    gen_set_gpr128(ctx, rd, destl, desth);
+    return do_csr_post(ctx);
+}
+
+static bool do_csrw_i128(DisasContext *ctx, int rc, TCGv srcl, TCGv srch)
+{
+    TCGv_i32 csr = tcg_constant_i32(rc);
+
+    if (tb_cflags(ctx->base.tb) & CF_USE_ICOUNT) {
+        gen_io_start();
+    }
+    gen_helper_csrw_i128(cpu_env, csr, srcl, srch);
+    return do_csr_post(ctx);
+}
 
-    TCGv mask = tcg_constant_tl(-1);
-    return do_csrrw(ctx, a->rd, a->csr, src, mask);
+static bool do_csrrw_i128(DisasContext *ctx, int rd, int rc,
+                          TCGv srcl, TCGv srch, TCGv maskl, TCGv maskh)
+{
+    TCGv destl = dest_gpr(ctx, rd);
+    TCGv desth = dest_gprh(ctx, rd);
+    TCGv_i32 csr = tcg_constant_i32(rc);
+
+    if (tb_cflags(ctx->base.tb) & CF_USE_ICOUNT) {
+        gen_io_start();
+    }
+    gen_helper_csrrw_i128(destl, cpu_env, csr, srcl, srch, maskl, maskh);
+    tcg_gen_ld_tl(desth, cpu_env, offsetof(CPURISCVState, retxh));
+    gen_set_gpr128(ctx, rd, destl, desth);
+    return do_csr_post(ctx);
+}
+
+static bool trans_csrrw(DisasContext *ctx, arg_csrrw *a)
+{
+    if (get_xl(ctx) < MXL_RV128) {
+        TCGv src = get_gpr(ctx, a->rs1, EXT_NONE);
+
+        /*
+         * If rd == 0, the insn shall not read the csr, nor cause any of the
+         * side effects that might occur on a csr read.
+         */
+        if (a->rd == 0) {
+            return do_csrw(ctx, a->csr, src);
+        }
+
+        TCGv mask = tcg_constant_tl(-1);
+        return do_csrrw(ctx, a->rd, a->csr, src, mask);
+    } else {
+        TCGv srcl = get_gpr(ctx, a->rs1, EXT_NONE);
+        TCGv srch = get_gprh(ctx, a->rs1);
+
+        /*
+         * If rd == 0, the insn shall not read the csr, nor cause any of the
+         * side effects that might occur on a csr read.
+         */
+        if (a->rd == 0) {
+            return do_csrw_i128(ctx, a->csr, srcl, srch);
+        }
+
+        TCGv mask = tcg_constant_tl(-1);
+        return do_csrrw_i128(ctx, a->rd, a->csr, srcl, srch, mask, mask);
+    }
 }
 
 static bool trans_csrrs(DisasContext *ctx, arg_csrrs *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_csrrs(DisasContext *ctx, arg_csrrs *a)
      * a zero value, the instruction will still attempt to write the
      * unmodified value back to the csr and will cause side effects.
      */
-    if (a->rs1 == 0) {
-        return do_csrr(ctx, a->rd, a->csr);
-    }
+    if (get_xl(ctx) < MXL_RV128) {
+        if (a->rs1 == 0) {
+            return do_csrr(ctx, a->rd, a->csr);
+        }
 
-    TCGv ones = tcg_constant_tl(-1);
-    TCGv mask = get_gpr(ctx, a->rs1, EXT_ZERO);
-    return do_csrrw(ctx, a->rd, a->csr, ones, mask);
+        TCGv ones = tcg_constant_tl(-1);
+        TCGv mask = get_gpr(ctx, a->rs1, EXT_ZERO);
+        return do_csrrw(ctx, a->rd, a->csr, ones, mask);
+    } else {
+        if (a->rs1 == 0) {
+            return do_csrr_i128(ctx, a->rd, a->csr);
+        }
+
+        TCGv ones = tcg_constant_tl(-1);
+        TCGv maskl = get_gpr(ctx, a->rs1, EXT_ZERO);
+        TCGv maskh = get_gprh(ctx, a->rs1);
+        return do_csrrw_i128(ctx, a->rd, a->csr, ones, ones, maskl, maskh);
+    }
 }
 
 static bool trans_csrrc(DisasContext *ctx, arg_csrrc *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_csrrc(DisasContext *ctx, arg_csrrc *a)
      * a zero value, the instruction will still attempt to write the
      * unmodified value back to the csr and will cause side effects.
      */
-    if (a->rs1 == 0) {
-        return do_csrr(ctx, a->rd, a->csr);
-    }
+    if (get_xl(ctx) < MXL_RV128) {
+        if (a->rs1 == 0) {
+            return do_csrr(ctx, a->rd, a->csr);
+        }
 
-    TCGv mask = get_gpr(ctx, a->rs1, EXT_ZERO);
-    return do_csrrw(ctx, a->rd, a->csr, ctx->zero, mask);
+        TCGv mask = get_gpr(ctx, a->rs1, EXT_ZERO);
+        return do_csrrw(ctx, a->rd, a->csr, ctx->zero, mask);
+    } else {
+        if (a->rs1 == 0) {
+            return do_csrr_i128(ctx, a->rd, a->csr);
+        }
+
+        TCGv maskl = get_gpr(ctx, a->rs1, EXT_ZERO);
+        TCGv maskh = get_gprh(ctx, a->rs1);
+        return do_csrrw_i128(ctx, a->rd, a->csr,
+                             ctx->zero, ctx->zero, maskl, maskh);
+    }
 }
 
 static bool trans_csrrwi(DisasContext *ctx, arg_csrrwi *a)
 {
-    TCGv src = tcg_constant_tl(a->rs1);
+    if (get_xl(ctx) < MXL_RV128) {
+        TCGv src = tcg_constant_tl(a->rs1);
 
-    /*
-     * If rd == 0, the insn shall not read the csr, nor cause any of the
-     * side effects that might occur on a csr read.
-     */
-    if (a->rd == 0) {
-        return do_csrw(ctx, a->csr, src);
-    }
+        /*
+         * If rd == 0, the insn shall not read the csr, nor cause any of the
+         * side effects that might occur on a csr read.
+         */
+        if (a->rd == 0) {
+            return do_csrw(ctx, a->csr, src);
+        }
 
-    TCGv mask = tcg_constant_tl(-1);
- return do_csrrw(ctx, a->rd, a->csr, src, mask);
192
+ TCGv mask = tcg_constant_tl(-1);
193
+ return do_csrrw(ctx, a->rd, a->csr, src, mask);
194
+ } else {
195
+ TCGv src = tcg_constant_tl(a->rs1);
196
+
197
+ /*
198
+ * If rd == 0, the insn shall not read the csr, nor cause any of the
199
+ * side effects that might occur on a csr read.
200
+ */
201
+ if (a->rd == 0) {
202
+ return do_csrw_i128(ctx, a->csr, src, ctx->zero);
203
+ }
204
+
205
+ TCGv mask = tcg_constant_tl(-1);
206
+ return do_csrrw_i128(ctx, a->rd, a->csr, src, ctx->zero, mask, mask);
207
+ }
208
}
209
210
static bool trans_csrrsi(DisasContext *ctx, arg_csrrsi *a)
211
@@ -XXX,XX +XXX,XX @@ static bool trans_csrrsi(DisasContext *ctx, arg_csrrsi *a)
212
* a zero value, the instruction will still attempt to write the
213
* unmodified value back to the csr and will cause side effects.
214
*/
215
- if (a->rs1 == 0) {
216
- return do_csrr(ctx, a->rd, a->csr);
217
- }
218
+ if (get_xl(ctx) < MXL_RV128) {
219
+ if (a->rs1 == 0) {
220
+ return do_csrr(ctx, a->rd, a->csr);
221
+ }
222
+
223
+ TCGv ones = tcg_constant_tl(-1);
224
+ TCGv mask = tcg_constant_tl(a->rs1);
225
+ return do_csrrw(ctx, a->rd, a->csr, ones, mask);
226
+ } else {
227
+ if (a->rs1 == 0) {
228
+ return do_csrr_i128(ctx, a->rd, a->csr);
229
+ }
230
231
- TCGv ones = tcg_constant_tl(-1);
232
- TCGv mask = tcg_constant_tl(a->rs1);
233
- return do_csrrw(ctx, a->rd, a->csr, ones, mask);
234
+ TCGv ones = tcg_constant_tl(-1);
235
+ TCGv mask = tcg_constant_tl(a->rs1);
236
+ return do_csrrw_i128(ctx, a->rd, a->csr, ones, ones, mask, ctx->zero);
237
+ }
238
}
239
240
-static bool trans_csrrci(DisasContext *ctx, arg_csrrci *a)
241
+static bool trans_csrrci(DisasContext *ctx, arg_csrrci * a)
242
{
243
/*
244
* If rs1 == 0, the insn shall not write to the csr at all, nor
245
@@ -XXX,XX +XXX,XX @@ static bool trans_csrrci(DisasContext *ctx, arg_csrrci *a)
246
* a zero value, the instruction will still attempt to write the
247
* unmodified value back to the csr and will cause side effects.
248
*/
249
- if (a->rs1 == 0) {
250
- return do_csrr(ctx, a->rd, a->csr);
251
- }
252
+ if (get_xl(ctx) < MXL_RV128) {
253
+ if (a->rs1 == 0) {
254
+ return do_csrr(ctx, a->rd, a->csr);
255
+ }
256
257
- TCGv mask = tcg_constant_tl(a->rs1);
258
- return do_csrrw(ctx, a->rd, a->csr, ctx->zero, mask);
259
+ TCGv mask = tcg_constant_tl(a->rs1);
260
+ return do_csrrw(ctx, a->rd, a->csr, ctx->zero, mask);
261
+ } else {
262
+ if (a->rs1 == 0) {
263
+ return do_csrr_i128(ctx, a->rd, a->csr);
264
+ }
265
+
266
+ TCGv mask = tcg_constant_tl(a->rs1);
267
+ return do_csrrw_i128(ctx, a->rd, a->csr,
268
+ ctx->zero, ctx->zero, mask, ctx->zero);
269
+ }
270
}
271
--
272
2.31.1
273
274
diff view generated by jsdifflib
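All of the trans_csrr* helpers above reduce to one read-modify-write primitive whose write mask encodes the special cases: csrrw passes an all-ones mask, csrrs sets the bits selected by rs1, csrrc clears them, and rs1 == 0 (a zero mask) suppresses the write entirely. A minimal model of that merge, with a hypothetical helper name rather than QEMU's actual API:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical model of the CSR read-modify-write the translator emits:
 * the CSR becomes (old & ~mask) | (src & mask), and the old value is
 * returned. csrrw: mask = all ones. csrrs: src = all ones, mask = rs1
 * value. csrrc: src = 0, mask = rs1 value. A zero mask performs no
 * write, matching the rs1 == 0 special case.
 */
static uint64_t csr_rmw(uint64_t *csr, uint64_t src, uint64_t mask)
{
    uint64_t old = *csr;

    if (mask) {
        *csr = (old & ~mask) | (src & mask);
    }
    return old;
}
```

The rd == 0 case (skip the read and its side effects) is handled before this merge in the translator, which is why it needs a separate do_csrw path rather than a mask trick.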
From: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>

The csrs are accessed through function pointers: we add 128-bit read
operations in the table for three csrs (writes fallback to the
64-bit version as the upper 64-bit information is handled elsewhere):
- misa, as mxl is needed for proper operation,
- mstatus and sstatus, to return sd
In addition, we also add read and write accesses to the machine and
supervisor scratch registers.

Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Co-authored-by: Fabien Portas <fabien.portas@grenoble-inp.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106210108.138226-19-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.h      |   7 ++
 target/riscv/cpu_bits.h |   3 +
 target/riscv/csr.c      | 195 +++++++++++++++++++++++++++++++++-------
 3 files changed, 175 insertions(+), 30 deletions(-)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@ RISCVException riscv_csrrw_i128(CPURISCVState *env, int csrno,
                                 Int128 *ret_value,
                                 Int128 new_value, Int128 write_mask);
 
+typedef RISCVException (*riscv_csr_read128_fn)(CPURISCVState *env, int csrno,
+                                               Int128 *ret_value);
+typedef RISCVException (*riscv_csr_write128_fn)(CPURISCVState *env, int csrno,
+                                                Int128 new_value);
+
 typedef struct {
     const char *name;
     riscv_csr_predicate_fn predicate;
     riscv_csr_read_fn read;
     riscv_csr_write_fn write;
     riscv_csr_op_fn op;
+    riscv_csr_read128_fn read128;
+    riscv_csr_write128_fn write128;
 } riscv_csr_operations;
 
 /* CSR function table constants */
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_bits.h
+++ b/target/riscv/cpu_bits.h
@@ -XXX,XX +XXX,XX @@
 
 #define MSTATUS32_SD 0x80000000
 #define MSTATUS64_SD 0x8000000000000000ULL
+#define MSTATUSH128_SD 0x8000000000000000ULL
 
 #define MISA32_MXL 0xC0000000
 #define MISA64_MXL 0xC000000000000000ULL
@@ -XXX,XX +XXX,XX @@ typedef enum {
 #define SSTATUS_SUM 0x00040000 /* since: priv-1.10 */
 #define SSTATUS_MXR 0x00080000
 
+#define SSTATUS64_UXL 0x0000000300000000ULL
+
 #define SSTATUS32_SD 0x80000000
 #define SSTATUS64_SD 0x8000000000000000ULL
 
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static const target_ulong vs_delegable_excps = DELEGABLE_EXCPS &
     (1ULL << (RISCV_EXCP_STORE_GUEST_AMO_ACCESS_FAULT)));
 static const target_ulong sstatus_v1_10_mask = SSTATUS_SIE | SSTATUS_SPIE |
     SSTATUS_UIE | SSTATUS_UPIE | SSTATUS_SPP | SSTATUS_FS | SSTATUS_XS |
-    SSTATUS_SUM | SSTATUS_MXR | SSTATUS_VS;
+    SSTATUS_SUM | SSTATUS_MXR | SSTATUS_VS | (target_ulong)SSTATUS64_UXL;
 static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP;
 static const target_ulong hip_writable_mask = MIP_VSSIP;
 static const target_ulong hvip_writable_mask = MIP_VSSIP | MIP_VSTIP | MIP_VSEIP;
@@ -XXX,XX +XXX,XX @@ static uint64_t add_status_sd(RISCVMXL xl, uint64_t status)
         return status | MSTATUS32_SD;
     case MXL_RV64:
         return status | MSTATUS64_SD;
+    case MXL_RV128:
+        return MSTATUSH128_SD;
     default:
         g_assert_not_reached();
     }
@@ -XXX,XX +XXX,XX @@ static RISCVException write_mstatus(CPURISCVState *env, int csrno,
 
     mstatus = (mstatus & ~mask) | (val & mask);
 
-    if (riscv_cpu_mxl(env) == MXL_RV64) {
+    RISCVMXL xl = riscv_cpu_mxl(env);
+    if (xl > MXL_RV32) {
         /* SXL and UXL fields are for now read only */
-        mstatus = set_field(mstatus, MSTATUS64_SXL, MXL_RV64);
-        mstatus = set_field(mstatus, MSTATUS64_UXL, MXL_RV64);
+        mstatus = set_field(mstatus, MSTATUS64_SXL, xl);
+        mstatus = set_field(mstatus, MSTATUS64_UXL, xl);
     }
     env->mstatus = mstatus;
 
@@ -XXX,XX +XXX,XX @@ static RISCVException write_mstatush(CPURISCVState *env, int csrno,
     return RISCV_EXCP_NONE;
 }
 
+static RISCVException read_mstatus_i128(CPURISCVState *env, int csrno,
+                                        Int128 *val)
+{
+    *val = int128_make128(env->mstatus, add_status_sd(MXL_RV128, env->mstatus));
+    return RISCV_EXCP_NONE;
+}
+
+static RISCVException read_misa_i128(CPURISCVState *env, int csrno,
+                                     Int128 *val)
+{
+    *val = int128_make128(env->misa_ext, (uint64_t)MXL_RV128 << 62);
+    return RISCV_EXCP_NONE;
+}
+
 static RISCVException read_misa(CPURISCVState *env, int csrno,
                                 target_ulong *val)
 {
@@ -XXX,XX +XXX,XX @@ static RISCVException write_mcounteren(CPURISCVState *env, int csrno,
 }
 
 /* Machine Trap Handling */
+static RISCVException read_mscratch_i128(CPURISCVState *env, int csrno,
+                                         Int128 *val)
+{
+    *val = int128_make128(env->mscratch, env->mscratchh);
+    return RISCV_EXCP_NONE;
+}
+
+static RISCVException write_mscratch_i128(CPURISCVState *env, int csrno,
+                                          Int128 val)
+{
+    env->mscratch = int128_getlo(val);
+    env->mscratchh = int128_gethi(val);
+    return RISCV_EXCP_NONE;
+}
+
 static RISCVException read_mscratch(CPURISCVState *env, int csrno,
                                     target_ulong *val)
 {
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_mip(CPURISCVState *env, int csrno,
 }
 
 /* Supervisor Trap Setup */
+static RISCVException read_sstatus_i128(CPURISCVState *env, int csrno,
+                                        Int128 *val)
+{
+    uint64_t mask = sstatus_v1_10_mask;
+    uint64_t sstatus = env->mstatus & mask;
+
+    *val = int128_make128(sstatus, add_status_sd(MXL_RV128, sstatus));
+    return RISCV_EXCP_NONE;
+}
+
 static RISCVException read_sstatus(CPURISCVState *env, int csrno,
                                    target_ulong *val)
 {
@@ -XXX,XX +XXX,XX @@ static RISCVException write_scounteren(CPURISCVState *env, int csrno,
 }
 
 /* Supervisor Trap Handling */
+static RISCVException read_sscratch_i128(CPURISCVState *env, int csrno,
+                                         Int128 *val)
+{
+    *val = int128_make128(env->sscratch, env->sscratchh);
+    return RISCV_EXCP_NONE;
+}
+
+static RISCVException write_sscratch_i128(CPURISCVState *env, int csrno,
+                                          Int128 val)
+{
+    env->sscratch = int128_getlo(val);
+    env->sscratchh = int128_gethi(val);
+    return RISCV_EXCP_NONE;
+}
+
 static RISCVException read_sscratch(CPURISCVState *env, int csrno,
                                     target_ulong *val)
 {
@@ -XXX,XX +XXX,XX @@ static RISCVException write_upmbase(CPURISCVState *env, int csrno,
  * csrrc <-> riscv_csrrw(env, csrno, ret_value, 0, value);
  */
 
-RISCVException riscv_csrrw(CPURISCVState *env, int csrno,
-                           target_ulong *ret_value,
-                           target_ulong new_value, target_ulong write_mask)
+static inline RISCVException riscv_csrrw_check(CPURISCVState *env,
+                                               int csrno,
+                                               bool write_mask,
+                                               RISCVCPU *cpu)
 {
-    RISCVException ret;
-    target_ulong old_value;
-    RISCVCPU *cpu = env_archcpu(env);
-    int read_only = get_field(csrno, 0xC00) == 3;
-
     /* check privileges and return RISCV_EXCP_ILLEGAL_INST if check fails */
+    int read_only = get_field(csrno, 0xC00) == 3;
 #if !defined(CONFIG_USER_ONLY)
     int effective_priv = env->priv;
 
@@ -XXX,XX +XXX,XX @@ RISCVException riscv_csrrw(CPURISCVState *env, int csrno,
     if (!csr_ops[csrno].predicate) {
         return RISCV_EXCP_ILLEGAL_INST;
     }
-    ret = csr_ops[csrno].predicate(env, csrno);
-    if (ret != RISCV_EXCP_NONE) {
-        return ret;
-    }
+
+    return csr_ops[csrno].predicate(env, csrno);
+}
+
+static RISCVException riscv_csrrw_do64(CPURISCVState *env, int csrno,
+                                       target_ulong *ret_value,
+                                       target_ulong new_value,
+                                       target_ulong write_mask)
+{
+    RISCVException ret;
+    target_ulong old_value;
 
     /* execute combined read/write operation if it exists */
     if (csr_ops[csrno].op) {
@@ -XXX,XX +XXX,XX @@ RISCVException riscv_csrrw(CPURISCVState *env, int csrno,
     return RISCV_EXCP_NONE;
 }
 
-RISCVException riscv_csrrw_i128(CPURISCVState *env, int csrno,
-                                Int128 *ret_value,
-                                Int128 new_value, Int128 write_mask)
+RISCVException riscv_csrrw(CPURISCVState *env, int csrno,
+                           target_ulong *ret_value,
+                           target_ulong new_value, target_ulong write_mask)
+{
+    RISCVCPU *cpu = env_archcpu(env);
+
+    RISCVException ret = riscv_csrrw_check(env, csrno, write_mask, cpu);
+    if (ret != RISCV_EXCP_NONE) {
+        return ret;
+    }
+
+    return riscv_csrrw_do64(env, csrno, ret_value, new_value, write_mask);
+}
+
+static RISCVException riscv_csrrw_do128(CPURISCVState *env, int csrno,
+                                        Int128 *ret_value,
+                                        Int128 new_value,
+                                        Int128 write_mask)
 {
-    /* fall back to 64-bit version for now */
-    target_ulong ret_64;
-    RISCVException ret = riscv_csrrw(env, csrno, &ret_64,
-                                     int128_getlo(new_value),
-                                     int128_getlo(write_mask));
+    RISCVException ret;
+    Int128 old_value;
+
+    /* read old value */
+    ret = csr_ops[csrno].read128(env, csrno, &old_value);
+    if (ret != RISCV_EXCP_NONE) {
+        return ret;
+    }
+
+    /* write value if writable and write mask set, otherwise drop writes */
+    if (int128_nz(write_mask)) {
+        new_value = int128_or(int128_and(old_value, int128_not(write_mask)),
+                              int128_and(new_value, write_mask));
+        if (csr_ops[csrno].write128) {
+            ret = csr_ops[csrno].write128(env, csrno, new_value);
+            if (ret != RISCV_EXCP_NONE) {
+                return ret;
+            }
+        } else if (csr_ops[csrno].write) {
+            /* avoids having to write wrappers for all registers */
+            ret = csr_ops[csrno].write(env, csrno, int128_getlo(new_value));
+            if (ret != RISCV_EXCP_NONE) {
+                return ret;
+            }
+        }
+    }
+
+    /* return old value */
     if (ret_value) {
-        *ret_value = int128_make64(ret_64);
+        *ret_value = old_value;
+    }
+
+    return RISCV_EXCP_NONE;
+}
+
+RISCVException riscv_csrrw_i128(CPURISCVState *env, int csrno,
+                                Int128 *ret_value,
+                                Int128 new_value, Int128 write_mask)
+{
+    RISCVException ret;
+    RISCVCPU *cpu = env_archcpu(env);
+
+    ret = riscv_csrrw_check(env, csrno, int128_nz(write_mask), cpu);
+    if (ret != RISCV_EXCP_NONE) {
+        return ret;
     }
 
+    if (csr_ops[csrno].read128) {
+        return riscv_csrrw_do128(env, csrno, ret_value, new_value, write_mask);
+    }
+
+    /*
+     * Fall back to 64-bit version for now, if the 128-bit alternative isn't
+     * at all defined.
+     * Note, some CSRs don't need to extend to MXLEN (64 upper bits non
+     * significant), for those, this fallback is correctly handling the accesses
+     */
+    target_ulong old_value;
+    ret = riscv_csrrw_do64(env, csrno, &old_value,
+                           int128_getlo(new_value),
+                           int128_getlo(write_mask));
+    if (ret == RISCV_EXCP_NONE && ret_value) {
+        *ret_value = int128_make64(old_value);
+    }
     return ret;
 }
 
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_MHARTID] = { "mhartid", any, read_mhartid },
 
     /* Machine Trap Setup */
-    [CSR_MSTATUS] = { "mstatus", any, read_mstatus, write_mstatus },
-    [CSR_MISA] = { "misa", any, read_misa, write_misa },
+    [CSR_MSTATUS] = { "mstatus", any, read_mstatus, write_mstatus, NULL,
+                      read_mstatus_i128 },
+    [CSR_MISA] = { "misa", any, read_misa, write_misa, NULL,
+                   read_misa_i128 },
     [CSR_MIDELEG] = { "mideleg", any, read_mideleg, write_mideleg },
     [CSR_MEDELEG] = { "medeleg", any, read_medeleg, write_medeleg },
     [CSR_MIE] = { "mie", any, read_mie, write_mie },
@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_MSTATUSH] = { "mstatush", any32, read_mstatush, write_mstatush },
 
     /* Machine Trap Handling */
-    [CSR_MSCRATCH] = { "mscratch", any, read_mscratch, write_mscratch },
+    [CSR_MSCRATCH] = { "mscratch", any, read_mscratch, write_mscratch, NULL,
+                       read_mscratch_i128, write_mscratch_i128 },
     [CSR_MEPC] = { "mepc", any, read_mepc, write_mepc },
     [CSR_MCAUSE] = { "mcause", any, read_mcause, write_mcause },
     [CSR_MTVAL] = { "mtval", any, read_mtval, write_mtval },
     [CSR_MIP] = { "mip", any, NULL, NULL, rmw_mip },
 
     /* Supervisor Trap Setup */
-    [CSR_SSTATUS] = { "sstatus", smode, read_sstatus, write_sstatus },
+    [CSR_SSTATUS] = { "sstatus", smode, read_sstatus, write_sstatus, NULL,
+                      read_sstatus_i128 },
     [CSR_SIE] = { "sie", smode, read_sie, write_sie },
     [CSR_STVEC] = { "stvec", smode, read_stvec, write_stvec },
     [CSR_SCOUNTEREN] = { "scounteren", smode, read_scounteren, write_scounteren },
 
     /* Supervisor Trap Handling */
-    [CSR_SSCRATCH] = { "sscratch", smode, read_sscratch, write_sscratch },
+    [CSR_SSCRATCH] = { "sscratch", smode, read_sscratch, write_sscratch, NULL,
+                       read_sscratch_i128, write_sscratch_i128 },
     [CSR_SEPC] = { "sepc", smode, read_sepc, write_sepc },
     [CSR_SCAUSE] = { "scause", smode, read_scause, write_scause },
     [CSR_STVAL] = { "stval", smode, read_stval, write_stval },
--
2.31.1

From: Alistair Francis <alistair.francis@wdc.com>

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: 20211220064916.107241-2-alistair.francis@opensource.wdc.com
---
 target/riscv/translate.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@ static void decode_opc(CPURISCVState *env, DisasContext *ctx, uint16_t opcode)
         if (!has_ext(ctx, RVC)) {
             gen_exception_illegal(ctx);
         } else {
+            ctx->opcode = opcode;
             ctx->pc_succ_insn = ctx->base.pc_next + 2;
             if (!decode_insn16(ctx, opcode)) {
                 gen_exception_illegal(ctx);
@@ -XXX,XX +XXX,XX @@ static void decode_opc(CPURISCVState *env, DisasContext *ctx, uint16_t opcode)
         opcode32 = deposit32(opcode32, 16, 16,
                              translator_lduw(env, &ctx->base,
                                              ctx->base.pc_next + 2));
+        ctx->opcode = opcode32;
         ctx->pc_succ_insn = ctx->base.pc_next + 4;
         if (!decode_insn32(ctx, opcode32)) {
             gen_exception_illegal(ctx);
--
2.31.1

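In the 32-bit path above, the full opcode saved into ctx->opcode is assembled from two 16-bit fetches with deposit32(), which inserts a bit field into an existing word. A local re-implementation of that helper's semantics (QEMU's own lives in include/qemu/bitops.h):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Model of deposit32(value, start, length, fieldval): replace 'length'
 * bits of 'value' starting at bit 'start' with the low bits of
 * 'fieldval'. decode_opc() uses it to place the second halfword of a
 * 32-bit instruction into bits 16..31 of the opcode.
 */
static uint32_t deposit32_model(uint32_t value, int start, int length,
                                uint32_t fieldval)
{
    uint32_t mask = (length == 32 ? ~0u : ((1u << length) - 1)) << start;
    return (value & ~mask) | ((fieldval << start) & mask);
}
```

Saving the combined word before decode_insn32() runs is what lets a later trap handler report the exact faulting instruction bits.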
From: Alistair Francis <alistair.francis@wdc.com>

In preparation for adding support for the illegal instruction address
let's fixup the Hypervisor extension setting GVA logic and improve the
variable names.

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: 20211220064916.107241-3-alistair.francis@opensource.wdc.com
---
 target/riscv/cpu_helper.c | 21 ++++++---------------
 1 file changed, 6 insertions(+), 15 deletions(-)

diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_do_interrupt(CPUState *cs)
 
     RISCVCPU *cpu = RISCV_CPU(cs);
     CPURISCVState *env = &cpu->env;
+    bool write_gva = false;
     uint64_t s;
 
     /* cs->exception is 32-bits wide unlike mcause which is XLEN-bits wide
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_do_interrupt(CPUState *cs)
     bool async = !!(cs->exception_index & RISCV_EXCP_INT_FLAG);
     target_ulong cause = cs->exception_index & RISCV_EXCP_INT_MASK;
     target_ulong deleg = async ? env->mideleg : env->medeleg;
-    bool write_tval = false;
     target_ulong tval = 0;
     target_ulong htval = 0;
     target_ulong mtval2 = 0;
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_do_interrupt(CPUState *cs)
         case RISCV_EXCP_INST_PAGE_FAULT:
         case RISCV_EXCP_LOAD_PAGE_FAULT:
         case RISCV_EXCP_STORE_PAGE_FAULT:
-            write_tval = true;
+            write_gva = true;
            tval = env->badaddr;
            break;
         default:
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_do_interrupt(CPUState *cs)
     if (riscv_has_ext(env, RVH)) {
         target_ulong hdeleg = async ? env->hideleg : env->hedeleg;
 
-        if (env->two_stage_lookup && write_tval) {
-            /*
-             * If we are writing a guest virtual address to stval, set
-             * this to 1. If we are trapping to VS we will set this to 0
-             * later.
-             */
-            env->hstatus = set_field(env->hstatus, HSTATUS_GVA, 1);
-        } else {
-            /* For other HS-mode traps, we set this to 0. */
-            env->hstatus = set_field(env->hstatus, HSTATUS_GVA, 0);
-        }
-
         if (riscv_cpu_virt_enabled(env) && ((hdeleg >> cause) & 1)) {
             /* Trap to VS mode */
             /*
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_do_interrupt(CPUState *cs)
                 cause == IRQ_VS_EXT) {
                 cause = cause - 1;
             }
-            env->hstatus = set_field(env->hstatus, HSTATUS_GVA, 0);
+            write_gva = false;
         } else if (riscv_cpu_virt_enabled(env)) {
             /* Trap into HS mode, from virt */
             riscv_cpu_swap_hypervisor_regs(env);
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_do_interrupt(CPUState *cs)
             env->hstatus = set_field(env->hstatus, HSTATUS_SPV,
                                      riscv_cpu_virt_enabled(env));
 
+
             htval = env->guest_phys_fault_addr;
 
             riscv_cpu_set_virt_enabled(env, 0);
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_do_interrupt(CPUState *cs)
             /* Trap into HS mode */
             env->hstatus = set_field(env->hstatus, HSTATUS_SPV, false);
             htval = env->guest_phys_fault_addr;
+            write_gva = false;
         }
+        env->hstatus = set_field(env->hstatus, HSTATUS_GVA, write_gva);
     }
 
     s = env->mstatus;
--
2.31.1

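The refactor above replaces two early writes of HSTATUS.GVA with a single write_gva flag that is set for address-bearing faults and cleared on the paths that must not report a guest virtual address, then committed once at the end. A deliberately simplified distillation of that decision (this folds away the two-stage-lookup and M-mode details, so it is a sketch of the control flow, not the full QEMU logic):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Simplified model of the reworked GVA computation: start from
 * "this trap carries a faulting address", then clear the flag when
 * the trap is delegated to VS mode, or when it is taken into HS mode
 * without coming from a virtualized (V=1) context.
 */
static bool hstatus_gva(bool addr_fault, bool trap_to_vs, bool from_virt)
{
    bool write_gva = addr_fault;

    if (trap_to_vs) {
        write_gva = false;          /* VS-mode traps never set GVA */
    } else if (!from_virt) {
        write_gva = false;          /* HS-to-HS traps clear it too */
    }
    return write_gva;
}
```

Computing the flag once and writing it once is what makes the follow-up patch easy: new causes (like illegal instruction) only have to decide whether to set write_gva in the switch.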
From: Alistair Francis <alistair.francis@wdc.com>

The stval and mtval registers can optionally contain the faulting
instruction on an illegal instruction exception. This patch adds support
for setting the stval and mtval registers.

The RISC-V spec states that "The stval register can optionally also be
used to return the faulting instruction bits on an illegal instruction
exception...". In this case we are always writing the value on an
illegal instruction.

This doesn't match all CPUs (some CPUs won't write the data), but in
QEMU let's just populate the value on illegal instructions. This won't
break any guest software, but will provide more information to guests.

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: 20211220064916.107241-4-alistair.francis@opensource.wdc.com
---
 target/riscv/cpu.h        | 2 ++
 target/riscv/cpu_helper.c | 3 +++
 target/riscv/translate.c  | 3 +++
 3 files changed, 8 insertions(+)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@ struct CPURISCVState {
     target_ulong frm;
 
     target_ulong badaddr;
+    uint32_t bins;
+
     target_ulong guest_phys_fault_addr;
 
     target_ulong priv_ver;
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_do_interrupt(CPUState *cs)
             write_gva = true;
             tval = env->badaddr;
             break;
+        case RISCV_EXCP_ILLEGAL_INST:
+            tval = env->bins;
+            break;
         default:
             break;
         }
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@ static void generate_exception_mtval(DisasContext *ctx, int excp)
 
 static void gen_exception_illegal(DisasContext *ctx)
 {
+    tcg_gen_st_i32(tcg_constant_i32(ctx->opcode), cpu_env,
+                   offsetof(CPURISCVState, bins));
+
     generate_exception(ctx, RISCV_EXCP_ILLEGAL_INST);
 }
--
2.31.1
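Put together, the series makes the trap path choose a tval value by cause: address faults report env->badaddr as before, and illegal instructions now report the opcode latched into env->bins by gen_exception_illegal(). A small sketch of that selection (the enum values match the mcause exception codes in the RISC-V privileged spec, but the function itself is an illustration, not QEMU code):

```c
#include <assert.h>
#include <stdint.h>

/* mcause exception codes from the RISC-V privileged spec. */
enum {
    EXCP_ILLEGAL_INST    = 2,
    EXCP_LOAD_PAGE_FAULT = 13,
};

/*
 * Sketch of the tval selection in riscv_cpu_do_interrupt() after this
 * patch: address faults report the faulting address, illegal
 * instructions report the saved instruction bits, everything else 0.
 */
static uint64_t pick_tval(int cause, uint64_t badaddr, uint32_t bins)
{
    switch (cause) {
    case EXCP_LOAD_PAGE_FAULT:
        return badaddr;
    case EXCP_ILLEGAL_INST:
        return bins;
    default:
        return 0;
    }
}
```

A guest trap handler can therefore read stval (or mtval) after an illegal-instruction trap to see exactly which encoding faulted, e.g. to emulate an unimplemented instruction.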