I don't have anything else queued up at the moment, so this is just
Richard's SME patches.
I've shoved those in as well as it seemed the least-effort
way of getting them into master; a few of them are dependencies
on arm-related patches I have brewing.

thanks
-- PMM

The following changes since commit 63b38f6c85acd312c2cab68554abf33adf4ee2b3:

  Merge tag 'pull-target-arm-20220707' of https://git.linaro.org/people/pmaydell/qemu-arm into staging (2022-07-08 06:17:11 +0530)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220711

for you to fetch changes up to f9982ceaf26df27d15547a3a7990a95019e9e3a8:

  linux-user/aarch64: Add SME related hwcap entries (2022-07-11 13:43:52 +0100)

----------------------------------------------------------------
target-arm:
 * Implement SME emulation, for both system and linux-user

----------------------------------------------------------------
Richard Henderson (45):
      target/arm: Handle SME in aarch64_cpu_dump_state
      target/arm: Add infrastructure for disas_sme
      target/arm: Trap non-streaming usage when Streaming SVE is active
      target/arm: Mark ADR as non-streaming
      target/arm: Mark RDFFR, WRFFR, SETFFR as non-streaming
      target/arm: Mark BDEP, BEXT, BGRP, COMPACT, FEXPA, FTSSEL as non-streaming
      target/arm: Mark PMULL, FMMLA as non-streaming
      target/arm: Mark FTSMUL, FTMAD, FADDA as non-streaming
      target/arm: Mark SMMLA, UMMLA, USMMLA as non-streaming
      target/arm: Mark string/histo/crypto as non-streaming
      target/arm: Mark gather/scatter load/store as non-streaming
      target/arm: Mark gather prefetch as non-streaming
      target/arm: Mark LDFF1 and LDNF1 as non-streaming
      target/arm: Mark LD1RO as non-streaming
      target/arm: Add SME enablement checks
      target/arm: Handle SME in sve_access_check
      target/arm: Implement SME RDSVL, ADDSVL, ADDSPL
      target/arm: Implement SME ZERO
      target/arm: Implement SME MOVA
      target/arm: Implement SME LD1, ST1
      target/arm: Export unpredicated ld/st from translate-sve.c
      target/arm: Implement SME LDR, STR
      target/arm: Implement SME ADDHA, ADDVA
      target/arm: Implement FMOPA, FMOPS (non-widening)
      target/arm: Implement BFMOPA, BFMOPS
      target/arm: Implement FMOPA, FMOPS (widening)
      target/arm: Implement SME integer outer product
      target/arm: Implement PSEL
      target/arm: Implement REVD
      target/arm: Implement SCLAMP, UCLAMP
      target/arm: Reset streaming sve state on exception boundaries
      target/arm: Enable SME for -cpu max
      linux-user/aarch64: Clear tpidr2_el0 if CLONE_SETTLS
      linux-user/aarch64: Reset PSTATE.SM on syscalls
      linux-user/aarch64: Add SM bit to SVE signal context
      linux-user/aarch64: Tidy target_restore_sigframe error return
      linux-user/aarch64: Do not allow duplicate or short sve records
      linux-user/aarch64: Verify extra record lock succeeded
      linux-user/aarch64: Move sve record checks into restore
      linux-user/aarch64: Implement SME signal handling
      linux-user: Rename sve prctls
      linux-user/aarch64: Implement PR_SME_GET_VL, PR_SME_SET_VL
      target/arm: Only set ZEN in reset if SVE present
      target/arm: Enable SME for user-only
      linux-user/aarch64: Add SME related hwcap entries

 docs/system/arm/emulation.rst     |    4 +
 linux-user/aarch64/target_cpu.h   |    5 +-
 linux-user/aarch64/target_prctl.h |   62 +-
 target/arm/cpu.h                  |    7 +
 target/arm/helper-sme.h           |  126 ++++
 target/arm/helper-sve.h           |    4 +
 target/arm/helper.h               |   18 +
 target/arm/translate-a64.h        |   45 ++
 target/arm/translate.h            |   16 +
 target/arm/sme-fa64.decode        |   60 ++
 target/arm/sme.decode             |   88 +++
 target/arm/sve.decode             |   41 +-
 linux-user/aarch64/cpu_loop.c     |    9 +
 linux-user/aarch64/signal.c       |  243 ++++++--
 linux-user/elfload.c              |   20 +
 linux-user/syscall.c              |   28 +-
 target/arm/cpu.c                  |   35 +-
 target/arm/cpu64.c                |   11 +
 target/arm/helper.c               |   56 +-
 target/arm/sme_helper.c           | 1140 +++++++++++++++++++++++++++++++++++++
 target/arm/sve_helper.c           |   28 +
 target/arm/translate-a64.c        |  103 +++-
 target/arm/translate-sme.c        |  373 ++++++++++++
 target/arm/translate-sve.c        |  393 ++++++++++---
 target/arm/translate-vfp.c        |   12 +
 target/arm/translate.c            |    2 +
 target/arm/vec_helper.c           |   24 +
 target/arm/meson.build            |    3 +
 28 files changed, 2821 insertions(+), 135 deletions(-)
 create mode 100644 target/arm/sme-fa64.decode
 create mode 100644 target/arm/sme.decode
 create mode 100644 target/arm/translate-sme.c
From: Richard Henderson <richard.henderson@linaro.org>

Dump SVCR, plus use the correct access check for Streaming Mode.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_cpu_dump_state(CPUState *cs, FILE *f, int flags)
     int i;
     int el = arm_current_el(env);
     const char *ns_status;
+    bool sve;
 
     qemu_fprintf(f, " PC=%016" PRIx64 " ", env->pc);
     for (i = 0; i < 32; i++) {
@@ -XXX,XX +XXX,XX @@ static void aarch64_cpu_dump_state(CPUState *cs, FILE *f, int flags)
              el,
              psr & PSTATE_SP ? 'h' : 't');
 
+    if (cpu_isar_feature(aa64_sme, cpu)) {
+        qemu_fprintf(f, "  SVCR=%08" PRIx64 " %c%c",
+                     env->svcr,
+                     (FIELD_EX64(env->svcr, SVCR, ZA) ? 'Z' : '-'),
+                     (FIELD_EX64(env->svcr, SVCR, SM) ? 'S' : '-'));
+    }
     if (cpu_isar_feature(aa64_bti, cpu)) {
         qemu_fprintf(f, "  BTYPE=%d", (psr & PSTATE_BTYPE) >> 10);
     }
@@ -XXX,XX +XXX,XX @@ static void aarch64_cpu_dump_state(CPUState *cs, FILE *f, int flags)
     qemu_fprintf(f, "     FPCR=%08x FPSR=%08x\n",
                 vfp_get_fpcr(env), vfp_get_fpsr(env));
 
-    if (cpu_isar_feature(aa64_sve, cpu) && sve_exception_el(env, el) == 0) {
+    if (cpu_isar_feature(aa64_sme, cpu) && FIELD_EX64(env->svcr, SVCR, SM)) {
+        sve = sme_exception_el(env, el) == 0;
+    } else if (cpu_isar_feature(aa64_sve, cpu)) {
+        sve = sve_exception_el(env, el) == 0;
+    } else {
+        sve = false;
+    }
+
+    if (sve) {
         int j, zcr_len = sve_vqm1_for_el(env, el);
 
         for (i = 0; i <= FFR_PRED_NUM; i++) {
-- 
2.25.1
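
For readers who want to see the SVCR flag characters in isolation, here is a
minimal standalone sketch of the formatting logic above. The bit masks are
hypothetical stand-ins for QEMU's FIELD_EX64 accessors, assuming the
architectural layout of SVCR (SM = bit 0, ZA = bit 1 per FEAT_SME):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-ins for FIELD_EX64(env->svcr, SVCR, SM/ZA). */
    #define SVCR_SM  (1ULL << 0)
    #define SVCR_ZA  (1ULL << 1)

    /* Mirrors the flag characters the patch appends to the dump:
     * 'Z' if ZA storage is enabled, 'S' if Streaming SVE mode is
     * active, '-' otherwise. */
    static void dump_svcr(uint64_t svcr)
    {
        printf("SVCR=%08" PRIx64 " %c%c\n", svcr,
               (svcr & SVCR_ZA) ? 'Z' : '-',
               (svcr & SVCR_SM) ? 'S' : '-');
    }

    int main(void)
    {
        dump_svcr(0);                 /* SVCR=00000000 -- */
        dump_svcr(SVCR_SM);           /* SVCR=00000001 -S */
        dump_svcr(SVCR_ZA | SVCR_SM); /* SVCR=00000003 ZS */
        return 0;
    }

Running it prints the three commented lines, matching the dump format the
patch produces.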
From: Richard Henderson <richard.henderson@linaro.org>

This includes the build rules for the decoder, and the
new file for translation, but excludes any instructions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.h |  1 +
 target/arm/sme.decode      | 20 ++++++++++++++++++++
 target/arm/translate-a64.c |  7 ++++++-
 target/arm/translate-sme.c | 35 +++++++++++++++++++++++++++++++++++
 target/arm/meson.build     |  2 ++
 5 files changed, 64 insertions(+), 1 deletion(-)
 create mode 100644 target/arm/sme.decode
 create mode 100644 target/arm/translate-sme.c

diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -XXX,XX +XXX,XX @@ static inline int pred_gvec_reg_size(DisasContext *s)
 }
 
 bool disas_sve(DisasContext *, uint32_t);
+bool disas_sme(DisasContext *, uint32_t);
 
 void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
                    uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/sme.decode
@@ -XXX,XX +XXX,XX @@
+# AArch64 SME instruction descriptions
+#
+#  Copyright (c) 2022 Linaro, Ltd
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, see <http://www.gnu.org/licenses/>.
+
+#
+# This file is processed by scripts/decodetree.py
+#
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
     }
 
     switch (extract32(insn, 25, 4)) {
-    case 0x0: case 0x1: case 0x3: /* UNALLOCATED */
+    case 0x0:
+        if (!extract32(insn, 31, 1) || !disas_sme(s, insn)) {
+            unallocated_encoding(s);
+        }
+        break;
+    case 0x1: case 0x3: /* UNALLOCATED */
         unallocated_encoding(s);
         break;
     case 0x2:
diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/translate-sme.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * AArch64 SME translation
+ *
+ * Copyright (c) 2022 Linaro, Ltd
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "tcg/tcg-op.h"
+#include "tcg/tcg-op-gvec.h"
+#include "tcg/tcg-gvec-desc.h"
+#include "translate.h"
+#include "exec/helper-gen.h"
+#include "translate-a64.h"
+#include "fpu/softfloat.h"
+
+
+/*
+ * Include the generated decoder.
+ */
+
+#include "decode-sme.c.inc"
diff --git a/target/arm/meson.build b/target/arm/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/meson.build
+++ b/target/arm/meson.build
@@ -XXX,XX +XXX,XX @@
 gen = [
   decodetree.process('sve.decode', extra_args: '--decode=disas_sve'),
+  decodetree.process('sme.decode', extra_args: '--decode=disas_sme'),
   decodetree.process('neon-shared.decode', extra_args: '--decode=disas_neon_shared'),
   decodetree.process('neon-dp.decode', extra_args: '--decode=disas_neon_dp'),
   decodetree.process('neon-ls.decode', extra_args: '--decode=disas_neon_ls'),
@@ -XXX,XX +XXX,XX @@ arm_ss.add(when: 'TARGET_AARCH64', if_true: files(
   'sme_helper.c',
   'translate-a64.c',
   'translate-sve.c',
+  'translate-sme.c',
 ))
 
 arm_softmmu_ss = ss.source_set()
-- 
2.25.1
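
As a rough sketch of what the build step produces here: scripts/decodetree.py
turns sme.decode into a C decoder shaped approximately like the function
below. This is a hand-written approximation, not the generated code; the
types and the pattern are simplified stand-ins, and at this point in the
series sme.decode contains no instruction patterns yet.

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for QEMU's translator context. */
    typedef struct { int dummy; } DisasContext;

    static bool trans_EXAMPLE(DisasContext *s, uint32_t insn)
    {
        printf("matched insn %08" PRIx32 "\n", insn);
        return true;
    }

    /* Approximation of the generated disas_sme(): a chain of
     * (mask, value) tests; each match dispatches to a trans_*
     * function, and returning false tells the caller in
     * aarch64_tr_translate_insn() to emit an unallocated encoding.
     * The pattern below is made up. */
    static bool disas_sme(DisasContext *s, uint32_t insn)
    {
        if ((insn & 0xff000000u) == 0xc0000000u) {
            return trans_EXAMPLE(s, insn);
        }
        return false;
    }

    int main(void)
    {
        DisasContext s = { 0 };
        return disas_sme(&s, 0xc0123456u) ? 0 : 1;
    }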
From: Richard Henderson <richard.henderson@linaro.org>

This new behaviour is in the ARM pseudocode function
AArch64.CheckFPAdvSIMDEnabled, which applies to AArch32
via AArch32.CheckAdvSIMDOrFPEnabled when the EL to which
the trap would be delivered is in AArch64 mode.

Given that ARMv9 drops support for AArch32 outside EL0, the trap EL
detection ought to be trivially true, but the pseudocode still contains
a number of conditions, and QEMU has not yet committed to dropping A32
support for EL[12] when v9 features are present.

Since the computation of SME_TRAP_NONSTREAMING is necessarily different
for the two modes, we might as well preserve bits within TBFLAG_ANY and
allocate separate bits within TBFLAG_A32 and TBFLAG_A64 instead.

Note that DDI0616A.a has typos for bits [22:21] of LD1RO in the table
of instructions illegal in streaming mode.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           |  7 +++
 target/arm/translate.h     |  4 ++
 target/arm/sme-fa64.decode | 90 ++++++++++++++++++++++++++++++++++++++
 target/arm/helper.c        | 41 +++++++++++++++++
 target/arm/translate-a64.c | 40 ++++++++++++++++-
 target/arm/translate-vfp.c | 12 +++++
 target/arm/translate.c     |  2 +
 target/arm/meson.build     |  1 +
 8 files changed, 195 insertions(+), 2 deletions(-)
 create mode 100644 target/arm/sme-fa64.decode

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, HSTR_ACTIVE, 9, 1)
  * the same thing as the current security state of the processor!
  */
 FIELD(TBFLAG_A32, NS, 10, 1)
+/*
+ * Indicates that SME Streaming mode is active, and SMCR_ELx.FA64 is not.
+ * This requires an SME trap from AArch32 mode when using NEON.
+ */
+FIELD(TBFLAG_A32, SME_TRAP_NONSTREAMING, 11, 1)
 
 /*
  * Bit usage when in AArch32 state, for M-profile only.
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, SMEEXC_EL, 20, 2)
 FIELD(TBFLAG_A64, PSTATE_SM, 22, 1)
 FIELD(TBFLAG_A64, PSTATE_ZA, 23, 1)
 FIELD(TBFLAG_A64, SVL, 24, 4)
+/* Indicates that SME Streaming mode is active, and SMCR_ELx.FA64 is not. */
+FIELD(TBFLAG_A64, SME_TRAP_NONSTREAMING, 28, 1)
 
 /*
  * Helpers for using the above.
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
     bool pstate_sm;
     /* True if PSTATE.ZA is set. */
     bool pstate_za;
+    /* True if non-streaming insns should raise an SME Streaming exception. */
+    bool sme_trap_nonstreaming;
+    /* True if the current instruction is non-streaming. */
+    bool is_nonstreaming;
     /* True if MVE insns are definitely not predicated by VPR or LTPSIZE */
     bool mve_no_pred;
     /*
diff --git a/target/arm/sme-fa64.decode b/target/arm/sme-fa64.decode
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/sme-fa64.decode
@@ -XXX,XX +XXX,XX @@
+# AArch64 SME allowed instruction decoding
+#
+#  Copyright (c) 2022 Linaro, Ltd
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, see <http://www.gnu.org/licenses/>.
+
+#
+# This file is processed by scripts/decodetree.py
+#
+
+# These patterns are taken from Appendix E1.1 of DDI0616 A.a,
+# Arm Architecture Reference Manual Supplement,
+# The Scalable Matrix Extension (SME), for Armv9-A
+
+{
+  [
+    OK  0-00 1110 0000 0001 0010 11-- ---- ----   # SMOV W|Xd,Vn.B[0]
+    OK  0-00 1110 0000 0010 0010 11-- ---- ----   # SMOV W|Xd,Vn.H[0]
+    OK  0100 1110 0000 0100 0010 11-- ---- ----   # SMOV Xd,Vn.S[0]
+    OK  0000 1110 0000 0001 0011 11-- ---- ----   # UMOV Wd,Vn.B[0]
+    OK  0000 1110 0000 0010 0011 11-- ---- ----   # UMOV Wd,Vn.H[0]
+    OK  0000 1110 0000 0100 0011 11-- ---- ----   # UMOV Wd,Vn.S[0]
+    OK  0100 1110 0000 1000 0011 11-- ---- ----   # UMOV Xd,Vn.D[0]
+  ]
+  FAIL  0--0 111- ---- ---- ---- ---- ---- ----   # Advanced SIMD vector operations
+}
+
+{
+  [
+    OK  0101 1110 --1- ---- 11-1 11-- ---- ----   # FMULX/FRECPS/FRSQRTS (scalar)
+    OK  0101 1110 -10- ---- 00-1 11-- ---- ----   # FMULX/FRECPS/FRSQRTS (scalar, FP16)
+    OK  01-1 1110 1-10 0001 11-1 10-- ---- ----   # FRECPE/FRSQRTE/FRECPX (scalar)
+    OK  01-1 1110 1111 1001 11-1 10-- ---- ----   # FRECPE/FRSQRTE/FRECPX (scalar, FP16)
+  ]
+  FAIL  01-1 111- ---- ---- ---- ---- ---- ----   # Advanced SIMD single-element operations
+}
+
+FAIL    0-00 110- ---- ---- ---- ---- ---- ----   # Advanced SIMD structure load/store
+FAIL    1100 1110 ---- ---- ---- ---- ---- ----   # Advanced SIMD cryptography extensions
+FAIL    0001 1110 0111 1110 0000 00-- ---- ----   # FJCVTZS
+
+# These are the "avoidance of doubt" final table of Illegal Advanced SIMD instructions
+# We don't actually need to include these, as the default is OK.
+#       -001 111- ---- ---- ---- ---- ---- ----   # Scalar floating-point operations
+#       --10 110- ---- ---- ---- ---- ---- ----   # Load/store pair of FP registers
+#       --01 1100 ---- ---- ---- ---- ---- ----   # Load FP register (PC-relative literal)
+#       --11 1100 --0- ---- ---- ---- ---- ----   # Load/store FP register (unscaled imm)
+#       --11 1100 --1- ---- ---- ---- ---- --10   # Load/store FP register (register offset)
+#       --11 1101 ---- ---- ---- ---- ---- ----   # Load/store FP register (scaled imm)
+
+FAIL    0000 0100 --1- ---- 1010 ---- ---- ----   # ADR
+FAIL    0000 0100 --1- ---- 1011 -0-- ---- ----   # FTSSEL, FEXPA
+FAIL    0000 0101 --10 0001 100- ---- ---- ----   # COMPACT
+FAIL    0010 0101 --01 100- 1111 000- ---0 ----   # RDFFR, RDFFRS
+FAIL    0010 0101 --10 1--- 1001 ---- ---- ----   # WRFFR, SETFFR
+FAIL    0100 0101 --0- ---- 1011 ---- ---- ----   # BDEP, BEXT, BGRP
+FAIL    0100 0101 000- ---- 0110 1--- ---- ----   # PMULLB, PMULLT (128b result)
+FAIL    0110 0100 --1- ---- 1110 01-- ---- ----   # FMMLA, BFMMLA
+FAIL    0110 0101 --0- ---- 0000 11-- ---- ----   # FTSMUL
+FAIL    0110 0101 --01 0--- 100- ---- ---- ----   # FTMAD
+FAIL    0110 0101 --01 1--- 001- ---- ---- ----   # FADDA
+FAIL    0100 0101 --0- ---- 1001 10-- ---- ----   # SMMLA, UMMLA, USMMLA
+FAIL    0100 0101 --1- ---- 1--- ---- ---- ----   # SVE2 string/histo/crypto instructions
+FAIL    1000 010- -00- ---- 10-- ---- ---- ----   # SVE2 32-bit gather NT load (vector+scalar)
+FAIL    1000 010- -00- ---- 111- ---- ---- ----   # SVE 32-bit gather prefetch (vector+imm)
+FAIL    1000 0100 0-1- ---- 0--- ---- ---- ----   # SVE 32-bit gather prefetch (scalar+vector)
+FAIL    1000 010- -01- ---- 1--- ---- ---- ----   # SVE 32-bit gather load (vector+imm)
+FAIL    1000 0100 0-0- ---- 0--- ---- ---- ----   # SVE 32-bit gather load byte (scalar+vector)
+FAIL    1000 0100 1--- ---- 0--- ---- ---- ----   # SVE 32-bit gather load half (scalar+vector)
+FAIL    1000 0101 0--- ---- 0--- ---- ---- ----   # SVE 32-bit gather load word (scalar+vector)
+FAIL    1010 010- ---- ---- 011- ---- ---- ----   # SVE contiguous FF load (scalar+scalar)
+FAIL    1010 010- ---1 ---- 101- ---- ---- ----   # SVE contiguous NF load (scalar+imm)
+FAIL    1010 010- -01- ---- 000- ---- ---- ----   # SVE load & replicate 32 bytes (scalar+scalar)
+FAIL    1010 010- -010 ---- 001- ---- ---- ----   # SVE load & replicate 32 bytes (scalar+imm)
+FAIL    1100 010- ---- ---- ---- ---- ---- ----   # SVE 64-bit gather load/prefetch
+FAIL    1110 010- -00- ---- 001- ---- ---- ----   # SVE2 64-bit scatter NT store (vector+scalar)
+FAIL    1110 010- -10- ---- 001- ---- ---- ----   # SVE2 32-bit scatter NT store (vector+scalar)
+FAIL    1110 010- ---- ---- 1-0- ---- ---- ----   # SVE scatter store (scalar+32-bit vector)
+FAIL    1110 010- ---- ---- 101- ---- ---- ----   # SVE scatter store (misc)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ int sme_exception_el(CPUARMState *env, int el)
         return 0;
     }
 }
 
+/* This corresponds to the ARM pseudocode function IsFullA64Enabled(). */
+static bool sme_fa64(CPUARMState *env, int el)
+{
+    if (!cpu_isar_feature(aa64_sme_fa64, env_archcpu(env))) {
+        return false;
+    }
+
+    if (el <= 1 && !el_is_in_host(env, el)) {
+        if (!FIELD_EX64(env->vfp.smcr_el[1], SMCR, FA64)) {
+            return false;
+        }
+    }
+    if (el <= 2 && arm_is_el2_enabled(env)) {
+        if (!FIELD_EX64(env->vfp.smcr_el[2], SMCR, FA64)) {
+            return false;
+        }
+    }
+    if (arm_feature(env, ARM_FEATURE_EL3)) {
+        if (!FIELD_EX64(env->vfp.smcr_el[3], SMCR, FA64)) {
+            return false;
+        }
+    }
+
+    return true;
+}
+
 /*
  * Given that SVE is enabled, return the vector length for EL.
  */
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a32(CPUARMState *env, int fp_el,
         DP_TBFLAG_ANY(flags, PSTATE__IL, 1);
     }
 
+    /*
+     * The SME exception we are testing for is raised via
+     * AArch64.CheckFPAdvSIMDEnabled(), as called from
+     * AArch32.CheckAdvSIMDOrFPEnabled().
+     */
+    if (el == 0
+        && FIELD_EX64(env->svcr, SVCR, SM)
+        && (!arm_is_el2_enabled(env)
+            || (arm_el_is_aa64(env, 2) && !(env->cp15.hcr_el2 & HCR_TGE)))
+        && arm_el_is_aa64(env, 1)
+        && !sme_fa64(env, el)) {
+        DP_TBFLAG_A32(flags, SME_TRAP_NONSTREAMING, 1);
+    }
+
     return rebuild_hflags_common_32(env, fp_el, mmu_idx, flags);
 }
 
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
         }
         if (FIELD_EX64(env->svcr, SVCR, SM)) {
             DP_TBFLAG_A64(flags, PSTATE_SM, 1);
+            DP_TBFLAG_A64(flags, SME_TRAP_NONSTREAMING, !sme_fa64(env, el));
         }
         DP_TBFLAG_A64(flags, PSTATE_ZA, FIELD_EX64(env->svcr, SVCR, ZA));
     }
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void do_vec_ld(DisasContext *s, int destidx, int element,
  * unallocated-encoding checks (otherwise the syndrome information
  * for the resulting exception will be incorrect).
  */
-static bool fp_access_check(DisasContext *s)
+static bool fp_access_check_only(DisasContext *s)
 {
     if (s->fp_excp_el) {
         assert(!s->fp_access_checked);
@@ -XXX,XX +XXX,XX @@ static bool fp_access_check(DisasContext *s)
     return true;
 }
 
+static bool fp_access_check(DisasContext *s)
+{
+    if (!fp_access_check_only(s)) {
+        return false;
+    }
+    if (s->sme_trap_nonstreaming && s->is_nonstreaming) {
+        gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
+                           syn_smetrap(SME_ET_Streaming, false));
+        return false;
+    }
+    return true;
+}
+
 /* Check that SVE access is enabled.  If it is, return true.
  * If not, emit code to generate an appropriate exception and return false.
  */
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
     default:
         g_assert_not_reached();
     }
-    if ((ri->type & ARM_CP_FPU) && !fp_access_check(s)) {
+    if ((ri->type & ARM_CP_FPU) && !fp_access_check_only(s)) {
         return;
     } else if ((ri->type & ARM_CP_SVE) && !sve_access_check(s)) {
         return;
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_simd_fp(DisasContext *s, uint32_t insn)
     }
 }
 
+/*
+ * Include the generated SME FA64 decoder.
+ */
+
+#include "decode-sme-fa64.c.inc"
+
+static bool trans_OK(DisasContext *s, arg_OK *a)
+{
+    return true;
+}
+
+static bool trans_FAIL(DisasContext *s, arg_OK *a)
+{
+    s->is_nonstreaming = true;
+    return true;
+}
+
 /**
  * is_guarded_page:
  * @env: The cpu environment
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
     dc->mte_active[1] = EX_TBFLAG_A64(tb_flags, MTE0_ACTIVE);
     dc->pstate_sm = EX_TBFLAG_A64(tb_flags, PSTATE_SM);
     dc->pstate_za = EX_TBFLAG_A64(tb_flags, PSTATE_ZA);
+    dc->sme_trap_nonstreaming = EX_TBFLAG_A64(tb_flags, SME_TRAP_NONSTREAMING);
     dc->vec_len = 0;
     dc->vec_stride = 0;
     dc->cp_regs = arm_cpu->cp_regs;
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
         }
     }
 
+    s->is_nonstreaming = false;
+    if (s->sme_trap_nonstreaming) {
+        disas_sme_fa64(s, insn);
+    }
+
     switch (extract32(insn, 25, 4)) {
     case 0x0:
         if (!extract32(insn, 31, 1) || !disas_sme(s, insn)) {
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
         return false;
     }
 
+    /*
+     * Note that rebuild_hflags_a32 has already accounted for being in EL0
+     * and the higher EL in A64 mode, etc.  Unlike A64 mode, there do not
+     * appear to be any insns which touch VFP which are allowed.
+     */
+    if (s->sme_trap_nonstreaming) {
+        gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
+                           syn_smetrap(SME_ET_Streaming,
+                                       s->base.pc_next - s->pc_curr == 2));
+        return false;
+    }
+
     if (!s->vfp_enabled && !ignore_vfp_enabled) {
         assert(!arm_dc_feature(s, ARM_FEATURE_M));
         unallocated_encoding(s);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
             dc->vec_len = EX_TBFLAG_A32(tb_flags, VECLEN);
             dc->vec_stride = EX_TBFLAG_A32(tb_flags, VECSTRIDE);
         }
+        dc->sme_trap_nonstreaming =
+            EX_TBFLAG_A32(tb_flags, SME_TRAP_NONSTREAMING);
     }
     dc->cp_regs = cpu->cp_regs;
     dc->features = env->features;
diff --git a/target/arm/meson.build b/target/arm/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/meson.build
+++ b/target/arm/meson.build
@@ -XXX,XX +XXX,XX @@
 gen = [
   decodetree.process('sve.decode', extra_args: '--decode=disas_sve'),
   decodetree.process('sme.decode', extra_args: '--decode=disas_sme'),
+  decodetree.process('sme-fa64.decode', extra_args: '--static-decode=disas_sme_fa64'),
   decodetree.process('neon-shared.decode', extra_args: '--decode=disas_neon_shared'),
   decodetree.process('neon-dp.decode', extra_args: '--decode=disas_neon_dp'),
   decodetree.process('neon-ls.decode', extra_args: '--decode=disas_neon_ls'),
-- 
2.25.1
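
The OK/FAIL table above behaves roughly like a first-match-wins list of
(mask, value) tests: the narrow OK carve-outs inside the { } groups take
precedence over the broad FAIL patterns covering the rest of the Advanced
SIMD space. A self-contained C model of that behaviour, using made-up
masks rather than the real encodings:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct pattern {
        uint32_t mask, value;
        bool ok;                   /* OK: still legal in streaming mode */
    };

    static bool insn_is_nonstreaming(uint32_t insn,
                                     const struct pattern *tbl, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if ((insn & tbl[i].mask) == tbl[i].value) {
                return !tbl[i].ok; /* first match decides */
            }
        }
        return false;              /* default: allowed in streaming mode */
    }

    int main(void)
    {
        const struct pattern tbl[] = {
            { 0xfff0fc00u, 0x0e002c00u, true  },  /* made-up "OK" carve-out */
            { 0x90000000u, 0x00000000u, false },  /* made-up broad "FAIL"   */
        };
        printf("%d\n", insn_is_nonstreaming(0x0e002c00u, tbl, 2)); /* 0 */
        printf("%d\n", insn_is_nonstreaming(0x0a000000u, tbl, 2)); /* 1 */
        return 0;
    }

In the real decoder the "decision" is made by which trans_* function runs:
trans_OK() does nothing, trans_FAIL() sets s->is_nonstreaming so that the
next fp_access_check() raises the streaming trap.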
From: Richard Henderson <richard.henderson@linaro.org>

Mark ADR as a non-streaming instruction, which should trap
if full a64 support is not enabled in streaming mode.

Removing entries from sme-fa64.decode is an easy way to see
what remains to be done.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.h     | 7 +++++++
 target/arm/sme-fa64.decode | 1 -
 target/arm/translate-sve.c | 8 ++++----
 3 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ uint64_t asimd_imm_const(uint32_t imm, int cmode, int op);
     static bool trans_##NAME(DisasContext *s, arg_##NAME *a) \
     { return dc_isar_feature(FEAT, s) && FUNC(s, __VA_ARGS__); }
 
+#define TRANS_FEAT_NONSTREAMING(NAME, FEAT, FUNC, ...) \
+    static bool trans_##NAME(DisasContext *s, arg_##NAME *a) \
+    { \
+        s->is_nonstreaming = true; \
+        return dc_isar_feature(FEAT, s) && FUNC(s, __VA_ARGS__); \
+    }
+
 #endif /* TARGET_ARM_TRANSLATE_H */
diff --git a/target/arm/sme-fa64.decode b/target/arm/sme-fa64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme-fa64.decode
+++ b/target/arm/sme-fa64.decode
@@ -XXX,XX +XXX,XX @@ FAIL    0001 1110 0111 1110 0000 00-- ---- ----   # FJCVTZS
 #       --11 1100 --1- ---- ---- ---- ---- --10   # Load/store FP register (register offset)
 #       --11 1101 ---- ---- ---- ---- ---- ----   # Load/store FP register (scaled imm)
 
-FAIL    0000 0100 --1- ---- 1010 ---- ---- ----   # ADR
 FAIL    0000 0100 --1- ---- 1011 -0-- ---- ----   # FTSSEL, FEXPA
 FAIL    0000 0101 --10 0001 100- ---- ---- ----   # COMPACT
 FAIL    0010 0101 --01 100- 1111 000- ---0 ----   # RDFFR, RDFFRS
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool do_adr(DisasContext *s, arg_rrri *a, gen_helper_gvec_3 *fn)
     return gen_gvec_ool_zzz(s, fn, a->rd, a->rn, a->rm, a->imm);
 }
 
-TRANS_FEAT(ADR_p32, aa64_sve, do_adr, a, gen_helper_sve_adr_p32)
-TRANS_FEAT(ADR_p64, aa64_sve, do_adr, a, gen_helper_sve_adr_p64)
-TRANS_FEAT(ADR_s32, aa64_sve, do_adr, a, gen_helper_sve_adr_s32)
-TRANS_FEAT(ADR_u32, aa64_sve, do_adr, a, gen_helper_sve_adr_u32)
+TRANS_FEAT_NONSTREAMING(ADR_p32, aa64_sve, do_adr, a, gen_helper_sve_adr_p32)
+TRANS_FEAT_NONSTREAMING(ADR_p64, aa64_sve, do_adr, a, gen_helper_sve_adr_p64)
+TRANS_FEAT_NONSTREAMING(ADR_s32, aa64_sve, do_adr, a, gen_helper_sve_adr_s32)
+TRANS_FEAT_NONSTREAMING(ADR_u32, aa64_sve, do_adr, a, gen_helper_sve_adr_u32)
 
 /*
  *** SVE Integer Misc - Unpredicated Group
-- 
2.25.1
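
A self-contained model of the TRANS_FEAT_NONSTREAMING technique used here,
with simplified stand-ins for QEMU's DisasContext and dc_isar_feature (the
names and types below are illustrative only):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool have_feat;        /* stands in for dc_isar_feature(FEAT, s) */
        bool is_nonstreaming;  /* set by non-streaming patterns */
    } Ctx;

    /* Like TRANS_FEAT_NONSTREAMING: flag the insn, then delegate. */
    #define TRANS_NONSTREAMING(NAME, FUNC)          \
        static bool trans_##NAME(Ctx *s)            \
        {                                           \
            s->is_nonstreaming = true;              \
            return s->have_feat && FUNC(s);         \
        }

    static bool do_adr_model(Ctx *s) { return true; }
    TRANS_NONSTREAMING(ADR_model, do_adr_model)

    int main(void)
    {
        Ctx s = { .have_feat = true };
        bool ok = trans_ADR_model(&s);
        printf("matched=%d nonstreaming=%d\n", ok, s.is_nonstreaming);
        return 0;
    }

The point of the macro is that the flag is set before the usual feature
gate runs, so a later access check can turn the instruction into an SME
streaming trap instead of executing it.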
From: Richard Henderson <richard.henderson@linaro.org>

Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sme-fa64.decode | 2 --
 target/arm/translate-sve.c | 9 ++++++---
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/target/arm/sme-fa64.decode b/target/arm/sme-fa64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme-fa64.decode
+++ b/target/arm/sme-fa64.decode
@@ -XXX,XX +XXX,XX @@ FAIL    0001 1110 0111 1110 0000 00-- ---- ----   # FJCVTZS
 
 FAIL    0000 0100 --1- ---- 1011 -0-- ---- ----   # FTSSEL, FEXPA
 FAIL    0000 0101 --10 0001 100- ---- ---- ----   # COMPACT
-FAIL    0010 0101 --01 100- 1111 000- ---0 ----   # RDFFR, RDFFRS
-FAIL    0010 0101 --10 1--- 1001 ---- ---- ----   # WRFFR, SETFFR
 FAIL    0100 0101 --0- ---- 1011 ---- ---- ----   # BDEP, BEXT, BGRP
 FAIL    0100 0101 000- ---- 0110 1--- ---- ----   # PMULLB, PMULLT (128b result)
 FAIL    0110 0100 --1- ---- 1110 01-- ---- ----   # FMMLA, BFMMLA
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool do_predset(DisasContext *s, int esz, int rd, int pat, bool setflag)
 TRANS_FEAT(PTRUE, aa64_sve, do_predset, a->esz, a->rd, a->pat, a->s)
 
 /* Note pat == 31 is #all, to set all elements. */
-TRANS_FEAT(SETFFR, aa64_sve, do_predset, 0, FFR_PRED_NUM, 31, false)
+TRANS_FEAT_NONSTREAMING(SETFFR, aa64_sve,
+                        do_predset, 0, FFR_PRED_NUM, 31, false)
 
 /* Note pat == 32 is #unimp, to set no elements. */
 TRANS_FEAT(PFALSE, aa64_sve, do_predset, 0, a->rd, 32, false)
@@ -XXX,XX +XXX,XX @@ static bool trans_RDFFR_p(DisasContext *s, arg_RDFFR_p *a)
         .rd = a->rd, .pg = a->pg, .s = a->s,
         .rn = FFR_PRED_NUM, .rm = FFR_PRED_NUM,
     };
+
+    s->is_nonstreaming = true;
     return trans_AND_pppp(s, &alt_a);
 }
 
-TRANS_FEAT(RDFFR, aa64_sve, do_mov_p, a->rd, FFR_PRED_NUM)
-TRANS_FEAT(WRFFR, aa64_sve, do_mov_p, FFR_PRED_NUM, a->rn)
53
+TRANS_FEAT_NONSTREAMING(RDFFR, aa64_sve, do_mov_p, a->rd, FFR_PRED_NUM)
87
{
54
+TRANS_FEAT_NONSTREAMING(WRFFR, aa64_sve, do_mov_p, FFR_PRED_NUM, a->rn)
88
SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
55
89
SMMUv3State *s = sdev->smmu;
56
static bool do_pfirst_pnext(DisasContext *s, arg_rr_esz *a,
90
diff --git a/hw/dma/rc4030.c b/hw/dma/rc4030.c
57
void (*gen_fn)(TCGv_i32, TCGv_ptr,
91
index XXXXXXX..XXXXXXX 100644
92
--- a/hw/dma/rc4030.c
93
+++ b/hw/dma/rc4030.c
94
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps jazzio_ops = {
95
};
96
97
static IOMMUTLBEntry rc4030_dma_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
98
- IOMMUAccessFlags flag)
99
+ IOMMUAccessFlags flag, int iommu_idx)
100
{
101
rc4030State *s = container_of(iommu, rc4030State, dma_mr);
102
IOMMUTLBEntry ret = {
103
diff --git a/hw/i386/amd_iommu.c b/hw/i386/amd_iommu.c
104
index XXXXXXX..XXXXXXX 100644
105
--- a/hw/i386/amd_iommu.c
106
+++ b/hw/i386/amd_iommu.c
107
@@ -XXX,XX +XXX,XX @@ static inline bool amdvi_is_interrupt_addr(hwaddr addr)
108
}
109
110
static IOMMUTLBEntry amdvi_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
111
- IOMMUAccessFlags flag)
112
+ IOMMUAccessFlags flag, int iommu_idx)
113
{
114
AMDVIAddressSpace *as = container_of(iommu, AMDVIAddressSpace, iommu);
115
AMDVIState *s = as->iommu_state;
116
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
117
index XXXXXXX..XXXXXXX 100644
118
--- a/hw/i386/intel_iommu.c
119
+++ b/hw/i386/intel_iommu.c
120
@@ -XXX,XX +XXX,XX @@ static void vtd_mem_write(void *opaque, hwaddr addr,
121
}
122
123
static IOMMUTLBEntry vtd_iommu_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
124
- IOMMUAccessFlags flag)
125
+ IOMMUAccessFlags flag, int iommu_idx)
126
{
127
VTDAddressSpace *vtd_as = container_of(iommu, VTDAddressSpace, iommu);
128
IntelIOMMUState *s = vtd_as->iommu_state;
129
diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
130
index XXXXXXX..XXXXXXX 100644
131
--- a/hw/ppc/spapr_iommu.c
132
+++ b/hw/ppc/spapr_iommu.c
133
@@ -XXX,XX +XXX,XX @@ static void spapr_tce_free_table(uint64_t *table, int fd, uint32_t nb_table)
134
/* Called from RCU critical section */
135
static IOMMUTLBEntry spapr_tce_translate_iommu(IOMMUMemoryRegion *iommu,
136
hwaddr addr,
137
- IOMMUAccessFlags flag)
138
+ IOMMUAccessFlags flag,
139
+ int iommu_idx)
140
{
141
sPAPRTCETable *tcet = container_of(iommu, sPAPRTCETable, iommu);
142
uint64_t tce;
143
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
144
index XXXXXXX..XXXXXXX 100644
145
--- a/hw/s390x/s390-pci-bus.c
146
+++ b/hw/s390x/s390-pci-bus.c
147
@@ -XXX,XX +XXX,XX @@ uint16_t s390_guest_io_table_walk(uint64_t g_iota, hwaddr addr,
148
}
149
150
static IOMMUTLBEntry s390_translate_iommu(IOMMUMemoryRegion *mr, hwaddr addr,
151
- IOMMUAccessFlags flag)
152
+ IOMMUAccessFlags flag, int iommu_idx)
153
{
154
S390PCIIOMMU *iommu = container_of(mr, S390PCIIOMMU, iommu_mr);
155
S390IOTLBEntry *entry;
156
diff --git a/hw/sparc/sun4m_iommu.c b/hw/sparc/sun4m_iommu.c
157
index XXXXXXX..XXXXXXX 100644
158
--- a/hw/sparc/sun4m_iommu.c
159
+++ b/hw/sparc/sun4m_iommu.c
160
@@ -XXX,XX +XXX,XX @@ static void iommu_bad_addr(IOMMUState *s, hwaddr addr,
161
/* Called from RCU critical section */
162
static IOMMUTLBEntry sun4m_translate_iommu(IOMMUMemoryRegion *iommu,
163
hwaddr addr,
164
- IOMMUAccessFlags flags)
165
+ IOMMUAccessFlags flags,
166
+ int iommu_idx)
167
{
168
IOMMUState *is = container_of(iommu, IOMMUState, iommu);
169
hwaddr page, pa;
170
diff --git a/hw/sparc64/sun4u_iommu.c b/hw/sparc64/sun4u_iommu.c
171
index XXXXXXX..XXXXXXX 100644
172
--- a/hw/sparc64/sun4u_iommu.c
173
+++ b/hw/sparc64/sun4u_iommu.c
174
@@ -XXX,XX +XXX,XX @@
175
/* Called from RCU critical section */
176
static IOMMUTLBEntry sun4u_translate_iommu(IOMMUMemoryRegion *iommu,
177
hwaddr addr,
178
- IOMMUAccessFlags flag)
179
+ IOMMUAccessFlags flag, int iommu_idx)
180
{
181
IOMMUState *is = container_of(iommu, IOMMUState, iommu);
182
hwaddr baseaddr, offset;
183
diff --git a/memory.c b/memory.c
184
index XXXXXXX..XXXXXXX 100644
185
--- a/memory.c
186
+++ b/memory.c
187
@@ -XXX,XX +XXX,XX @@ void memory_region_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n)
188
granularity = memory_region_iommu_get_min_page_size(iommu_mr);
189
190
for (addr = 0; addr < memory_region_size(mr); addr += granularity) {
191
- iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE);
192
+ iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE, n->iommu_idx);
193
if (iotlb.perm != IOMMU_NONE) {
194
n->notify(n, &iotlb);
195
}
196
--
58
--
197
2.17.1
59
2.25.1
198
199
From: Richard Henderson <richard.henderson@linaro.org>

Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
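For reference, and from memory rather than from this series:
TRANS_FEAT_NONSTREAMING is the plain TRANS_FEAT expander plus the
is_nonstreaming flag, roughly as below (see target/arm/translate.h for
the real definition), which is why these conversions are mechanical.

#define TRANS_FEAT_NONSTREAMING(NAME, FEAT, FUNC, ...)              \
    static bool trans_##NAME(DisasContext *s, arg_##NAME *a)        \
    {                                                               \
        s->is_nonstreaming = true; /* trap if SM=1 without FA64 */  \
        return dc_isar_feature(FEAT, s) && FUNC(s, __VA_ARGS__);    \
    }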
11
target/arm/sme-fa64.decode | 3 ---
12
target/arm/translate-sve.c | 22 ++++++++++++----------
13
2 files changed, 12 insertions(+), 13 deletions(-)
14
15
diff --git a/target/arm/sme-fa64.decode b/target/arm/sme-fa64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/sme-fa64.decode
18
+++ b/target/arm/sme-fa64.decode
19
@@ -XXX,XX +XXX,XX @@ FAIL 0001 1110 0111 1110 0000 00-- ---- ---- # FJCVTZS
20
# --11 1100 --1- ---- ---- ---- ---- --10 # Load/store FP register (register offset)
21
# --11 1101 ---- ---- ---- ---- ---- ---- # Load/store FP register (scaled imm)
22
23
-FAIL 0000 0100 --1- ---- 1011 -0-- ---- ---- # FTSSEL, FEXPA
24
-FAIL 0000 0101 --10 0001 100- ---- ---- ---- # COMPACT
25
-FAIL 0100 0101 --0- ---- 1011 ---- ---- ---- # BDEP, BEXT, BGRP
26
FAIL 0100 0101 000- ---- 0110 1--- ---- ---- # PMULLB, PMULLT (128b result)
27
FAIL 0110 0100 --1- ---- 1110 01-- ---- ---- # FMMLA, BFMMLA
28
FAIL 0110 0101 --0- ---- 0000 11-- ---- ---- # FTSMUL
29
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/translate-sve.c
32
+++ b/target/arm/translate-sve.c
33
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_2 * const fexpa_fns[4] = {
34
NULL, gen_helper_sve_fexpa_h,
35
gen_helper_sve_fexpa_s, gen_helper_sve_fexpa_d,
36
};
37
-TRANS_FEAT(FEXPA, aa64_sve, gen_gvec_ool_zz,
38
- fexpa_fns[a->esz], a->rd, a->rn, 0)
39
+TRANS_FEAT_NONSTREAMING(FEXPA, aa64_sve, gen_gvec_ool_zz,
40
+ fexpa_fns[a->esz], a->rd, a->rn, 0)
41
42
static gen_helper_gvec_3 * const ftssel_fns[4] = {
43
NULL, gen_helper_sve_ftssel_h,
44
gen_helper_sve_ftssel_s, gen_helper_sve_ftssel_d,
45
};
46
-TRANS_FEAT(FTSSEL, aa64_sve, gen_gvec_ool_arg_zzz, ftssel_fns[a->esz], a, 0)
47
+TRANS_FEAT_NONSTREAMING(FTSSEL, aa64_sve, gen_gvec_ool_arg_zzz,
48
+ ftssel_fns[a->esz], a, 0)
49
50
/*
51
*** SVE Predicate Logical Operations Group
52
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(TRN2_q, aa64_sve_f64mm, gen_gvec_ool_arg_zzz,
53
static gen_helper_gvec_3 * const compact_fns[4] = {
54
NULL, NULL, gen_helper_sve_compact_s, gen_helper_sve_compact_d
55
};
56
-TRANS_FEAT(COMPACT, aa64_sve, gen_gvec_ool_arg_zpz, compact_fns[a->esz], a, 0)
57
+TRANS_FEAT_NONSTREAMING(COMPACT, aa64_sve, gen_gvec_ool_arg_zpz,
58
+ compact_fns[a->esz], a, 0)
59
60
/* Call the helper that computes the ARM LastActiveElement pseudocode
61
* function, scaled by the element size. This includes the not found
62
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_3 * const bext_fns[4] = {
63
gen_helper_sve2_bext_b, gen_helper_sve2_bext_h,
64
gen_helper_sve2_bext_s, gen_helper_sve2_bext_d,
65
};
66
-TRANS_FEAT(BEXT, aa64_sve2_bitperm, gen_gvec_ool_arg_zzz,
67
- bext_fns[a->esz], a, 0)
68
+TRANS_FEAT_NONSTREAMING(BEXT, aa64_sve2_bitperm, gen_gvec_ool_arg_zzz,
69
+ bext_fns[a->esz], a, 0)
70
71
static gen_helper_gvec_3 * const bdep_fns[4] = {
72
gen_helper_sve2_bdep_b, gen_helper_sve2_bdep_h,
73
gen_helper_sve2_bdep_s, gen_helper_sve2_bdep_d,
74
};
75
-TRANS_FEAT(BDEP, aa64_sve2_bitperm, gen_gvec_ool_arg_zzz,
76
- bdep_fns[a->esz], a, 0)
77
+TRANS_FEAT_NONSTREAMING(BDEP, aa64_sve2_bitperm, gen_gvec_ool_arg_zzz,
78
+ bdep_fns[a->esz], a, 0)
79
80
static gen_helper_gvec_3 * const bgrp_fns[4] = {
81
gen_helper_sve2_bgrp_b, gen_helper_sve2_bgrp_h,
82
gen_helper_sve2_bgrp_s, gen_helper_sve2_bgrp_d,
83
};
84
-TRANS_FEAT(BGRP, aa64_sve2_bitperm, gen_gvec_ool_arg_zzz,
85
- bgrp_fns[a->esz], a, 0)
86
+TRANS_FEAT_NONSTREAMING(BGRP, aa64_sve2_bitperm, gen_gvec_ool_arg_zzz,
87
+ bgrp_fns[a->esz], a, 0)
88
89
static gen_helper_gvec_3 * const cadd_fns[4] = {
90
gen_helper_sve2_cadd_b, gen_helper_sve2_cadd_h,
91
--
92
2.25.1
1
There's a common pattern in QEMU where a function needs to perform
1
From: Richard Henderson <richard.henderson@linaro.org>
2
a data load or store of an N byte integer in a particular endianness.
3
At the moment this is handled by doing a switch() on the size and
4
calling the appropriate ld*_p or st*_p function for each size.
5
2
6
Provide a new family of functions ldn_*_p() and stn_*_p() which
3
Mark these as non-streaming instructions, which should trap
7
take the size as an argument and do the switch() themselves.
4
if full a64 support is not enabled in streaming mode.
8
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20220708151540.18136-8-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20180611171007.4165-2-peter.maydell@linaro.org
13
---
10
---
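Usage sketch for the new helpers, not part of the patch: callers pass
the access size at run time instead of open-coding the switch.

    uint8_t buf[8];

    stn_be_p(buf, 4, 0x12345678);    /* store 4 bytes, big-endian */
    uint64_t v = ldn_le_p(buf, 2);   /* load 2 bytes, little-endian */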
14
include/exec/cpu-all.h | 4 +++
11
target/arm/sme-fa64.decode | 2 --
15
include/qemu/bswap.h | 52 +++++++++++++++++++++++++++++++++++++
12
target/arm/translate-sve.c | 24 +++++++++++++++---------
16
docs/devel/loads-stores.rst | 15 +++++++++++
13
2 files changed, 15 insertions(+), 11 deletions(-)
17
3 files changed, 71 insertions(+)
18
14
19
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
15
diff --git a/target/arm/sme-fa64.decode b/target/arm/sme-fa64.decode
20
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
21
--- a/include/exec/cpu-all.h
17
--- a/target/arm/sme-fa64.decode
22
+++ b/include/exec/cpu-all.h
18
+++ b/target/arm/sme-fa64.decode
23
@@ -XXX,XX +XXX,XX @@ static inline void tswap64s(uint64_t *s)
19
@@ -XXX,XX +XXX,XX @@ FAIL 0001 1110 0111 1110 0000 00-- ---- ---- # FJCVTZS
24
#define stq_p(p, v) stq_be_p(p, v)
20
# --11 1100 --1- ---- ---- ---- ---- --10 # Load/store FP register (register offset)
25
#define stfl_p(p, v) stfl_be_p(p, v)
21
# --11 1101 ---- ---- ---- ---- ---- ---- # Load/store FP register (scaled imm)
26
#define stfq_p(p, v) stfq_be_p(p, v)
22
27
+#define ldn_p(p, sz) ldn_be_p(p, sz)
23
-FAIL 0100 0101 000- ---- 0110 1--- ---- ---- # PMULLB, PMULLT (128b result)
28
+#define stn_p(p, sz, v) stn_be_p(p, sz, v)
24
-FAIL 0110 0100 --1- ---- 1110 01-- ---- ---- # FMMLA, BFMMLA
29
#else
25
FAIL 0110 0101 --0- ---- 0000 11-- ---- ---- # FTSMUL
30
#define lduw_p(p) lduw_le_p(p)
26
FAIL 0110 0101 --01 0--- 100- ---- ---- ---- # FTMAD
31
#define ldsw_p(p) ldsw_le_p(p)
27
FAIL 0110 0101 --01 1--- 001- ---- ---- ---- # FADDA
32
@@ -XXX,XX +XXX,XX @@ static inline void tswap64s(uint64_t *s)
28
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
33
#define stq_p(p, v) stq_le_p(p, v)
34
#define stfl_p(p, v) stfl_le_p(p, v)
35
#define stfq_p(p, v) stfq_le_p(p, v)
36
+#define ldn_p(p, sz) ldn_le_p(p, sz)
37
+#define stn_p(p, sz, v) stn_le_p(p, sz, v)
38
#endif
39
40
/* MMU memory access macros */
41
diff --git a/include/qemu/bswap.h b/include/qemu/bswap.h
42
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
43
--- a/include/qemu/bswap.h
30
--- a/target/arm/translate-sve.c
44
+++ b/include/qemu/bswap.h
31
+++ b/target/arm/translate-sve.c
45
@@ -XXX,XX +XXX,XX @@ typedef union {
32
@@ -XXX,XX +XXX,XX @@ static bool do_trans_pmull(DisasContext *s, arg_rrr_esz *a, bool sel)
46
* For accessors that take a guest address rather than a
33
gen_helper_gvec_pmull_q, gen_helper_sve2_pmull_h,
47
* host address, see the cpu_{ld,st}_* accessors defined in
34
NULL, gen_helper_sve2_pmull_d,
48
* cpu_ldst.h.
35
};
49
+ *
36
- if (a->esz == 0
50
+ * For cases where the size to be used is not fixed at compile time,
37
- ? !dc_isar_feature(aa64_sve2_pmull128, s)
51
+ * there are
38
- : !dc_isar_feature(aa64_sve, s)) {
52
+ * stn{endian}_p(ptr, sz, val)
39
+
53
+ * which stores @val to @ptr as an @endian-order number @sz bytes in size
40
+ if (a->esz == 0) {
54
+ * and
41
+ if (!dc_isar_feature(aa64_sve2_pmull128, s)) {
55
+ * ldn{endian}_p(ptr, sz)
42
+ return false;
56
+ * which loads @sz bytes from @ptr as an unsigned @endian-order number
43
+ }
57
+ * and returns it in a uint64_t.
44
+ s->is_nonstreaming = true;
45
+ } else if (!dc_isar_feature(aa64_sve, s)) {
46
return false;
47
}
48
return gen_gvec_ool_arg_zzz(s, fns[a->esz], a, sel);
49
@@ -XXX,XX +XXX,XX @@ DO_ZPZZ_FP(FMINP, aa64_sve2, sve2_fminp_zpzz)
50
* SVE Integer Multiply-Add (unpredicated)
58
*/
51
*/
59
52
60
static inline int ldub_p(const void *ptr)
53
-TRANS_FEAT(FMMLA_s, aa64_sve_f32mm, gen_gvec_fpst_zzzz, gen_helper_fmmla_s,
61
@@ -XXX,XX +XXX,XX @@ static inline unsigned long leul_to_cpu(unsigned long v)
54
- a->rd, a->rn, a->rm, a->ra, 0, FPST_FPCR)
62
#endif
55
-TRANS_FEAT(FMMLA_d, aa64_sve_f64mm, gen_gvec_fpst_zzzz, gen_helper_fmmla_d,
63
}
56
- a->rd, a->rn, a->rm, a->ra, 0, FPST_FPCR)
64
57
+TRANS_FEAT_NONSTREAMING(FMMLA_s, aa64_sve_f32mm, gen_gvec_fpst_zzzz,
65
+/* Store v to p as a sz byte value in host order */
58
+ gen_helper_fmmla_s, a->rd, a->rn, a->rm, a->ra,
66
+#define DO_STN_LDN_P(END) \
59
+ 0, FPST_FPCR)
67
+ static inline void stn_## END ## _p(void *ptr, int sz, uint64_t v) \
60
+TRANS_FEAT_NONSTREAMING(FMMLA_d, aa64_sve_f64mm, gen_gvec_fpst_zzzz,
68
+ { \
61
+ gen_helper_fmmla_d, a->rd, a->rn, a->rm, a->ra,
69
+ switch (sz) { \
62
+ 0, FPST_FPCR)
70
+ case 1: \
63
71
+ stb_p(ptr, v); \
64
static gen_helper_gvec_4 * const sqdmlal_zzzw_fns[] = {
72
+ break; \
65
NULL, gen_helper_sve2_sqdmlal_zzzw_h,
73
+ case 2: \
66
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(BFDOT_zzzz, aa64_sve_bf16, gen_gvec_ool_arg_zzzz,
74
+ stw_ ## END ## _p(ptr, v); \
67
TRANS_FEAT(BFDOT_zzxz, aa64_sve_bf16, gen_gvec_ool_arg_zzxz,
75
+ break; \
68
gen_helper_gvec_bfdot_idx, a)
76
+ case 4: \
69
77
+ stl_ ## END ## _p(ptr, v); \
70
-TRANS_FEAT(BFMMLA, aa64_sve_bf16, gen_gvec_ool_arg_zzzz,
78
+ break; \
71
- gen_helper_gvec_bfmmla, a, 0)
79
+ case 8: \
72
+TRANS_FEAT_NONSTREAMING(BFMMLA, aa64_sve_bf16, gen_gvec_ool_arg_zzzz,
80
+ stq_ ## END ## _p(ptr, v); \
73
+ gen_helper_gvec_bfmmla, a, 0)
81
+ break; \
74
82
+ default: \
75
static bool do_BFMLAL_zzzw(DisasContext *s, arg_rrrr_esz *a, bool sel)
83
+ g_assert_not_reached(); \
76
{
84
+ } \
85
+ } \
86
+ static inline uint64_t ldn_## END ## _p(const void *ptr, int sz) \
87
+ { \
88
+ switch (sz) { \
89
+ case 1: \
90
+ return ldub_p(ptr); \
91
+ case 2: \
92
+ return lduw_ ## END ## _p(ptr); \
93
+ case 4: \
94
+ return (uint32_t)ldl_ ## END ## _p(ptr); \
95
+ case 8: \
96
+ return ldq_ ## END ## _p(ptr); \
97
+ default: \
98
+ g_assert_not_reached(); \
99
+ } \
100
+ }
101
+
102
+DO_STN_LDN_P(he)
103
+DO_STN_LDN_P(le)
104
+DO_STN_LDN_P(be)
105
+
106
+#undef DO_STN_LDN_P
107
+
108
#undef le_bswap
109
#undef be_bswap
110
#undef le_bswaps
111
diff --git a/docs/devel/loads-stores.rst b/docs/devel/loads-stores.rst
112
index XXXXXXX..XXXXXXX 100644
113
--- a/docs/devel/loads-stores.rst
114
+++ b/docs/devel/loads-stores.rst
115
@@ -XXX,XX +XXX,XX @@ The ``_{endian}`` infix is omitted for target-endian accesses.
116
The target endian accessors are only available to source
117
files which are built per-target.
118
119
+There are also functions which take the size as an argument:
120
+
121
+load: ``ldn{endian}_p(ptr, sz)``
122
+
123
+which performs an unsigned load of ``sz`` bytes from ``ptr``
124
+as an ``{endian}`` order value and returns it in a uint64_t.
125
+
126
+store: ``stn{endian}_p(ptr, sz, val)``
127
+
128
+which stores ``val`` to ``ptr`` as an ``{endian}`` order value
129
+of size ``sz`` bytes.
130
+
131
+
132
Regexes for git grep
133
- ``\<ldf\?[us]\?[bwlq]\(_[hbl]e\)\?_p\>``
134
- ``\<stf\?[bwlq]\(_[hbl]e\)\?_p\>``
135
+ - ``\<ldn_\([hbl]e\)?_p\>``
136
+ - ``\<stn_\([hbl]e\)?_p\>``
137
138
``cpu_{ld,st}_*``
139
~~~~~~~~~~~~~~~~~
140
--
77
--
141
2.17.1
78
2.25.1
142
143
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
Mark these as non-streaming instructions, which should trap
4
if full a64 support is not enabled in streaming mode.
2
5
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180613015641.5667-6-richard.henderson@linaro.org
8
Message-id: 20220708151540.18136-9-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
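FADDA keeps a hand-written trans function, so the flag is set
explicitly there rather than via TRANS_FEAT_NONSTREAMING. As a sketch
(trans_FOO is a made-up stand-in for trans_FADDA):

static bool trans_FOO(DisasContext *s, arg_rprr_esz *a)
{
    if (a->esz == 0 || !dc_isar_feature(aa64_sve, s)) {
        return false;
    }
    s->is_nonstreaming = true;  /* must be set before the access check */
    if (!sve_access_check(s)) {
        return true;
    }
    /* ... generate the operation ... */
    return true;
}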
8
target/arm/helper-sve.h | 3 +++
11
target/arm/sme-fa64.decode | 3 ---
9
target/arm/sve_helper.c | 34 ++++++++++++++++++++++++++++++++++
12
target/arm/translate-sve.c | 15 +++++++++++----
10
target/arm/translate-sve.c | 12 ++++++++++++
13
2 files changed, 11 insertions(+), 7 deletions(-)
11
target/arm/sve.decode | 6 ++++++
12
4 files changed, 55 insertions(+)
13
14
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
diff --git a/target/arm/sme-fa64.decode b/target/arm/sme-fa64.decode
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
--- a/target/arm/sme-fa64.decode
17
+++ b/target/arm/helper-sve.h
18
+++ b/target/arm/sme-fa64.decode
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_trn_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
@@ -XXX,XX +XXX,XX @@ FAIL 0001 1110 0111 1110 0000 00-- ---- ---- # FJCVTZS
19
DEF_HELPER_FLAGS_4(sve_trn_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
# --11 1100 --1- ---- ---- ---- ---- --10 # Load/store FP register (register offset)
20
DEF_HELPER_FLAGS_4(sve_trn_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
# --11 1101 ---- ---- ---- ---- ---- ---- # Load/store FP register (scaled imm)
21
22
22
+DEF_HELPER_FLAGS_4(sve_compact_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
-FAIL 0110 0101 --0- ---- 0000 11-- ---- ---- # FTSMUL
23
+DEF_HELPER_FLAGS_4(sve_compact_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
-FAIL 0110 0101 --01 0--- 100- ---- ---- ---- # FTMAD
24
+
25
-FAIL 0110 0101 --01 1--- 001- ---- ---- ---- # FADDA
25
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
26
FAIL 0100 0101 --0- ---- 1001 10-- ---- ---- # SMMLA, UMMLA, USMMLA
26
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
27
FAIL 0100 0101 --1- ---- 1--- ---- ---- ---- # SVE2 string/histo/crypto instructions
27
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
28
FAIL 1000 010- -00- ---- 10-- ---- ---- ---- # SVE2 32-bit gather NT load (vector+scalar)
28
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/sve_helper.c
31
+++ b/target/arm/sve_helper.c
32
@@ -XXX,XX +XXX,XX @@ DO_TRN(sve_trn_d, uint64_t, )
33
#undef DO_ZIP
34
#undef DO_UZP
35
#undef DO_TRN
36
+
37
+void HELPER(sve_compact_s)(void *vd, void *vn, void *vg, uint32_t desc)
38
+{
39
+ intptr_t i, j, opr_sz = simd_oprsz(desc) / 4;
40
+ uint32_t *d = vd, *n = vn;
41
+ uint8_t *pg = vg;
42
+
43
+ for (i = j = 0; i < opr_sz; i++) {
44
+ if (pg[H1(i / 2)] & (i & 1 ? 0x10 : 0x01)) {
45
+ d[H4(j)] = n[H4(i)];
46
+ j++;
47
+ }
48
+ }
49
+ for (; j < opr_sz; j++) {
50
+ d[H4(j)] = 0;
51
+ }
52
+}
53
+
54
+void HELPER(sve_compact_d)(void *vd, void *vn, void *vg, uint32_t desc)
55
+{
56
+ intptr_t i, j, opr_sz = simd_oprsz(desc) / 8;
57
+ uint64_t *d = vd, *n = vn;
58
+ uint8_t *pg = vg;
59
+
60
+ for (i = j = 0; i < opr_sz; i++) {
61
+ if (pg[H1(i)] & 1) {
62
+ d[j] = n[i];
63
+ j++;
64
+ }
65
+ }
66
+ for (; j < opr_sz; j++) {
67
+ d[j] = 0;
68
+ }
69
+}
70
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
29
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
71
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/translate-sve.c
31
--- a/target/arm/translate-sve.c
73
+++ b/target/arm/translate-sve.c
32
+++ b/target/arm/translate-sve.c
74
@@ -XXX,XX +XXX,XX @@ static bool trans_TRN2_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
33
@@ -XXX,XX +XXX,XX @@ static gen_helper_gvec_3_ptr * const ftmad_fns[4] = {
75
return do_zzz_data_ool(s, a, 1 << a->esz, trn_fns[a->esz]);
34
NULL, gen_helper_sve_ftmad_h,
76
}
35
gen_helper_sve_ftmad_s, gen_helper_sve_ftmad_d,
77
36
};
78
+/*
37
-TRANS_FEAT(FTMAD, aa64_sve, gen_gvec_fpst_zzz,
79
+ *** SVE Permute Vector - Predicated Group
38
- ftmad_fns[a->esz], a->rd, a->rn, a->rm, a->imm,
80
+ */
39
- a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR)
81
+
40
+TRANS_FEAT_NONSTREAMING(FTMAD, aa64_sve, gen_gvec_fpst_zzz,
82
+static bool trans_COMPACT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
41
+ ftmad_fns[a->esz], a->rd, a->rn, a->rm, a->imm,
83
+{
42
+ a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR)
84
+ static gen_helper_gvec_3 * const fns[4] = {
43
85
+ NULL, NULL, gen_helper_sve_compact_s, gen_helper_sve_compact_d
44
/*
86
+ };
45
*** SVE Floating Point Accumulating Reduction Group
87
+ return do_zpz_ool(s, a, fns[a->esz]);
46
@@ -XXX,XX +XXX,XX @@ static bool trans_FADDA(DisasContext *s, arg_rprr_esz *a)
88
+}
47
if (a->esz == 0 || !dc_isar_feature(aa64_sve, s)) {
48
return false;
49
}
50
+ s->is_nonstreaming = true;
51
if (!sve_access_check(s)) {
52
return true;
53
}
54
@@ -XXX,XX +XXX,XX @@ static bool trans_FADDA(DisasContext *s, arg_rprr_esz *a)
55
DO_FP3(FADD_zzz, fadd)
56
DO_FP3(FSUB_zzz, fsub)
57
DO_FP3(FMUL_zzz, fmul)
58
-DO_FP3(FTSMUL, ftsmul)
59
DO_FP3(FRECPS, recps)
60
DO_FP3(FRSQRTS, rsqrts)
61
62
#undef DO_FP3
63
64
+static gen_helper_gvec_3_ptr * const ftsmul_fns[4] = {
65
+ NULL, gen_helper_gvec_ftsmul_h,
66
+ gen_helper_gvec_ftsmul_s, gen_helper_gvec_ftsmul_d
67
+};
68
+TRANS_FEAT_NONSTREAMING(FTSMUL, aa64_sve, gen_gvec_fpst_arg_zzz,
69
+ ftsmul_fns[a->esz], a, 0)
89
+
70
+
90
/*
71
/*
91
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
72
*** SVE Floating Point Arithmetic - Predicated Group
92
*/
73
*/
93
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
94
index XXXXXXX..XXXXXXX 100644
95
--- a/target/arm/sve.decode
96
+++ b/target/arm/sve.decode
97
@@ -XXX,XX +XXX,XX @@ UZP2_z 00000101 .. 1 ..... 011 011 ..... ..... @rd_rn_rm
98
TRN1_z 00000101 .. 1 ..... 011 100 ..... ..... @rd_rn_rm
99
TRN2_z 00000101 .. 1 ..... 011 101 ..... ..... @rd_rn_rm
100
101
+### SVE Permute - Predicated Group
102
+
103
+# SVE compress active elements
104
+# Note esz >= 2
105
+COMPACT 00000101 .. 100001 100 ... ..... ..... @rd_pg_rn
106
+
107
### SVE Predicate Logical Operations Group
108
109
# SVE predicate logical operations
110
--
74
--
111
2.17.1
75
2.25.1
112
113
1
From: Cédric Le Goater <clg@kaod.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
On Macronix chips, two bytes can be written to the WRSR. The first byte will
3
Mark these as non-streaming instructions, which should trap
4
configure the status register and the second the configuration
4
if full a64 support is not enabled in streaming mode.
5
register. It is important to save the configuration value as it
6
contains the dummy cycle setting when using dual or quad IO mode.
7
5
8
Signed-off-by: Cédric Le Goater <clg@kaod.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Acked-by: Alistair Francis <alistair.francis@wdc.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20220708151540.18136-10-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
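Guest-visible sequence, sketched for illustration only; spi_send() and
the cs_* helpers are made-up names, the opcodes are the standard SPI
NOR ones:

    cs_assert();
    spi_send(0x06);            /* WREN: writes must be enabled first */
    cs_deassert();

    cs_assert();
    spi_send(0x01);            /* WRSR */
    spi_send(status_byte);     /* bit 6 is quad-enable */
    spi_send(config_byte);     /* dummy-cycle and 4-byte-address bits */
    cs_deassert();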
12
hw/block/m25p80.c | 1 +
11
target/arm/sme-fa64.decode | 1 -
13
1 file changed, 1 insertion(+)
12
target/arm/translate-sve.c | 12 ++++++------
13
2 files changed, 6 insertions(+), 7 deletions(-)
14
14
15
diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
15
diff --git a/target/arm/sme-fa64.decode b/target/arm/sme-fa64.decode
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/block/m25p80.c
17
--- a/target/arm/sme-fa64.decode
18
+++ b/hw/block/m25p80.c
18
+++ b/target/arm/sme-fa64.decode
19
@@ -XXX,XX +XXX,XX @@ static void complete_collecting_data(Flash *s)
19
@@ -XXX,XX +XXX,XX @@ FAIL 0001 1110 0111 1110 0000 00-- ---- ---- # FJCVTZS
20
case MAN_MACRONIX:
20
# --11 1100 --1- ---- ---- ---- ---- --10 # Load/store FP register (register offset)
21
s->quad_enable = extract32(s->data[0], 6, 1);
21
# --11 1101 ---- ---- ---- ---- ---- ---- # Load/store FP register (scaled imm)
22
if (s->len > 1) {
22
23
+ s->volatile_cfg = s->data[1];
23
-FAIL 0100 0101 --0- ---- 1001 10-- ---- ---- # SMMLA, UMMLA, USMMLA
24
s->four_bytes_address_mode = extract32(s->data[1], 5, 1);
24
FAIL 0100 0101 --1- ---- 1--- ---- ---- ---- # SVE2 string/histo/crypto instructions
25
}
25
FAIL 1000 010- -00- ---- 10-- ---- ---- ---- # SVE2 32-bit gather NT load (vector+scalar)
26
break;
26
FAIL 1000 010- -00- ---- 111- ---- ---- ---- # SVE 32-bit gather prefetch (vector+imm)
27
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/translate-sve.c
30
+++ b/target/arm/translate-sve.c
31
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(FMLALT_zzxw, aa64_sve2, do_FMLAL_zzxw, a, false, true)
32
TRANS_FEAT(FMLSLB_zzxw, aa64_sve2, do_FMLAL_zzxw, a, true, false)
33
TRANS_FEAT(FMLSLT_zzxw, aa64_sve2, do_FMLAL_zzxw, a, true, true)
34
35
-TRANS_FEAT(SMMLA, aa64_sve_i8mm, gen_gvec_ool_arg_zzzz,
36
- gen_helper_gvec_smmla_b, a, 0)
37
-TRANS_FEAT(USMMLA, aa64_sve_i8mm, gen_gvec_ool_arg_zzzz,
38
- gen_helper_gvec_usmmla_b, a, 0)
39
-TRANS_FEAT(UMMLA, aa64_sve_i8mm, gen_gvec_ool_arg_zzzz,
40
- gen_helper_gvec_ummla_b, a, 0)
41
+TRANS_FEAT_NONSTREAMING(SMMLA, aa64_sve_i8mm, gen_gvec_ool_arg_zzzz,
42
+ gen_helper_gvec_smmla_b, a, 0)
43
+TRANS_FEAT_NONSTREAMING(USMMLA, aa64_sve_i8mm, gen_gvec_ool_arg_zzzz,
44
+ gen_helper_gvec_usmmla_b, a, 0)
45
+TRANS_FEAT_NONSTREAMING(UMMLA, aa64_sve_i8mm, gen_gvec_ool_arg_zzzz,
46
+ gen_helper_gvec_ummla_b, a, 0)
47
48
TRANS_FEAT(BFDOT_zzzz, aa64_sve_bf16, gen_gvec_ool_arg_zzzz,
49
gen_helper_gvec_bfdot, a, 0)
27
--
50
--
28
2.17.1
51
2.25.1
29
30
1
Now that we have stn_p() and ldn_p(), we can use them in various
1
From: Richard Henderson <richard.henderson@linaro.org>
2
functions in exec.c that used to have their own switch-on-size code.
3
2
3
Mark these as non-streaming instructions, which should trap
4
if full a64 support is not enabled in streaming mode.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20220708151540.18136-11-richard.henderson@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180611171007.4165-4-peter.maydell@linaro.org
8
---
10
---
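For illustration, taken from the hunk below: the per-size switch in
the MMIO read path collapses to

    result |= memory_region_dispatch_read(mr, addr1, &val, l, attrs);
    stn_p(buf, l, val);    /* target-endian store of l bytes */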
9
exec.c | 112 +++++----------------------------------------------------
11
target/arm/sme-fa64.decode | 1 -
10
1 file changed, 8 insertions(+), 104 deletions(-)
12
target/arm/translate-sve.c | 35 ++++++++++++++++++-----------------
13
2 files changed, 18 insertions(+), 18 deletions(-)
11
14
12
diff --git a/exec.c b/exec.c
15
diff --git a/target/arm/sme-fa64.decode b/target/arm/sme-fa64.decode
13
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
14
--- a/exec.c
17
--- a/target/arm/sme-fa64.decode
15
+++ b/exec.c
18
+++ b/target/arm/sme-fa64.decode
16
@@ -XXX,XX +XXX,XX @@ static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
19
@@ -XXX,XX +XXX,XX @@ FAIL 0001 1110 0111 1110 0000 00-- ---- ---- # FJCVTZS
17
memory_notdirty_write_prepare(&ndi, current_cpu, current_cpu->mem_io_vaddr,
20
# --11 1100 --1- ---- ---- ---- ---- --10 # Load/store FP register (register offset)
18
ram_addr, size);
21
# --11 1101 ---- ---- ---- ---- ---- ---- # Load/store FP register (scaled imm)
19
22
20
- switch (size) {
23
-FAIL 0100 0101 --1- ---- 1--- ---- ---- ---- # SVE2 string/histo/crypto instructions
21
- case 1:
24
FAIL 1000 010- -00- ---- 10-- ---- ---- ---- # SVE2 32-bit gather NT load (vector+scalar)
22
- stb_p(qemu_map_ram_ptr(NULL, ram_addr), val);
25
FAIL 1000 010- -00- ---- 111- ---- ---- ---- # SVE 32-bit gather prefetch (vector+imm)
23
- break;
26
FAIL 1000 0100 0-1- ---- 0--- ---- ---- ---- # SVE 32-bit gather prefetch (scalar+vector)
24
- case 2:
27
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
25
- stw_p(qemu_map_ram_ptr(NULL, ram_addr), val);
28
index XXXXXXX..XXXXXXX 100644
26
- break;
29
--- a/target/arm/translate-sve.c
27
- case 4:
30
+++ b/target/arm/translate-sve.c
28
- stl_p(qemu_map_ram_ptr(NULL, ram_addr), val);
31
@@ -XXX,XX +XXX,XX @@ DO_SVE2_ZZZ_NARROW(RSUBHNT, rsubhnt)
29
- break;
32
static gen_helper_gvec_flags_4 * const match_fns[4] = {
30
- case 8:
33
gen_helper_sve2_match_ppzz_b, gen_helper_sve2_match_ppzz_h, NULL, NULL
31
- stq_p(qemu_map_ram_ptr(NULL, ram_addr), val);
34
};
32
- break;
35
-TRANS_FEAT(MATCH, aa64_sve2, do_ppzz_flags, a, match_fns[a->esz])
33
- default:
36
+TRANS_FEAT_NONSTREAMING(MATCH, aa64_sve2, do_ppzz_flags, a, match_fns[a->esz])
34
- abort();
37
35
- }
38
static gen_helper_gvec_flags_4 * const nmatch_fns[4] = {
36
+ stn_p(qemu_map_ram_ptr(NULL, ram_addr), size, val);
39
gen_helper_sve2_nmatch_ppzz_b, gen_helper_sve2_nmatch_ppzz_h, NULL, NULL
37
memory_notdirty_write_complete(&ndi);
40
};
38
}
41
-TRANS_FEAT(NMATCH, aa64_sve2, do_ppzz_flags, a, nmatch_fns[a->esz])
39
42
+TRANS_FEAT_NONSTREAMING(NMATCH, aa64_sve2, do_ppzz_flags, a, nmatch_fns[a->esz])
40
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
43
41
if (res) {
44
static gen_helper_gvec_4 * const histcnt_fns[4] = {
42
return res;
45
NULL, NULL, gen_helper_sve2_histcnt_s, gen_helper_sve2_histcnt_d
43
}
46
};
44
- switch (len) {
47
-TRANS_FEAT(HISTCNT, aa64_sve2, gen_gvec_ool_arg_zpzz,
45
- case 1:
48
- histcnt_fns[a->esz], a, 0)
46
- *data = ldub_p(buf);
49
+TRANS_FEAT_NONSTREAMING(HISTCNT, aa64_sve2, gen_gvec_ool_arg_zpzz,
47
- return MEMTX_OK;
50
+ histcnt_fns[a->esz], a, 0)
48
- case 2:
51
49
- *data = lduw_p(buf);
52
-TRANS_FEAT(HISTSEG, aa64_sve2, gen_gvec_ool_arg_zzz,
50
- return MEMTX_OK;
53
- a->esz == 0 ? gen_helper_sve2_histseg : NULL, a, 0)
51
- case 4:
54
+TRANS_FEAT_NONSTREAMING(HISTSEG, aa64_sve2, gen_gvec_ool_arg_zzz,
52
- *data = (uint32_t)ldl_p(buf);
55
+ a->esz == 0 ? gen_helper_sve2_histseg : NULL, a, 0)
53
- return MEMTX_OK;
56
54
- case 8:
57
DO_ZPZZ_FP(FADDP, aa64_sve2, sve2_faddp_zpzz)
55
- *data = ldq_p(buf);
58
DO_ZPZZ_FP(FMAXNMP, aa64_sve2, sve2_fmaxnmp_zpzz)
56
- return MEMTX_OK;
59
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(SQRDCMLAH_zzzz, aa64_sve2, gen_gvec_ool_zzzz,
57
- default:
60
TRANS_FEAT(USDOT_zzzz, aa64_sve_i8mm, gen_gvec_ool_arg_zzzz,
58
- abort();
61
a->esz == 2 ? gen_helper_gvec_usdot_b : NULL, a, 0)
59
- }
62
60
+ *data = ldn_p(buf, len);
63
-TRANS_FEAT(AESMC, aa64_sve2_aes, gen_gvec_ool_zz,
61
+ return MEMTX_OK;
64
- gen_helper_crypto_aesmc, a->rd, a->rd, a->decrypt)
62
}
65
+TRANS_FEAT_NONSTREAMING(AESMC, aa64_sve2_aes, gen_gvec_ool_zz,
63
66
+ gen_helper_crypto_aesmc, a->rd, a->rd, a->decrypt)
64
static MemTxResult subpage_write(void *opaque, hwaddr addr,
67
65
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_write(void *opaque, hwaddr addr,
68
-TRANS_FEAT(AESE, aa64_sve2_aes, gen_gvec_ool_arg_zzz,
66
" value %"PRIx64"\n",
69
- gen_helper_crypto_aese, a, false)
67
__func__, subpage, len, addr, value);
70
-TRANS_FEAT(AESD, aa64_sve2_aes, gen_gvec_ool_arg_zzz,
68
#endif
71
- gen_helper_crypto_aese, a, true)
69
- switch (len) {
72
+TRANS_FEAT_NONSTREAMING(AESE, aa64_sve2_aes, gen_gvec_ool_arg_zzz,
70
- case 1:
73
+ gen_helper_crypto_aese, a, false)
71
- stb_p(buf, value);
74
+TRANS_FEAT_NONSTREAMING(AESD, aa64_sve2_aes, gen_gvec_ool_arg_zzz,
72
- break;
75
+ gen_helper_crypto_aese, a, true)
73
- case 2:
76
74
- stw_p(buf, value);
77
-TRANS_FEAT(SM4E, aa64_sve2_sm4, gen_gvec_ool_arg_zzz,
75
- break;
78
- gen_helper_crypto_sm4e, a, 0)
76
- case 4:
79
-TRANS_FEAT(SM4EKEY, aa64_sve2_sm4, gen_gvec_ool_arg_zzz,
77
- stl_p(buf, value);
80
- gen_helper_crypto_sm4ekey, a, 0)
78
- break;
81
+TRANS_FEAT_NONSTREAMING(SM4E, aa64_sve2_sm4, gen_gvec_ool_arg_zzz,
79
- case 8:
82
+ gen_helper_crypto_sm4e, a, 0)
80
- stq_p(buf, value);
83
+TRANS_FEAT_NONSTREAMING(SM4EKEY, aa64_sve2_sm4, gen_gvec_ool_arg_zzz,
81
- break;
84
+ gen_helper_crypto_sm4ekey, a, 0)
82
- default:
85
83
- abort();
86
-TRANS_FEAT(RAX1, aa64_sve2_sha3, gen_gvec_fn_arg_zzz, gen_gvec_rax1, a)
84
- }
87
+TRANS_FEAT_NONSTREAMING(RAX1, aa64_sve2_sha3, gen_gvec_fn_arg_zzz,
85
+ stn_p(buf, len, value);
88
+ gen_gvec_rax1, a)
86
return flatview_write(subpage->fv, addr + subpage->base, attrs, buf, len);
89
87
}
90
TRANS_FEAT(FCVTNT_sh, aa64_sve2, gen_gvec_fpst_arg_zpz,
88
91
gen_helper_sve2_fcvtnt_sh, a, 0, FPST_FPCR)
89
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
90
l = memory_access_size(mr, l, addr1);
91
/* XXX: could force current_cpu to NULL to avoid
92
potential bugs */
93
- switch (l) {
94
- case 8:
95
- /* 64 bit write access */
96
- val = ldq_p(buf);
97
- result |= memory_region_dispatch_write(mr, addr1, val, 8,
98
- attrs);
99
- break;
100
- case 4:
101
- /* 32 bit write access */
102
- val = (uint32_t)ldl_p(buf);
103
- result |= memory_region_dispatch_write(mr, addr1, val, 4,
104
- attrs);
105
- break;
106
- case 2:
107
- /* 16 bit write access */
108
- val = lduw_p(buf);
109
- result |= memory_region_dispatch_write(mr, addr1, val, 2,
110
- attrs);
111
- break;
112
- case 1:
113
- /* 8 bit write access */
114
- val = ldub_p(buf);
115
- result |= memory_region_dispatch_write(mr, addr1, val, 1,
116
- attrs);
117
- break;
118
- default:
119
- abort();
120
- }
121
+ val = ldn_p(buf, l);
122
+ result |= memory_region_dispatch_write(mr, addr1, val, l, attrs);
123
} else {
124
/* RAM case */
125
ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
126
@@ -XXX,XX +XXX,XX @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
127
/* I/O case */
128
release_lock |= prepare_mmio_access(mr);
129
l = memory_access_size(mr, l, addr1);
130
- switch (l) {
131
- case 8:
132
- /* 64 bit read access */
133
- result |= memory_region_dispatch_read(mr, addr1, &val, 8,
134
- attrs);
135
- stq_p(buf, val);
136
- break;
137
- case 4:
138
- /* 32 bit read access */
139
- result |= memory_region_dispatch_read(mr, addr1, &val, 4,
140
- attrs);
141
- stl_p(buf, val);
142
- break;
143
- case 2:
144
- /* 16 bit read access */
145
- result |= memory_region_dispatch_read(mr, addr1, &val, 2,
146
- attrs);
147
- stw_p(buf, val);
148
- break;
149
- case 1:
150
- /* 8 bit read access */
151
- result |= memory_region_dispatch_read(mr, addr1, &val, 1,
152
- attrs);
153
- stb_p(buf, val);
154
- break;
155
- default:
156
- abort();
157
- }
158
+ result |= memory_region_dispatch_read(mr, addr1, &val, l, attrs);
159
+ stn_p(buf, l, val);
160
} else {
161
/* RAM case */
162
ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
163
--
92
--
164
2.17.1
93
2.25.1
165
166
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
Mark these as non-streaming instructions, which should trap
4
if full a64 support is not enabled in streaming mode.
2
5
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180613015641.5667-9-richard.henderson@linaro.org
8
Message-id: 20220708151540.18136-12-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
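To make the new swap helpers concrete, a few worked values
(illustration only, not from the patch):

    hswap32(0xAAAABBBB)            == 0xBBBBAAAA
    hswap64(0xAAAABBBBCCCCDDDDull) == 0xDDDDCCCCBBBBAAAAull
    wswap64(0xAAAABBBBCCCCDDDDull) == 0xCCCCDDDDAAAABBBBull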
8
target/arm/helper-sve.h | 14 +++++++++++++
11
target/arm/sme-fa64.decode | 9 ---------
9
target/arm/sve_helper.c | 41 +++++++++++++++++++++++++++++++-------
12
target/arm/translate-sve.c | 6 ++++++
10
target/arm/translate-sve.c | 38 +++++++++++++++++++++++++++++++++++
13
2 files changed, 6 insertions(+), 9 deletions(-)
11
target/arm/sve.decode | 7 +++++++
12
4 files changed, 93 insertions(+), 7 deletions(-)
13
14
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
diff --git a/target/arm/sme-fa64.decode b/target/arm/sme-fa64.decode
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
--- a/target/arm/sme-fa64.decode
17
+++ b/target/arm/helper-sve.h
18
+++ b/target/arm/sme-fa64.decode
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_compact_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
@@ -XXX,XX +XXX,XX @@ FAIL 0001 1110 0111 1110 0000 00-- ---- ---- # FJCVTZS
19
20
# --11 1100 --1- ---- ---- ---- ---- --10 # Load/store FP register (register offset)
20
DEF_HELPER_FLAGS_2(sve_last_active_element, TCG_CALL_NO_RWG, s32, ptr, i32)
21
# --11 1101 ---- ---- ---- ---- ---- ---- # Load/store FP register (scaled imm)
21
22
22
+DEF_HELPER_FLAGS_4(sve_revb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
-FAIL 1000 010- -00- ---- 10-- ---- ---- ---- # SVE2 32-bit gather NT load (vector+scalar)
23
+DEF_HELPER_FLAGS_4(sve_revb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
FAIL 1000 010- -00- ---- 111- ---- ---- ---- # SVE 32-bit gather prefetch (vector+imm)
24
+DEF_HELPER_FLAGS_4(sve_revb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
FAIL 1000 0100 0-1- ---- 0--- ---- ---- ---- # SVE 32-bit gather prefetch (scalar+vector)
25
+
26
-FAIL 1000 010- -01- ---- 1--- ---- ---- ---- # SVE 32-bit gather load (vector+imm)
26
+DEF_HELPER_FLAGS_4(sve_revh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
-FAIL 1000 0100 0-0- ---- 0--- ---- ---- ---- # SVE 32-bit gather load byte (scalar+vector)
27
+DEF_HELPER_FLAGS_4(sve_revh_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
-FAIL 1000 0100 1--- ---- 0--- ---- ---- ---- # SVE 32-bit gather load half (scalar+vector)
28
+
29
-FAIL 1000 0101 0--- ---- 0--- ---- ---- ---- # SVE 32-bit gather load word (scalar+vector)
29
+DEF_HELPER_FLAGS_4(sve_revw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
FAIL 1010 010- ---- ---- 011- ---- ---- ---- # SVE contiguous FF load (scalar+scalar)
30
+
31
FAIL 1010 010- ---1 ---- 101- ---- ---- ---- # SVE contiguous NF load (scalar+imm)
31
+DEF_HELPER_FLAGS_4(sve_rbit_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
FAIL 1010 010- -01- ---- 000- ---- ---- ---- # SVE load & replicate 32 bytes (scalar+scalar)
32
+DEF_HELPER_FLAGS_4(sve_rbit_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
FAIL 1010 010- -010 ---- 001- ---- ---- ---- # SVE load & replicate 32 bytes (scalar+imm)
33
+DEF_HELPER_FLAGS_4(sve_rbit_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
FAIL 1100 010- ---- ---- ---- ---- ---- ---- # SVE 64-bit gather load/prefetch
34
+DEF_HELPER_FLAGS_4(sve_rbit_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
35
-FAIL 1110 010- -00- ---- 001- ---- ---- ---- # SVE2 64-bit scatter NT store (vector+scalar)
35
+
36
-FAIL 1110 010- -10- ---- 001- ---- ---- ---- # SVE2 32-bit scatter NT store (vector+scalar)
36
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
37
-FAIL 1110 010- ---- ---- 1-0- ---- ---- ---- # SVE scatter store (scalar+32-bit vector)
37
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
38
-FAIL 1110 010- ---- ---- 101- ---- ---- ---- # SVE scatter store (misc)
38
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
39
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/sve_helper.c
42
+++ b/target/arm/sve_helper.c
43
@@ -XXX,XX +XXX,XX @@ static inline uint64_t expand_pred_s(uint8_t byte)
44
return word[byte & 0x11];
45
}
46
47
+/* Swap 16-bit words within a 32-bit word. */
48
+static inline uint32_t hswap32(uint32_t h)
49
+{
50
+ return rol32(h, 16);
51
+}
52
+
53
+/* Swap 16-bit words within a 64-bit word. */
54
+static inline uint64_t hswap64(uint64_t h)
55
+{
56
+ uint64_t m = 0x0000ffff0000ffffull;
57
+ h = rol64(h, 32);
58
+ return ((h & m) << 16) | ((h >> 16) & m);
59
+}
60
+
61
+/* Swap 32-bit words within a 64-bit word. */
62
+static inline uint64_t wswap64(uint64_t h)
63
+{
64
+ return rol64(h, 32);
65
+}
66
+
67
#define LOGICAL_PPPP(NAME, FUNC) \
68
void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
69
{ \
70
@@ -XXX,XX +XXX,XX @@ DO_ZPZ(sve_neg_h, uint16_t, H1_2, DO_NEG)
71
DO_ZPZ(sve_neg_s, uint32_t, H1_4, DO_NEG)
72
DO_ZPZ_D(sve_neg_d, uint64_t, DO_NEG)
73
74
+DO_ZPZ(sve_revb_h, uint16_t, H1_2, bswap16)
75
+DO_ZPZ(sve_revb_s, uint32_t, H1_4, bswap32)
76
+DO_ZPZ_D(sve_revb_d, uint64_t, bswap64)
77
+
78
+DO_ZPZ(sve_revh_s, uint32_t, H1_4, hswap32)
79
+DO_ZPZ_D(sve_revh_d, uint64_t, hswap64)
80
+
81
+DO_ZPZ_D(sve_revw_d, uint64_t, wswap64)
82
+
83
+DO_ZPZ(sve_rbit_b, uint8_t, H1, revbit8)
84
+DO_ZPZ(sve_rbit_h, uint16_t, H1_2, revbit16)
85
+DO_ZPZ(sve_rbit_s, uint32_t, H1_4, revbit32)
86
+DO_ZPZ_D(sve_rbit_d, uint64_t, revbit64)
87
+
88
/* Three-operand expander, unpredicated, in which the third operand is "wide".
89
*/
90
#define DO_ZZW(NAME, TYPE, TYPEW, H, OP) \
91
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_rev_b)(void *vd, void *vn, uint32_t desc)
92
}
93
}
94
95
-static inline uint64_t hswap64(uint64_t h)
96
-{
97
- uint64_t m = 0x0000ffff0000ffffull;
98
- h = rol64(h, 32);
99
- return ((h & m) << 16) | ((h >> 16) & m);
100
-}
101
-
102
void HELPER(sve_rev_h)(void *vd, void *vn, uint32_t desc)
103
{
104
intptr_t i, j, opr_sz = simd_oprsz(desc);
105
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
39
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
106
index XXXXXXX..XXXXXXX 100644
40
index XXXXXXX..XXXXXXX 100644
107
--- a/target/arm/translate-sve.c
41
--- a/target/arm/translate-sve.c
108
+++ b/target/arm/translate-sve.c
42
+++ b/target/arm/translate-sve.c
109
@@ -XXX,XX +XXX,XX @@ static bool trans_CPY_m_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
43
@@ -XXX,XX +XXX,XX @@ static bool trans_LD1_zprz(DisasContext *s, arg_LD1_zprz *a)
110
return true;
44
if (!dc_isar_feature(aa64_sve, s)) {
111
}
45
return false;
112
46
}
113
+static bool trans_REVB(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
47
+ s->is_nonstreaming = true;
114
+{
48
if (!sve_access_check(s)) {
115
+ static gen_helper_gvec_3 * const fns[4] = {
49
return true;
116
+ NULL,
50
}
117
+ gen_helper_sve_revb_h,
51
@@ -XXX,XX +XXX,XX @@ static bool trans_LD1_zpiz(DisasContext *s, arg_LD1_zpiz *a)
118
+ gen_helper_sve_revb_s,
52
if (!dc_isar_feature(aa64_sve, s)) {
119
+ gen_helper_sve_revb_d,
53
return false;
120
+ };
54
}
121
+ return do_zpz_ool(s, a, fns[a->esz]);
55
+ s->is_nonstreaming = true;
122
+}
56
if (!sve_access_check(s)) {
123
+
57
return true;
124
+static bool trans_REVH(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
58
}
125
+{
59
@@ -XXX,XX +XXX,XX @@ static bool trans_LDNT1_zprz(DisasContext *s, arg_LD1_zprz *a)
126
+ static gen_helper_gvec_3 * const fns[4] = {
60
if (!dc_isar_feature(aa64_sve2, s)) {
127
+ NULL,
61
return false;
128
+ NULL,
62
}
129
+ gen_helper_sve_revh_s,
63
+ s->is_nonstreaming = true;
130
+ gen_helper_sve_revh_d,
64
if (!sve_access_check(s)) {
131
+ };
65
return true;
132
+ return do_zpz_ool(s, a, fns[a->esz]);
66
}
133
+}
67
@@ -XXX,XX +XXX,XX @@ static bool trans_ST1_zprz(DisasContext *s, arg_ST1_zprz *a)
134
+
68
if (!dc_isar_feature(aa64_sve, s)) {
135
+static bool trans_REVW(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
69
return false;
136
+{
70
}
137
+ return do_zpz_ool(s, a, a->esz == 3 ? gen_helper_sve_revw_d : NULL);
71
+ s->is_nonstreaming = true;
138
+}
72
if (!sve_access_check(s)) {
139
+
73
return true;
140
+static bool trans_RBIT(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
74
}
141
+{
75
@@ -XXX,XX +XXX,XX @@ static bool trans_ST1_zpiz(DisasContext *s, arg_ST1_zpiz *a)
142
+ static gen_helper_gvec_3 * const fns[4] = {
76
if (!dc_isar_feature(aa64_sve, s)) {
143
+ gen_helper_sve_rbit_b,
77
return false;
144
+ gen_helper_sve_rbit_h,
78
}
145
+ gen_helper_sve_rbit_s,
79
+ s->is_nonstreaming = true;
146
+ gen_helper_sve_rbit_d,
80
if (!sve_access_check(s)) {
147
+ };
81
return true;
148
+ return do_zpz_ool(s, a, fns[a->esz]);
82
}
149
+}
83
@@ -XXX,XX +XXX,XX @@ static bool trans_STNT1_zprz(DisasContext *s, arg_ST1_zprz *a)
150
+
84
if (!dc_isar_feature(aa64_sve2, s)) {
151
/*
85
return false;
152
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
86
}
153
*/
87
+ s->is_nonstreaming = true;
154
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
88
if (!sve_access_check(s)) {
155
index XXXXXXX..XXXXXXX 100644
89
return true;
156
--- a/target/arm/sve.decode
90
}
157
+++ b/target/arm/sve.decode
158
@@ -XXX,XX +XXX,XX @@ CPY_m_v 00000101 .. 100000 100 ... ..... ..... @rd_pg_rn
159
# SVE copy element from general register to vector (predicated)
160
CPY_m_r 00000101 .. 101000 101 ... ..... ..... @rd_pg_rn
161
162
+# SVE reverse within elements
163
+# Note esz >= operation size
164
+REVB 00000101 .. 1001 00 100 ... ..... ..... @rd_pg_rn
165
+REVH 00000101 .. 1001 01 100 ... ..... ..... @rd_pg_rn
166
+REVW 00000101 .. 1001 10 100 ... ..... ..... @rd_pg_rn
167
+RBIT 00000101 .. 1001 11 100 ... ..... ..... @rd_pg_rn
168
+
169
### SVE Predicate Logical Operations Group
170
171
# SVE predicate logical operations
172
--
91
--
173
2.17.1
92
2.25.1
174
175
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
Mark these as non-streaming instructions, which should trap if full
4
a64 support is not enabled in streaming mode. In this case, introduce
5
PRF_ns (prefetch non-streaming) to handle the checks.
2
6
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180613015641.5667-17-richard.henderson@linaro.org
9
Message-id: 20220708151540.18136-13-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
11
---
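Illustration only: dup_const() replicates one element across a 64-bit
constant, which is why FDUP and DUP_i expand to a single gvec dup:

    dup_const(MO_8,  0x5a)   == 0x5a5a5a5a5a5a5a5aull
    dup_const(MO_16, 0x1234) == 0x1234123412341234ull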
8
target/arm/translate-sve.c | 37 +++++++++++++++++++++++++++++++++++++
12
target/arm/sme-fa64.decode | 3 ---
9
target/arm/sve.decode | 8 ++++++++
13
target/arm/sve.decode | 10 +++++-----
10
2 files changed, 45 insertions(+)
14
target/arm/translate-sve.c | 11 +++++++++++
15
3 files changed, 16 insertions(+), 8 deletions(-)
11
16
17
diff --git a/target/arm/sme-fa64.decode b/target/arm/sme-fa64.decode
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/sme-fa64.decode
20
+++ b/target/arm/sme-fa64.decode
21
@@ -XXX,XX +XXX,XX @@ FAIL 0001 1110 0111 1110 0000 00-- ---- ---- # FJCVTZS
22
# --11 1100 --1- ---- ---- ---- ---- --10 # Load/store FP register (register offset)
23
# --11 1101 ---- ---- ---- ---- ---- ---- # Load/store FP register (scaled imm)
24
25
-FAIL 1000 010- -00- ---- 111- ---- ---- ---- # SVE 32-bit gather prefetch (vector+imm)
26
-FAIL 1000 0100 0-1- ---- 0--- ---- ---- ---- # SVE 32-bit gather prefetch (scalar+vector)
27
FAIL 1010 010- ---- ---- 011- ---- ---- ---- # SVE contiguous FF load (scalar+scalar)
28
FAIL 1010 010- ---1 ---- 101- ---- ---- ---- # SVE contiguous NF load (scalar+imm)
29
FAIL 1010 010- -01- ---- 000- ---- ---- ---- # SVE load & replicate 32 bytes (scalar+scalar)
30
FAIL 1010 010- -010 ---- 001- ---- ---- ---- # SVE load & replicate 32 bytes (scalar+imm)
31
-FAIL 1100 010- ---- ---- ---- ---- ---- ---- # SVE 64-bit gather load/prefetch
32
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/sve.decode
35
+++ b/target/arm/sve.decode
36
@@ -XXX,XX +XXX,XX @@ LD1RO_zpri 1010010 .. 01 0.... 001 ... ..... ..... \
37
@rpri_load_msz nreg=0
38
39
# SVE 32-bit gather prefetch (scalar plus 32-bit scaled offsets)
40
-PRF 1000010 00 -1 ----- 0-- --- ----- 0 ----
41
+PRF_ns 1000010 00 -1 ----- 0-- --- ----- 0 ----
42
43
# SVE 32-bit gather prefetch (vector plus immediate)
44
-PRF 1000010 -- 00 ----- 111 --- ----- 0 ----
45
+PRF_ns 1000010 -- 00 ----- 111 --- ----- 0 ----
46
47
# SVE contiguous prefetch (scalar plus immediate)
48
PRF 1000010 11 1- ----- 0-- --- ----- 0 ----
49
@@ -XXX,XX +XXX,XX @@ LD1_zpiz 1100010 .. 01 ..... 1.. ... ..... ..... \
50
@rpri_g_load esz=3
51
52
# SVE 64-bit gather prefetch (scalar plus 64-bit scaled offsets)
53
-PRF 1100010 00 11 ----- 1-- --- ----- 0 ----
54
+PRF_ns 1100010 00 11 ----- 1-- --- ----- 0 ----
55
56
# SVE 64-bit gather prefetch (scalar plus unpacked 32-bit scaled offsets)
57
-PRF 1100010 00 -1 ----- 0-- --- ----- 0 ----
58
+PRF_ns 1100010 00 -1 ----- 0-- --- ----- 0 ----
59
60
# SVE 64-bit gather prefetch (vector plus immediate)
61
-PRF 1100010 -- 00 ----- 111 --- ----- 0 ----
62
+PRF_ns 1100010 -- 00 ----- 111 --- ----- 0 ----
63
64
### SVE Memory Store Group
65
12
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
66
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
13
index XXXXXXX..XXXXXXX 100644
67
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-sve.c
68
--- a/target/arm/translate-sve.c
15
+++ b/target/arm/translate-sve.c
69
+++ b/target/arm/translate-sve.c
16
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
70
@@ -XXX,XX +XXX,XX @@ static bool trans_PRF_rr(DisasContext *s, arg_PRF_rr *a)
17
return true;
71
return true;
18
}
72
}
19
73
20
+/*
74
+static bool trans_PRF_ns(DisasContext *s, arg_PRF_ns *a)
21
+ *** SVE Integer Wide Immediate - Unpredicated Group
22
+ */
23
+
24
+static bool trans_FDUP(DisasContext *s, arg_FDUP *a, uint32_t insn)
25
+{
75
+{
26
+ if (a->esz == 0) {
76
+ if (!dc_isar_feature(aa64_sve, s)) {
27
+ return false;
77
+ return false;
28
+ }
78
+ }
29
+ if (sve_access_check(s)) {
79
+ /* Prefetch is a nop within QEMU. */
30
+ unsigned vsz = vec_full_reg_size(s);
80
+ s->is_nonstreaming = true;
31
+ int dofs = vec_full_reg_offset(s, a->rd);
81
+ (void)sve_access_check(s);
32
+ uint64_t imm;
33
+
34
+ /* Decode the VFP immediate. */
35
+ imm = vfp_expand_imm(a->esz, a->imm);
36
+ imm = dup_const(a->esz, imm);
37
+
38
+ tcg_gen_gvec_dup64i(dofs, vsz, vsz, imm);
39
+ }
40
+ return true;
41
+}
42
+
43
+static bool trans_DUP_i(DisasContext *s, arg_DUP_i *a, uint32_t insn)
44
+{
45
+ if (a->esz == 0 && extract32(insn, 13, 1)) {
46
+ return false;
47
+ }
48
+ if (sve_access_check(s)) {
49
+ unsigned vsz = vec_full_reg_size(s);
50
+ int dofs = vec_full_reg_offset(s, a->rd);
51
+
52
+ tcg_gen_gvec_dup64i(dofs, vsz, vsz, dup_const(a->esz, a->imm));
53
+ }
54
+ return true;
82
+ return true;
55
+}
83
+}
56
+
84
+
57
/*
85
/*
58
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
86
* Move Prefix
59
*/
87
*
60
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/sve.decode
63
+++ b/target/arm/sve.decode
64
@@ -XXX,XX +XXX,XX @@ CTERM 00100101 1 sf:1 1 rm:5 001000 rn:5 ne:1 0000
65
# SVE integer compare scalar count and limit
66
WHILE 00100101 esz:2 1 rm:5 000 sf:1 u:1 1 rn:5 eq:1 rd:4
67
68
+### SVE Integer Wide Immediate - Unpredicated Group
69
+
70
+# SVE broadcast floating-point immediate (unpredicated)
71
+FDUP 00100101 esz:2 111 00 1110 imm:8 rd:5
72
+
73
+# SVE broadcast integer immediate (unpredicated)
74
+DUP_i 00100101 esz:2 111 00 011 . ........ rd:5 imm=%sh8_i8s
75
+
76
### SVE Memory - 32-bit Gather and Unsized Contiguous Group
77
78
# SVE load predicate register
79
--
88
--
80
2.17.1
89
2.25.1
81
82
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
Mark these as non-streaming instructions, which should trap
4
if full a64 support is not enabled in streaming mode.
2
5
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180613015641.5667-8-richard.henderson@linaro.org
8
Message-id: 20220708151540.18136-14-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/translate-sve.c | 19 +++++++++++++++++++
11
target/arm/sme-fa64.decode | 2 --
9
target/arm/sve.decode | 6 ++++++
12
target/arm/translate-sve.c | 2 ++
10
2 files changed, 25 insertions(+)
13
2 files changed, 2 insertions(+), 2 deletions(-)
11
14
15
diff --git a/target/arm/sme-fa64.decode b/target/arm/sme-fa64.decode
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/sme-fa64.decode
18
+++ b/target/arm/sme-fa64.decode
19
@@ -XXX,XX +XXX,XX @@ FAIL 0001 1110 0111 1110 0000 00-- ---- ---- # FJCVTZS
20
# --11 1100 --1- ---- ---- ---- ---- --10 # Load/store FP register (register offset)
21
# --11 1101 ---- ---- ---- ---- ---- ---- # Load/store FP register (scaled imm)
22
23
-FAIL 1010 010- ---- ---- 011- ---- ---- ---- # SVE contiguous FF load (scalar+scalar)
24
-FAIL 1010 010- ---1 ---- 101- ---- ---- ---- # SVE contiguous NF load (scalar+imm)
25
FAIL 1010 010- -01- ---- 000- ---- ---- ---- # SVE load & replicate 32 bytes (scalar+scalar)
26
FAIL 1010 010- -010 ---- 001- ---- ---- ---- # SVE load & replicate 32 bytes (scalar+imm)
12
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
27
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
13
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-sve.c
29
--- a/target/arm/translate-sve.c
15
+++ b/target/arm/translate-sve.c
30
+++ b/target/arm/translate-sve.c
16
@@ -XXX,XX +XXX,XX @@ static bool trans_LASTB_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
31
@@ -XXX,XX +XXX,XX @@ static bool trans_LDFF1_zprr(DisasContext *s, arg_rprr_load *a)
17
return do_last_general(s, a, true);
32
if (!dc_isar_feature(aa64_sve, s)) {
18
}
33
return false;
19
34
}
20
+static bool trans_CPY_m_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
35
+ s->is_nonstreaming = true;
21
+{
36
if (sve_access_check(s)) {
22
+ if (sve_access_check(s)) {
37
TCGv_i64 addr = new_tmp_a64(s);
23
+ do_cpy_m(s, a->esz, a->rd, a->rd, a->pg, cpu_reg_sp(s, a->rn));
38
tcg_gen_shli_i64(addr, cpu_reg(s, a->rm), dtype_msz(a->dtype));
24
+ }
39
@@ -XXX,XX +XXX,XX @@ static bool trans_LDNF1_zpri(DisasContext *s, arg_rpri_load *a)
25
+ return true;
40
if (!dc_isar_feature(aa64_sve, s)) {
26
+}
41
return false;
27
+
42
}
28
+static bool trans_CPY_m_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
43
+ s->is_nonstreaming = true;
29
+{
44
if (sve_access_check(s)) {
30
+ if (sve_access_check(s)) {
45
int vsz = vec_full_reg_size(s);
31
+ int ofs = vec_reg_offset(s, a->rn, 0, a->esz);
46
int elements = vsz >> dtype_esz[a->dtype];
32
+ TCGv_i64 t = load_esz(cpu_env, ofs, a->esz);
33
+ do_cpy_m(s, a->esz, a->rd, a->rd, a->pg, t);
34
+ tcg_temp_free_i64(t);
35
+ }
36
+ return true;
37
+}
38
+
39
/*
40
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
41
*/
42
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/sve.decode
45
+++ b/target/arm/sve.decode
46
@@ -XXX,XX +XXX,XX @@ LASTB_v 00000101 .. 10001 1 100 ... ..... ..... @rd_pg_rn
47
LASTA_r 00000101 .. 10000 0 101 ... ..... ..... @rd_pg_rn
48
LASTB_r 00000101 .. 10000 1 101 ... ..... ..... @rd_pg_rn
49
50
+# SVE copy element from SIMD&FP scalar register
51
+CPY_m_v 00000101 .. 100000 100 ... ..... ..... @rd_pg_rn
52
+
53
+# SVE copy element from general register to vector (predicated)
54
+CPY_m_r 00000101 .. 101000 101 ... ..... ..... @rd_pg_rn
55
+
56
### SVE Predicate Logical Operations Group
57
58
# SVE predicate logical operations
59
--
47
--
60
2.17.1
48
2.25.1
61
62
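The shape of this change repeats across the neighbouring patches: decode records s->is_nonstreaming, and the access check turns that flag plus PSTATE.SM into a trap unless full A64 support (FEAT_SME_FA64) is available in streaming mode. A stripped-down model of that decision (the field and feature names mirror the patches; everything else is simplified):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool pstate_sm;        /* PSTATE.SM: streaming SVE mode active */
        bool is_nonstreaming;  /* set by decode for non-streaming-only insns */
    } Ctx;

    /* Trap a non-streaming instruction executed in streaming mode
     * when FEAT_SME_FA64 is not implemented. */
    static bool access_ok(const Ctx *s, bool have_sme_fa64)
    {
        if (s->is_nonstreaming && s->pstate_sm && !have_sme_fa64) {
            printf("SME trap\n");
            return false;
        }
        return true;
    }

    int main(void)
    {
        Ctx s = { .pstate_sm = true, .is_nonstreaming = true };
        access_ok(&s, false);   /* traps */
        access_ok(&s, true);    /* allowed: FA64 present */
        return 0;
    }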
1
In subpage_read() we perform a load of the data into a local buffer
1
From: Richard Henderson <richard.henderson@linaro.org>
2
which we then access using ldub_p(), lduw_p(), ldl_p() or ldq_p()
3
depending on its size, storing the result into the uint64_t *data.
4
Since ldl_p() returns an 'int', this means that for the 4-byte
5
case we will sign-extend the data, whereas for 1- and 2-byte
6
reads we zero-extend it.
7
2
8
This ought not to matter since the caller will likely ignore values in
3
Mark these as non-streaming instructions, which should trap
9
the high bytes of the data, but add a cast so that we're consistent.
4
if full A64 support is not enabled in streaming mode.
10
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20220708151540.18136-15-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20180611171007.4165-3-peter.maydell@linaro.org
14
---
10
---
15
exec.c | 2 +-
11
target/arm/sme-fa64.decode | 3 ---
16
1 file changed, 1 insertion(+), 1 deletion(-)
12
target/arm/translate-sve.c | 2 ++
13
2 files changed, 2 insertions(+), 3 deletions(-)
17
14
18
diff --git a/exec.c b/exec.c
15
diff --git a/target/arm/sme-fa64.decode b/target/arm/sme-fa64.decode
19
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
20
--- a/exec.c
17
--- a/target/arm/sme-fa64.decode
21
+++ b/exec.c
18
+++ b/target/arm/sme-fa64.decode
22
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
19
@@ -XXX,XX +XXX,XX @@ FAIL 0001 1110 0111 1110 0000 00-- ---- ---- # FJCVTZS
23
*data = lduw_p(buf);
20
# --11 1100 --0- ---- ---- ---- ---- ---- # Load/store FP register (unscaled imm)
24
return MEMTX_OK;
21
# --11 1100 --1- ---- ---- ---- ---- --10 # Load/store FP register (register offset)
25
case 4:
22
# --11 1101 ---- ---- ---- ---- ---- ---- # Load/store FP register (scaled imm)
26
- *data = ldl_p(buf);
23
-
27
+ *data = (uint32_t)ldl_p(buf);
24
-FAIL 1010 010- -01- ---- 000- ---- ---- ---- # SVE load & replicate 32 bytes (scalar+scalar)
28
return MEMTX_OK;
25
-FAIL 1010 010- -010 ---- 001- ---- ---- ---- # SVE load & replicate 32 bytes (scalar+imm)
29
case 8:
26
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
30
*data = ldq_p(buf);
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/translate-sve.c
29
+++ b/target/arm/translate-sve.c
30
@@ -XXX,XX +XXX,XX @@ static bool trans_LD1RO_zprr(DisasContext *s, arg_rprr_load *a)
31
if (a->rm == 31) {
32
return false;
33
}
34
+ s->is_nonstreaming = true;
35
if (sve_access_check(s)) {
36
TCGv_i64 addr = new_tmp_a64(s);
37
tcg_gen_shli_i64(addr, cpu_reg(s, a->rm), dtype_msz(a->dtype));
38
@@ -XXX,XX +XXX,XX @@ static bool trans_LD1RO_zpri(DisasContext *s, arg_rpri_load *a)
39
if (!dc_isar_feature(aa64_sve_f64mm, s)) {
40
return false;
41
}
42
+ s->is_nonstreaming = true;
43
if (sve_access_check(s)) {
44
TCGv_i64 addr = new_tmp_a64(s);
45
tcg_gen_addi_i64(addr, cpu_reg_sp(s, a->rn), a->imm * 32);
31
--
46
--
32
2.17.1
47
2.25.1
33
34
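The widening behaviour is easy to demonstrate in isolation. In the sketch below, ldl_p_stub() is a hypothetical stand-in for ldl_p() that likewise returns a plain 'int'; a little-endian host is assumed:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static int ldl_p_stub(const void *buf)
    {
        int v;
        memcpy(&v, buf, 4);   /* host-endian 32-bit load */
        return v;
    }

    int main(void)
    {
        uint8_t buf[4] = { 0xef, 0xbe, 0xad, 0xde };  /* 0xdeadbeef on LE */
        uint64_t data;

        data = ldl_p_stub(buf);            /* sign-extended through 'int' */
        printf("%016" PRIx64 "\n", data);  /* ffffffffdeadbeef */

        data = (uint32_t)ldl_p_stub(buf);  /* zero-extended, as the patch does */
        printf("%016" PRIx64 "\n", data);  /* 00000000deadbeef */
        return 0;
    }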
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
These functions will be used to verify that the CPU
4
is in the correct state for a given instruction.
2
5
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180613015641.5667-15-richard.henderson@linaro.org
8
Message-id: 20220708151540.18136-16-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/helper-sve.h | 2 +
11
target/arm/translate-a64.h | 21 +++++++++++++++++++++
9
target/arm/sve_helper.c | 14 ++++
12
target/arm/translate-a64.c | 34 ++++++++++++++++++++++++++++++++++
10
target/arm/translate-sve.c | 133 +++++++++++++++++++++++++++++++++++++
13
2 files changed, 55 insertions(+)
11
target/arm/sve.decode | 27 ++++++++
12
4 files changed, 176 insertions(+)
13
14
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
--- a/target/arm/translate-a64.h
17
+++ b/target/arm/helper-sve.h
18
+++ b/target/arm/translate-a64.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_brkbs_m, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
19
@@ -XXX,XX +XXX,XX @@ void write_fp_dreg(DisasContext *s, int reg, TCGv_i64 v);
19
20
bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
20
DEF_HELPER_FLAGS_4(sve_brkn, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
unsigned int imms, unsigned int immr);
21
DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
22
bool sve_access_check(DisasContext *s);
23
+bool sme_enabled_check(DisasContext *s);
24
+bool sme_enabled_check_with_svcr(DisasContext *s, unsigned);
22
+
25
+
23
+DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
26
+/* This function corresponds to CheckStreamingSVEEnabled. */
24
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
27
+static inline bool sme_sm_enabled_check(DisasContext *s)
25
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/sve_helper.c
27
+++ b/target/arm/sve_helper.c
28
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_brkns)(void *vd, void *vn, void *vg, uint32_t pred_desc)
29
return do_zero(vd, oprsz);
30
}
31
}
32
+
33
+uint64_t HELPER(sve_cntp)(void *vn, void *vg, uint32_t pred_desc)
34
+{
28
+{
35
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
29
+ return sme_enabled_check_with_svcr(s, R_SVCR_SM_MASK);
36
+ intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
37
+ uint64_t *n = vn, *g = vg, sum = 0, mask = pred_esz_masks[esz];
38
+ intptr_t i;
39
+
40
+ for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
41
+ uint64_t t = n[i] & g[i] & mask;
42
+ sum += ctpop64(t);
43
+ }
44
+ return sum;
45
+}
46
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/translate-sve.c
49
+++ b/target/arm/translate-sve.c
50
@@ -XXX,XX +XXX,XX @@
51
#include "translate-a64.h"
52
53
54
+typedef void GVecGen2sFn(unsigned, uint32_t, uint32_t,
55
+ TCGv_i64, uint32_t, uint32_t);
56
+
57
typedef void gen_helper_gvec_flags_3(TCGv_i32, TCGv_ptr, TCGv_ptr,
58
TCGv_ptr, TCGv_i32);
59
typedef void gen_helper_gvec_flags_4(TCGv_i32, TCGv_ptr, TCGv_ptr,
60
@@ -XXX,XX +XXX,XX @@ static bool trans_BRKN(DisasContext *s, arg_rpr_s *a, uint32_t insn)
61
return do_brk2(s, a, gen_helper_sve_brkn, gen_helper_sve_brkns);
62
}
63
64
+/*
65
+ *** SVE Predicate Count Group
66
+ */
67
+
68
+static void do_cntp(DisasContext *s, TCGv_i64 val, int esz, int pn, int pg)
69
+{
70
+ unsigned psz = pred_full_reg_size(s);
71
+
72
+ if (psz <= 8) {
73
+ uint64_t psz_mask;
74
+
75
+ tcg_gen_ld_i64(val, cpu_env, pred_full_reg_offset(s, pn));
76
+ if (pn != pg) {
77
+ TCGv_i64 g = tcg_temp_new_i64();
78
+ tcg_gen_ld_i64(g, cpu_env, pred_full_reg_offset(s, pg));
79
+ tcg_gen_and_i64(val, val, g);
80
+ tcg_temp_free_i64(g);
81
+ }
82
+
83
+ /* Reduce the pred_esz_masks value simply to reduce the
84
+ * size of the code generated here.
85
+ */
86
+ psz_mask = MAKE_64BIT_MASK(0, psz * 8);
87
+ tcg_gen_andi_i64(val, val, pred_esz_masks[esz] & psz_mask);
88
+
89
+ tcg_gen_ctpop_i64(val, val);
90
+ } else {
91
+ TCGv_ptr t_pn = tcg_temp_new_ptr();
92
+ TCGv_ptr t_pg = tcg_temp_new_ptr();
93
+ unsigned desc;
94
+ TCGv_i32 t_desc;
95
+
96
+ desc = psz - 2;
97
+ desc = deposit32(desc, SIMD_DATA_SHIFT, 2, esz);
98
+
99
+ tcg_gen_addi_ptr(t_pn, cpu_env, pred_full_reg_offset(s, pn));
100
+ tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
101
+ t_desc = tcg_const_i32(desc);
102
+
103
+ gen_helper_sve_cntp(val, t_pn, t_pg, t_desc);
104
+ tcg_temp_free_ptr(t_pn);
105
+ tcg_temp_free_ptr(t_pg);
106
+ tcg_temp_free_i32(t_desc);
107
+ }
108
+}
30
+}
109
+
31
+
110
+static bool trans_CNTP(DisasContext *s, arg_CNTP *a, uint32_t insn)
32
+/* This function corresponds to CheckSMEAndZAEnabled. */
33
+static inline bool sme_za_enabled_check(DisasContext *s)
111
+{
34
+{
112
+ if (sve_access_check(s)) {
35
+ return sme_enabled_check_with_svcr(s, R_SVCR_ZA_MASK);
113
+ do_cntp(s, cpu_reg(s, a->rd), a->esz, a->rn, a->pg);
114
+ }
115
+ return true;
116
+}
36
+}
117
+
37
+
118
+static bool trans_INCDECP_r(DisasContext *s, arg_incdec_pred *a,
38
+/* Note that this function corresponds to CheckStreamingSVEAndZAEnabled. */
119
+ uint32_t insn)
39
+static inline bool sme_smza_enabled_check(DisasContext *s)
120
+{
40
+{
121
+ if (sve_access_check(s)) {
41
+ return sme_enabled_check_with_svcr(s, R_SVCR_SM_MASK | R_SVCR_ZA_MASK);
122
+ TCGv_i64 reg = cpu_reg(s, a->rd);
123
+ TCGv_i64 val = tcg_temp_new_i64();
124
+
125
+ do_cntp(s, val, a->esz, a->pg, a->pg);
126
+ if (a->d) {
127
+ tcg_gen_sub_i64(reg, reg, val);
128
+ } else {
129
+ tcg_gen_add_i64(reg, reg, val);
130
+ }
131
+ tcg_temp_free_i64(val);
132
+ }
133
+ return true;
134
+}
42
+}
135
+
43
+
136
+static bool trans_INCDECP_z(DisasContext *s, arg_incdec2_pred *a,
44
TCGv_i64 clean_data_tbi(DisasContext *s, TCGv_i64 addr);
137
+ uint32_t insn)
45
TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
46
bool tag_checked, int log2_size);
47
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/translate-a64.c
50
+++ b/target/arm/translate-a64.c
51
@@ -XXX,XX +XXX,XX @@ static bool sme_access_check(DisasContext *s)
52
return true;
53
}
54
55
+/* This function corresponds to CheckSMEEnabled. */
56
+bool sme_enabled_check(DisasContext *s)
138
+{
57
+{
139
+ if (a->esz == 0) {
58
+ /*
59
+ * Note that unlike sve_excp_el, we have not constrained sme_excp_el
60
+ * to be zero when fp_excp_el has priority. This is because we need
61
+ * sme_excp_el by itself for cpregs access checks.
62
+ */
63
+ if (!s->fp_excp_el || s->sme_excp_el < s->fp_excp_el) {
64
+ s->fp_access_checked = true;
65
+ return sme_access_check(s);
66
+ }
67
+ return fp_access_check_only(s);
68
+}
69
+
70
+/* Common subroutine for CheckSMEAnd*Enabled. */
71
+bool sme_enabled_check_with_svcr(DisasContext *s, unsigned req)
72
+{
73
+ if (!sme_enabled_check(s)) {
140
+ return false;
74
+ return false;
141
+ }
75
+ }
142
+ if (sve_access_check(s)) {
76
+ if (FIELD_EX64(req, SVCR, SM) && !s->pstate_sm) {
143
+ unsigned vsz = vec_full_reg_size(s);
77
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
144
+ TCGv_i64 val = tcg_temp_new_i64();
78
+ syn_smetrap(SME_ET_NotStreaming, false));
145
+ GVecGen2sFn *gvec_fn = a->d ? tcg_gen_gvec_subs : tcg_gen_gvec_adds;
146
+
147
+ do_cntp(s, val, a->esz, a->pg, a->pg);
148
+ gvec_fn(a->esz, vec_full_reg_offset(s, a->rd),
149
+ vec_full_reg_offset(s, a->rn), val, vsz, vsz);
150
+ }
151
+ return true;
152
+}
153
+
154
+static bool trans_SINCDECP_r_32(DisasContext *s, arg_incdec_pred *a,
155
+ uint32_t insn)
156
+{
157
+ if (sve_access_check(s)) {
158
+ TCGv_i64 reg = cpu_reg(s, a->rd);
159
+ TCGv_i64 val = tcg_temp_new_i64();
160
+
161
+ do_cntp(s, val, a->esz, a->pg, a->pg);
162
+ do_sat_addsub_32(reg, val, a->u, a->d);
163
+ }
164
+ return true;
165
+}
166
+
167
+static bool trans_SINCDECP_r_64(DisasContext *s, arg_incdec_pred *a,
168
+ uint32_t insn)
169
+{
170
+ if (sve_access_check(s)) {
171
+ TCGv_i64 reg = cpu_reg(s, a->rd);
172
+ TCGv_i64 val = tcg_temp_new_i64();
173
+
174
+ do_cntp(s, val, a->esz, a->pg, a->pg);
175
+ do_sat_addsub_64(reg, val, a->u, a->d);
176
+ }
177
+ return true;
178
+}
179
+
180
+static bool trans_SINCDECP_z(DisasContext *s, arg_incdec2_pred *a,
181
+ uint32_t insn)
182
+{
183
+ if (a->esz == 0) {
184
+ return false;
79
+ return false;
185
+ }
80
+ }
186
+ if (sve_access_check(s)) {
81
+ if (FIELD_EX64(req, SVCR, ZA) && !s->pstate_za) {
187
+ TCGv_i64 val = tcg_temp_new_i64();
82
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
188
+ do_cntp(s, val, a->esz, a->pg, a->pg);
83
+ syn_smetrap(SME_ET_InactiveZA, false));
189
+ do_sat_addsub_vec(s, a->esz, a->rd, a->rn, val, a->u, a->d);
84
+ return false;
190
+ }
85
+ }
191
+ return true;
86
+ return true;
192
+}
87
+}
193
+
88
+
194
/*
89
/*
195
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
90
* This utility function is for doing register extension with an
196
*/
91
* optional shift. You will likely want to pass a temporary for the
197
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
198
index XXXXXXX..XXXXXXX 100644
199
--- a/target/arm/sve.decode
200
+++ b/target/arm/sve.decode
201
@@ -XXX,XX +XXX,XX @@
202
&ptrue rd esz pat s
203
&incdec_cnt rd pat esz imm d u
204
&incdec2_cnt rd rn pat esz imm d u
205
+&incdec_pred rd pg esz d u
206
+&incdec2_pred rd rn pg esz d u
207
208
###########################################################################
209
# Named instruction formats. These are generally used to
210
@@ -XXX,XX +XXX,XX @@
211
212
# One register operand, with governing predicate, vector element size
213
@rd_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 &rpr_esz
214
+@rd_pg4_pn ........ esz:2 ... ... .. pg:4 . rn:4 rd:5 &rpr_esz
215
216
# Two register operands with a 6-bit signed immediate.
217
@rd_rn_i6 ........ ... rn:5 ..... imm:s6 rd:5 &rri
218
@@ -XXX,XX +XXX,XX @@
219
@incdec2_cnt ........ esz:2 .. .... ...... pat:5 rd:5 \
220
&incdec2_cnt imm=%imm4_16_p1 rn=%reg_movprfx
221
222
+# One register, predicate.
223
+# User must fill in U and D.
224
+@incdec_pred ........ esz:2 .... .. ..... .. pg:4 rd:5 &incdec_pred
225
+@incdec2_pred ........ esz:2 .... .. ..... .. pg:4 rd:5 \
226
+ &incdec2_pred rn=%reg_movprfx
227
+
228
###########################################################################
229
# Instruction patterns. Grouped according to the SVE encodingindex.xhtml.
230
231
@@ -XXX,XX +XXX,XX @@ BRKB_m 00100101 1. 01000001 .... 0 .... 1 .... @pd_pg_pn_s
232
# SVE propagate break to next partition
233
BRKN 00100101 0. 01100001 .... 0 .... 0 .... @pd_pg_pn_s
234
235
+### SVE Predicate Count Group
236
+
237
+# SVE predicate count
238
+CNTP 00100101 .. 100 000 10 .... 0 .... ..... @rd_pg4_pn
239
+
240
+# SVE inc/dec register by predicate count
241
+INCDECP_r 00100101 .. 10110 d:1 10001 00 .... ..... @incdec_pred u=1
242
+
243
+# SVE inc/dec vector by predicate count
244
+INCDECP_z 00100101 .. 10110 d:1 10000 00 .... ..... @incdec2_pred u=1
245
+
246
+# SVE saturating inc/dec register by predicate count
247
+SINCDECP_r_32 00100101 .. 1010 d:1 u:1 10001 00 .... ..... @incdec_pred
248
+SINCDECP_r_64 00100101 .. 1010 d:1 u:1 10001 10 .... ..... @incdec_pred
249
+
250
+# SVE saturating inc/dec vector by predicate count
251
+SINCDECP_z 00100101 .. 1010 d:1 u:1 10000 00 .... ..... @incdec2_pred
252
+
253
### SVE Memory - 32-bit Gather and Unsized Contiguous Group
254
255
# SVE load predicate register
256
--
92
--
257
2.17.1
93
2.25.1
258
259
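At its core the sve_cntp helper is a masked population count. A self-contained sketch of the same computation for one 64-bit predicate word: the table copies the pred_esz_masks layout used above (one predicate bit per byte of vector data, so element size esz keeps every (1 << esz)-th bit), and __builtin_popcountll is a GCC/Clang builtin:

    #include <stdint.h>
    #include <stdio.h>

    static const uint64_t pred_esz_masks[4] = {
        0xffffffffffffffffull, 0x5555555555555555ull,
        0x1111111111111111ull, 0x0101010101010101ull,
    };

    /* Count active elements in one predicate word: Pn AND Pg,
     * reduced to one bit per element, then popcount. */
    static unsigned cntp_word(uint64_t pn, uint64_t pg, unsigned esz)
    {
        return __builtin_popcountll(pn & pg & pred_esz_masks[esz]);
    }

    int main(void)
    {
        /* All bits set, 32-bit elements: 64 bytes / 4 = 16 elements. */
        printf("%u\n", cntp_word(~0ull, ~0ull, 2));   /* prints 16 */
        return 0;
    }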
1
Convert the parallel device away from using the old_mmio field
1
From: Richard Henderson <richard.henderson@linaro.org>
2
of MemoryRegionOps. This change only affects the memory-mapped
3
variant, which is used by the MIPS Jazz boards 'magnum' and 'pica61'.
4
2
3
The pseudocode for CheckSVEEnabled gains a check for Streaming
4
SVE mode, and for SME present but SVE absent.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20220708151540.18136-17-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Message-id: 20180601141223.26630-7-peter.maydell@linaro.org
8
---
10
---
9
hw/char/parallel.c | 50 ++++++++++------------------------------------
11
target/arm/translate-a64.c | 22 ++++++++++++++++------
10
1 file changed, 11 insertions(+), 39 deletions(-)
12
1 file changed, 16 insertions(+), 6 deletions(-)
11
13
12
diff --git a/hw/char/parallel.c b/hw/char/parallel.c
14
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
13
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
14
--- a/hw/char/parallel.c
16
--- a/target/arm/translate-a64.c
15
+++ b/hw/char/parallel.c
17
+++ b/target/arm/translate-a64.c
16
@@ -XXX,XX +XXX,XX @@ static void parallel_isa_realizefn(DeviceState *dev, Error **errp)
18
@@ -XXX,XX +XXX,XX @@ static bool fp_access_check(DisasContext *s)
19
return true;
17
}
20
}
18
21
19
/* Memory mapped interface */
22
-/* Check that SVE access is enabled. If it is, return true.
20
-static uint32_t parallel_mm_readb (void *opaque, hwaddr addr)
23
+/*
21
+static uint64_t parallel_mm_readfn(void *opaque, hwaddr addr, unsigned size)
24
+ * Check that SVE access is enabled. If it is, return true.
25
* If not, emit code to generate an appropriate exception and return false.
26
+ * This function corresponds to CheckSVEEnabled().
27
*/
28
bool sve_access_check(DisasContext *s)
22
{
29
{
23
ParallelState *s = opaque;
30
- if (s->sve_excp_el) {
24
31
- assert(!s->sve_access_checked);
25
- return parallel_ioport_read_sw(s, addr >> s->it_shift) & 0xFF;
32
- s->sve_access_checked = true;
26
+ return parallel_ioport_read_sw(s, addr >> s->it_shift) &
33
-
27
+ MAKE_64BIT_MASK(0, size * 8);
34
+ if (s->pstate_sm || !dc_isar_feature(aa64_sve, s)) {
35
+ assert(dc_isar_feature(aa64_sme, s));
36
+ if (!sme_sm_enabled_check(s)) {
37
+ goto fail_exit;
38
+ }
39
+ } else if (s->sve_excp_el) {
40
gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
41
syn_sve_access_trap(), s->sve_excp_el);
42
- return false;
43
+ goto fail_exit;
44
}
45
s->sve_access_checked = true;
46
return fp_access_check(s);
47
+
48
+ fail_exit:
49
+ /* Assert that we only raise one exception per instruction. */
50
+ assert(!s->sve_access_checked);
51
+ s->sve_access_checked = true;
52
+ return false;
28
}
53
}
29
54
30
-static void parallel_mm_writeb (void *opaque,
55
/*
31
- hwaddr addr, uint32_t value)
32
+static void parallel_mm_writefn(void *opaque, hwaddr addr,
33
+ uint64_t value, unsigned size)
34
{
35
ParallelState *s = opaque;
36
37
- parallel_ioport_write_sw(s, addr >> s->it_shift, value & 0xFF);
38
-}
39
-
40
-static uint32_t parallel_mm_readw (void *opaque, hwaddr addr)
41
-{
42
- ParallelState *s = opaque;
43
-
44
- return parallel_ioport_read_sw(s, addr >> s->it_shift) & 0xFFFF;
45
-}
46
-
47
-static void parallel_mm_writew (void *opaque,
48
- hwaddr addr, uint32_t value)
49
-{
50
- ParallelState *s = opaque;
51
-
52
- parallel_ioport_write_sw(s, addr >> s->it_shift, value & 0xFFFF);
53
-}
54
-
55
-static uint32_t parallel_mm_readl (void *opaque, hwaddr addr)
56
-{
57
- ParallelState *s = opaque;
58
-
59
- return parallel_ioport_read_sw(s, addr >> s->it_shift);
60
-}
61
-
62
-static void parallel_mm_writel (void *opaque,
63
- hwaddr addr, uint32_t value)
64
-{
65
- ParallelState *s = opaque;
66
-
67
- parallel_ioport_write_sw(s, addr >> s->it_shift, value);
68
+ parallel_ioport_write_sw(s, addr >> s->it_shift,
69
+ value & MAKE_64BIT_MASK(0, size * 8));
70
}
71
72
static const MemoryRegionOps parallel_mm_ops = {
73
- .old_mmio = {
74
- .read = { parallel_mm_readb, parallel_mm_readw, parallel_mm_readl },
75
- .write = { parallel_mm_writeb, parallel_mm_writew, parallel_mm_writel },
76
- },
77
+ .read = parallel_mm_readfn,
78
+ .write = parallel_mm_writefn,
79
+ .valid.min_access_size = 1,
80
+ .valid.max_access_size = 4,
81
.endianness = DEVICE_NATIVE_ENDIAN,
82
};
83
84
--
56
--
85
2.17.1
57
2.25.1
86
87
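The old_mmio conversion works because new-style accessors receive the access size, so one read function and one write function replace the three fixed-width pairs. The only arithmetic involved is the width mask; a stand-alone sketch (the macro is a simplified copy of QEMU's MAKE_64BIT_MASK, valid here because every length used is below 64):

    #include <stdio.h>

    #define MAKE_64BIT_MASK(shift, length) \
        ((~0ULL >> (64 - (length))) << (shift))

    int main(void)
    {
        /* The 1-, 2- and 4-byte cases that previously needed
         * separate readb/readw/readl callbacks. */
        for (unsigned size = 1; size <= 4; size *= 2) {
            printf("size %u -> mask %#018llx\n", size,
                   (unsigned long long)MAKE_64BIT_MASK(0, size * 8));
        }
        return 0;
    }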
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
These SME instructions are nominally within the SVE decode space,
4
so we add them to sve.decode and translate-sve.c.
2
5
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180613015641.5667-3-richard.henderson@linaro.org
8
Message-id: 20220708151540.18136-18-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/helper-sve.h | 23 +++++++
11
target/arm/translate-a64.h | 12 ++++++++++++
9
target/arm/sve_helper.c | 114 +++++++++++++++++++++++++++++++
12
target/arm/sve.decode | 5 ++++-
10
target/arm/translate-sve.c | 133 +++++++++++++++++++++++++++++++++++++
13
target/arm/translate-sve.c | 38 ++++++++++++++++++++++++++++++++++++++
11
target/arm/sve.decode | 27 ++++++++
14
3 files changed, 54 insertions(+), 1 deletion(-)
12
4 files changed, 297 insertions(+)
13
15
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
16
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
18
--- a/target/arm/translate-a64.h
17
+++ b/target/arm/helper-sve.h
19
+++ b/target/arm/translate-a64.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_cpy_z_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
20
@@ -XXX,XX +XXX,XX @@ static inline int vec_full_reg_size(DisasContext *s)
19
21
return s->vl;
20
DEF_HELPER_FLAGS_4(sve_ext, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
22
+DEF_HELPER_FLAGS_4(sve_insr_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
23
+DEF_HELPER_FLAGS_4(sve_insr_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
24
+DEF_HELPER_FLAGS_4(sve_insr_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
25
+DEF_HELPER_FLAGS_4(sve_insr_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
26
+
27
+DEF_HELPER_FLAGS_3(sve_rev_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_3(sve_rev_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_3(sve_rev_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_3(sve_rev_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
31
+
32
+DEF_HELPER_FLAGS_4(sve_tbl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_4(sve_tbl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_4(sve_tbl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_4(sve_tbl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
+
37
+DEF_HELPER_FLAGS_3(sve_sunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_3(sve_sunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_3(sve_sunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
40
+
41
+DEF_HELPER_FLAGS_3(sve_uunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
42
+DEF_HELPER_FLAGS_3(sve_uunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
43
+DEF_HELPER_FLAGS_3(sve_uunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
44
+
45
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
46
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
47
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
48
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/sve_helper.c
51
+++ b/target/arm/sve_helper.c
52
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_ext)(void *vd, void *vn, void *vm, uint32_t desc)
53
memcpy(vd + n_siz, &tmp, n_ofs);
54
}
55
}
22
}
56
+
23
57
+#define DO_INSR(NAME, TYPE, H) \
24
+/* Return the byte size of the vector register, SVL / 8. */
58
+void HELPER(NAME)(void *vd, void *vn, uint64_t val, uint32_t desc) \
25
+static inline int streaming_vec_reg_size(DisasContext *s)
59
+{ \
26
+{
60
+ intptr_t opr_sz = simd_oprsz(desc); \
27
+ return s->svl;
61
+ swap_memmove(vd + sizeof(TYPE), vn, opr_sz - sizeof(TYPE)); \
62
+ *(TYPE *)(vd + H(0)) = val; \
63
+}
28
+}
64
+
29
+
65
+DO_INSR(sve_insr_b, uint8_t, H1)
30
/*
66
+DO_INSR(sve_insr_h, uint16_t, H1_2)
31
* Return the offset into CPUARMState of the predicate vector register Pn.
67
+DO_INSR(sve_insr_s, uint32_t, H1_4)
32
* Note for this purpose, FFR is P16.
68
+DO_INSR(sve_insr_d, uint64_t, )
33
@@ -XXX,XX +XXX,XX @@ static inline int pred_full_reg_size(DisasContext *s)
69
+
34
return s->vl >> 3;
70
+#undef DO_INSR
35
}
71
+
36
72
+void HELPER(sve_rev_b)(void *vd, void *vn, uint32_t desc)
37
+/* Return the byte size of the predicate register, SVL / 64. */
38
+static inline int streaming_pred_reg_size(DisasContext *s)
73
+{
39
+{
74
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
40
+ return s->svl >> 3;
75
+ for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
76
+ uint64_t f = *(uint64_t *)(vn + i);
77
+ uint64_t b = *(uint64_t *)(vn + j);
78
+ *(uint64_t *)(vd + i) = bswap64(b);
79
+ *(uint64_t *)(vd + j) = bswap64(f);
80
+ }
81
+}
41
+}
82
+
42
+
83
+static inline uint64_t hswap64(uint64_t h)
43
/*
84
+{
44
* Round up the size of a register to a size allowed by
85
+ uint64_t m = 0x0000ffff0000ffffull;
45
* the tcg vector infrastructure. Any operation which uses this
86
+ h = rol64(h, 32);
46
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
87
+ return ((h & m) << 16) | ((h >> 16) & m);
47
index XXXXXXX..XXXXXXX 100644
88
+}
48
--- a/target/arm/sve.decode
89
+
49
+++ b/target/arm/sve.decode
90
+void HELPER(sve_rev_h)(void *vd, void *vn, uint32_t desc)
50
@@ -XXX,XX +XXX,XX @@ INDEX_ri 00000100 esz:2 1 imm:s5 010001 rn:5 rd:5
91
+{
51
# SVE index generation (register start, register increment)
92
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
52
INDEX_rr 00000100 .. 1 ..... 010011 ..... ..... @rd_rn_rm
93
+ for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
53
94
+ uint64_t f = *(uint64_t *)(vn + i);
54
-### SVE Stack Allocation Group
95
+ uint64_t b = *(uint64_t *)(vn + j);
55
+### SVE / Streaming SVE Stack Allocation Group
96
+ *(uint64_t *)(vd + i) = hswap64(b);
56
97
+ *(uint64_t *)(vd + j) = hswap64(f);
57
# SVE stack frame adjustment
98
+ }
58
ADDVL 00000100 001 ..... 01010 ...... ..... @rd_rn_i6
99
+}
59
+ADDSVL 00000100 001 ..... 01011 ...... ..... @rd_rn_i6
100
+
60
ADDPL 00000100 011 ..... 01010 ...... ..... @rd_rn_i6
101
+void HELPER(sve_rev_s)(void *vd, void *vn, uint32_t desc)
61
+ADDSPL 00000100 011 ..... 01011 ...... ..... @rd_rn_i6
102
+{
62
103
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
63
# SVE stack frame size
104
+ for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
64
RDVL 00000100 101 11111 01010 imm:s6 rd:5
105
+ uint64_t f = *(uint64_t *)(vn + i);
65
+RDSVL 00000100 101 11111 01011 imm:s6 rd:5
106
+ uint64_t b = *(uint64_t *)(vn + j);
66
107
+ *(uint64_t *)(vd + i) = rol64(b, 32);
67
### SVE Bitwise Shift - Unpredicated Group
108
+ *(uint64_t *)(vd + j) = rol64(f, 32);
68
109
+ }
110
+}
111
+
112
+void HELPER(sve_rev_d)(void *vd, void *vn, uint32_t desc)
113
+{
114
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
115
+ for (i = 0, j = opr_sz - 8; i < opr_sz / 2; i += 8, j -= 8) {
116
+ uint64_t f = *(uint64_t *)(vn + i);
117
+ uint64_t b = *(uint64_t *)(vn + j);
118
+ *(uint64_t *)(vd + i) = b;
119
+ *(uint64_t *)(vd + j) = f;
120
+ }
121
+}
122
+
123
+#define DO_TBL(NAME, TYPE, H) \
124
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
125
+{ \
126
+ intptr_t i, opr_sz = simd_oprsz(desc); \
127
+ uintptr_t elem = opr_sz / sizeof(TYPE); \
128
+ TYPE *d = vd, *n = vn, *m = vm; \
129
+ ARMVectorReg tmp; \
130
+ if (unlikely(vd == vn)) { \
131
+ n = memcpy(&tmp, vn, opr_sz); \
132
+ } \
133
+ for (i = 0; i < elem; i++) { \
134
+ TYPE j = m[H(i)]; \
135
+ d[H(i)] = j < elem ? n[H(j)] : 0; \
136
+ } \
137
+}
138
+
139
+DO_TBL(sve_tbl_b, uint8_t, H1)
140
+DO_TBL(sve_tbl_h, uint16_t, H2)
141
+DO_TBL(sve_tbl_s, uint32_t, H4)
142
+DO_TBL(sve_tbl_d, uint64_t, )
143
+
144
+#undef DO_TBL
145
+
146
+#define DO_UNPK(NAME, TYPED, TYPES, HD, HS) \
147
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
148
+{ \
149
+ intptr_t i, opr_sz = simd_oprsz(desc); \
150
+ TYPED *d = vd; \
151
+ TYPES *n = vn; \
152
+ ARMVectorReg tmp; \
153
+ if (unlikely(vn - vd < opr_sz)) { \
154
+ n = memcpy(&tmp, n, opr_sz / 2); \
155
+ } \
156
+ for (i = 0; i < opr_sz / sizeof(TYPED); i++) { \
157
+ d[HD(i)] = n[HS(i)]; \
158
+ } \
159
+}
160
+
161
+DO_UNPK(sve_sunpk_h, int16_t, int8_t, H2, H1)
162
+DO_UNPK(sve_sunpk_s, int32_t, int16_t, H4, H2)
163
+DO_UNPK(sve_sunpk_d, int64_t, int32_t, , H4)
164
+
165
+DO_UNPK(sve_uunpk_h, uint16_t, uint8_t, H2, H1)
166
+DO_UNPK(sve_uunpk_s, uint32_t, uint16_t, H4, H2)
167
+DO_UNPK(sve_uunpk_d, uint64_t, uint32_t, , H4)
168
+
169
+#undef DO_UNPK
170
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
69
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
171
index XXXXXXX..XXXXXXX 100644
70
index XXXXXXX..XXXXXXX 100644
172
--- a/target/arm/translate-sve.c
71
--- a/target/arm/translate-sve.c
173
+++ b/target/arm/translate-sve.c
72
+++ b/target/arm/translate-sve.c
174
@@ -XXX,XX +XXX,XX @@ static bool trans_EXT(DisasContext *s, arg_EXT *a, uint32_t insn)
73
@@ -XXX,XX +XXX,XX @@ static bool trans_ADDVL(DisasContext *s, arg_ADDVL *a)
175
return true;
74
return true;
176
}
75
}
177
76
178
+/*
77
+static bool trans_ADDSVL(DisasContext *s, arg_ADDSVL *a)
179
+ *** SVE Permute - Unpredicated Group
180
+ */
181
+
182
+static bool trans_DUP_s(DisasContext *s, arg_DUP_s *a, uint32_t insn)
183
+{
78
+{
184
+ if (sve_access_check(s)) {
79
+ if (!dc_isar_feature(aa64_sme, s)) {
185
+ unsigned vsz = vec_full_reg_size(s);
80
+ return false;
186
+ tcg_gen_gvec_dup_i64(a->esz, vec_full_reg_offset(s, a->rd),
81
+ }
187
+ vsz, vsz, cpu_reg_sp(s, a->rn));
82
+ if (sme_enabled_check(s)) {
83
+ TCGv_i64 rd = cpu_reg_sp(s, a->rd);
84
+ TCGv_i64 rn = cpu_reg_sp(s, a->rn);
85
+ tcg_gen_addi_i64(rd, rn, a->imm * streaming_vec_reg_size(s));
188
+ }
86
+ }
189
+ return true;
87
+ return true;
190
+}
88
+}
191
+
89
+
192
+static bool trans_DUP_x(DisasContext *s, arg_DUP_x *a, uint32_t insn)
90
static bool trans_ADDPL(DisasContext *s, arg_ADDPL *a)
91
{
92
if (!dc_isar_feature(aa64_sve, s)) {
93
@@ -XXX,XX +XXX,XX @@ static bool trans_ADDPL(DisasContext *s, arg_ADDPL *a)
94
return true;
95
}
96
97
+static bool trans_ADDSPL(DisasContext *s, arg_ADDSPL *a)
193
+{
98
+{
194
+ if ((a->imm & 0x1f) == 0) {
99
+ if (!dc_isar_feature(aa64_sme, s)) {
195
+ return false;
100
+ return false;
196
+ }
101
+ }
197
+ if (sve_access_check(s)) {
102
+ if (sme_enabled_check(s)) {
198
+ unsigned vsz = vec_full_reg_size(s);
103
+ TCGv_i64 rd = cpu_reg_sp(s, a->rd);
199
+ unsigned dofs = vec_full_reg_offset(s, a->rd);
104
+ TCGv_i64 rn = cpu_reg_sp(s, a->rn);
200
+ unsigned esz, index;
105
+ tcg_gen_addi_i64(rd, rn, a->imm * streaming_pred_reg_size(s));
201
+
202
+ esz = ctz32(a->imm);
203
+ index = a->imm >> (esz + 1);
204
+
205
+ if ((index << esz) < vsz) {
206
+ unsigned nofs = vec_reg_offset(s, a->rn, index, esz);
207
+ tcg_gen_gvec_dup_mem(esz, dofs, nofs, vsz, vsz);
208
+ } else {
209
+ tcg_gen_gvec_dup64i(dofs, vsz, vsz, 0);
210
+ }
211
+ }
106
+ }
212
+ return true;
107
+ return true;
213
+}
108
+}
214
+
109
+
215
+static void do_insr_i64(DisasContext *s, arg_rrr_esz *a, TCGv_i64 val)
110
static bool trans_RDVL(DisasContext *s, arg_RDVL *a)
111
{
112
if (!dc_isar_feature(aa64_sve, s)) {
113
@@ -XXX,XX +XXX,XX @@ static bool trans_RDVL(DisasContext *s, arg_RDVL *a)
114
return true;
115
}
116
117
+static bool trans_RDSVL(DisasContext *s, arg_RDSVL *a)
216
+{
118
+{
217
+ typedef void gen_insr(TCGv_ptr, TCGv_ptr, TCGv_i64, TCGv_i32);
119
+ if (!dc_isar_feature(aa64_sme, s)) {
218
+ static gen_insr * const fns[4] = {
219
+ gen_helper_sve_insr_b, gen_helper_sve_insr_h,
220
+ gen_helper_sve_insr_s, gen_helper_sve_insr_d,
221
+ };
222
+ unsigned vsz = vec_full_reg_size(s);
223
+ TCGv_i32 desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
224
+ TCGv_ptr t_zd = tcg_temp_new_ptr();
225
+ TCGv_ptr t_zn = tcg_temp_new_ptr();
226
+
227
+ tcg_gen_addi_ptr(t_zd, cpu_env, vec_full_reg_offset(s, a->rd));
228
+ tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, a->rn));
229
+
230
+ fns[a->esz](t_zd, t_zn, val, desc);
231
+
232
+ tcg_temp_free_ptr(t_zd);
233
+ tcg_temp_free_ptr(t_zn);
234
+ tcg_temp_free_i32(desc);
235
+}
236
+
237
+static bool trans_INSR_f(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
238
+{
239
+ if (sve_access_check(s)) {
240
+ TCGv_i64 t = tcg_temp_new_i64();
241
+ tcg_gen_ld_i64(t, cpu_env, vec_reg_offset(s, a->rm, 0, MO_64));
242
+ do_insr_i64(s, a, t);
243
+ tcg_temp_free_i64(t);
244
+ }
245
+ return true;
246
+}
247
+
248
+static bool trans_INSR_r(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
249
+{
250
+ if (sve_access_check(s)) {
251
+ do_insr_i64(s, a, cpu_reg(s, a->rm));
252
+ }
253
+ return true;
254
+}
255
+
256
+static bool trans_REV_v(DisasContext *s, arg_rr_esz *a, uint32_t insn)
257
+{
258
+ static gen_helper_gvec_2 * const fns[4] = {
259
+ gen_helper_sve_rev_b, gen_helper_sve_rev_h,
260
+ gen_helper_sve_rev_s, gen_helper_sve_rev_d
261
+ };
262
+
263
+ if (sve_access_check(s)) {
264
+ unsigned vsz = vec_full_reg_size(s);
265
+ tcg_gen_gvec_2_ool(vec_full_reg_offset(s, a->rd),
266
+ vec_full_reg_offset(s, a->rn),
267
+ vsz, vsz, 0, fns[a->esz]);
268
+ }
269
+ return true;
270
+}
271
+
272
+static bool trans_TBL(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
273
+{
274
+ static gen_helper_gvec_3 * const fns[4] = {
275
+ gen_helper_sve_tbl_b, gen_helper_sve_tbl_h,
276
+ gen_helper_sve_tbl_s, gen_helper_sve_tbl_d
277
+ };
278
+
279
+ if (sve_access_check(s)) {
280
+ unsigned vsz = vec_full_reg_size(s);
281
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
282
+ vec_full_reg_offset(s, a->rn),
283
+ vec_full_reg_offset(s, a->rm),
284
+ vsz, vsz, 0, fns[a->esz]);
285
+ }
286
+ return true;
287
+}
288
+
289
+static bool trans_UNPK(DisasContext *s, arg_UNPK *a, uint32_t insn)
290
+{
291
+ static gen_helper_gvec_2 * const fns[4][2] = {
292
+ { NULL, NULL },
293
+ { gen_helper_sve_sunpk_h, gen_helper_sve_uunpk_h },
294
+ { gen_helper_sve_sunpk_s, gen_helper_sve_uunpk_s },
295
+ { gen_helper_sve_sunpk_d, gen_helper_sve_uunpk_d },
296
+ };
297
+
298
+ if (a->esz == 0) {
299
+ return false;
120
+ return false;
300
+ }
121
+ }
301
+ if (sve_access_check(s)) {
122
+ if (sme_enabled_check(s)) {
302
+ unsigned vsz = vec_full_reg_size(s);
123
+ TCGv_i64 reg = cpu_reg(s, a->rd);
303
+ tcg_gen_gvec_2_ool(vec_full_reg_offset(s, a->rd),
124
+ tcg_gen_movi_i64(reg, a->imm * streaming_vec_reg_size(s));
304
+ vec_full_reg_offset(s, a->rn)
305
+ + (a->h ? vsz / 2 : 0),
306
+ vsz, vsz, 0, fns[a->esz][a->u]);
307
+ }
125
+ }
308
+ return true;
126
+ return true;
309
+}
127
+}
310
+
128
+
311
/*
129
/*
312
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
130
*** SVE Compute Vector Address Group
313
*/
131
*/
314
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
315
index XXXXXXX..XXXXXXX 100644
316
--- a/target/arm/sve.decode
317
+++ b/target/arm/sve.decode
318
@@ -XXX,XX +XXX,XX @@
319
320
%imm4_16_p1 16:4 !function=plus1
321
%imm6_22_5 22:1 5:5
322
+%imm7_22_16 22:2 16:5
323
%imm8_16_10 16:5 10:3
324
%imm9_16_10 16:s6 10:3
325
326
@@ -XXX,XX +XXX,XX @@
327
328
# Three operand, vector element size
329
@rd_rn_rm ........ esz:2 . rm:5 ... ... rn:5 rd:5 &rrr_esz
330
+@rdn_rm ........ esz:2 ...... ...... rm:5 rd:5 \
331
+ &rrr_esz rn=%reg_movprfx
332
333
# Three operand with "memory" size, aka immediate left shift
334
@rd_rn_msz_rm ........ ... rm:5 .... imm:2 rn:5 rd:5 &rrri
335
@@ -XXX,XX +XXX,XX @@ CPY_z_i 00000101 .. 01 .... 00 . ........ ..... @rdn_pg4 imm=%sh8_i8s
336
EXT 00000101 001 ..... 000 ... rm:5 rd:5 \
337
&rrri rn=%reg_movprfx imm=%imm8_16_10
338
339
+### SVE Permute - Unpredicated Group
340
+
341
+# SVE broadcast general register
342
+DUP_s 00000101 .. 1 00000 001110 ..... ..... @rd_rn
343
+
344
+# SVE broadcast indexed element
345
+DUP_x 00000101 .. 1 ..... 001000 rn:5 rd:5 \
346
+ &rri imm=%imm7_22_16
347
+
348
+# SVE insert SIMD&FP scalar register
349
+INSR_f 00000101 .. 1 10100 001110 ..... ..... @rdn_rm
350
+
351
+# SVE insert general register
352
+INSR_r 00000101 .. 1 00100 001110 ..... ..... @rdn_rm
353
+
354
+# SVE reverse vector elements
355
+REV_v 00000101 .. 1 11000 001110 ..... ..... @rd_rn
356
+
357
+# SVE vector table lookup
358
+TBL 00000101 .. 1 ..... 001100 ..... ..... @rd_rn_rm
359
+
360
+# SVE unpack vector elements
361
+UNPK 00000101 esz:2 1100 u:1 h:1 001110 rn:5 rd:5
362
+
363
### SVE Predicate Logical Operations Group
364
365
# SVE predicate logical operations
366
--
132
--
367
2.17.1
133
2.25.1
368
369
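The sve_rev_h helper above relies on hswap64(), whose rotate-plus-mask trick is compact enough to verify numerically. A self-contained sketch, with rol64 written out by hand rather than taken from QEMU's headers:

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t rol64(uint64_t x, unsigned n)
    {
        return (x << n) | (x >> (64 - n));
    }

    /* Reverse the four 16-bit halfwords of a 64-bit value. */
    static uint64_t hswap64(uint64_t h)
    {
        uint64_t m = 0x0000ffff0000ffffull;
        h = rol64(h, 32);
        return ((h & m) << 16) | ((h >> 16) & m);
    }

    int main(void)
    {
        printf("%016llx\n",
               (unsigned long long)hswap64(0x0123456789abcdefull));
        /* prints cdef89ab45670123 */
        return 0;
    }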
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180613015641.5667-5-richard.henderson@linaro.org
5
Message-id: 20220708151540.18136-19-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
7
---
8
target/arm/helper-sve.h | 15 ++++++++
8
target/arm/helper-sme.h | 2 ++
9
target/arm/sve_helper.c | 72 ++++++++++++++++++++++++++++++++++++
9
target/arm/sme.decode | 4 ++++
10
target/arm/translate-sve.c | 75 ++++++++++++++++++++++++++++++++++++++
10
target/arm/sme_helper.c | 25 +++++++++++++++++++++++++
11
target/arm/sve.decode | 10 +++++
11
target/arm/translate-sme.c | 13 +++++++++++++
12
4 files changed, 172 insertions(+)
12
4 files changed, 44 insertions(+)
13
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
14
diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
16
--- a/target/arm/helper-sme.h
17
+++ b/target/arm/helper-sve.h
17
+++ b/target/arm/helper-sme.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_trn_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
18
@@ -XXX,XX +XXX,XX @@
19
DEF_HELPER_FLAGS_3(sve_rev_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
19
20
DEF_HELPER_FLAGS_3(sve_punpk_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
20
DEF_HELPER_FLAGS_2(set_pstate_sm, TCG_CALL_NO_RWG, void, env, i32)
21
21
DEF_HELPER_FLAGS_2(set_pstate_za, TCG_CALL_NO_RWG, void, env, i32)
22
+DEF_HELPER_FLAGS_4(sve_zip_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_4(sve_zip_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_4(sve_zip_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_4(sve_zip_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+
22
+
27
+DEF_HELPER_FLAGS_4(sve_uzp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_3(sme_zero, TCG_CALL_NO_RWG, void, env, i32, i32)
28
+DEF_HELPER_FLAGS_4(sve_uzp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
29
+DEF_HELPER_FLAGS_4(sve_uzp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
index XXXXXXX..XXXXXXX 100644
30
+DEF_HELPER_FLAGS_4(sve_uzp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
--- a/target/arm/sme.decode
27
+++ b/target/arm/sme.decode
28
@@ -XXX,XX +XXX,XX @@
29
#
30
# This file is processed by scripts/decodetree.py
31
#
31
+
32
+
32
+DEF_HELPER_FLAGS_4(sve_trn_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+### SME Misc
33
+DEF_HELPER_FLAGS_4(sve_trn_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_4(sve_trn_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_4(sve_trn_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
+
34
+
37
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
35
+ZERO 11000000 00 001 00000000000 imm:8
38
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
36
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
39
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
40
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
41
index XXXXXXX..XXXXXXX 100644
37
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/sve_helper.c
38
--- a/target/arm/sme_helper.c
43
+++ b/target/arm/sve_helper.c
39
+++ b/target/arm/sme_helper.c
44
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_punpk_p)(void *vd, void *vn, uint32_t pred_desc)
40
@@ -XXX,XX +XXX,XX @@ void helper_set_pstate_za(CPUARMState *env, uint32_t i)
45
}
41
memset(env->zarray, 0, sizeof(env->zarray));
46
}
42
}
47
}
43
}
48
+
44
+
49
+#define DO_ZIP(NAME, TYPE, H) \
45
+void helper_sme_zero(CPUARMState *env, uint32_t imm, uint32_t svl)
50
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
46
+{
51
+{ \
47
+ uint32_t i;
52
+ intptr_t oprsz = simd_oprsz(desc); \
48
+
53
+ intptr_t i, oprsz_2 = oprsz / 2; \
49
+ /*
54
+ ARMVectorReg tmp_n, tmp_m; \
50
+ * Special case clearing the entire ZA space.
55
+ /* We produce output faster than we consume input. \
51
+ * This falls into the CONSTRAINED UNPREDICTABLE zeroing of any
56
+ Therefore we must be mindful of possible overlap. */ \
52
+ * parts of the ZA storage outside of SVL.
57
+ if (unlikely((vn - vd) < (uintptr_t)oprsz)) { \
53
+ */
58
+ vn = memcpy(&tmp_n, vn, oprsz_2); \
54
+ if (imm == 0xff) {
59
+ } \
55
+ memset(env->zarray, 0, sizeof(env->zarray));
60
+ if (unlikely((vm - vd) < (uintptr_t)oprsz)) { \
56
+ return;
61
+ vm = memcpy(&tmp_m, vm, oprsz_2); \
57
+ }
62
+ } \
58
+
63
+ for (i = 0; i < oprsz_2; i += sizeof(TYPE)) { \
59
+ /*
64
+ *(TYPE *)(vd + H(2 * i + 0)) = *(TYPE *)(vn + H(i)); \
60
+ * Recall that ZAnH.D[m] is spread across ZA[n+8*m],
65
+ *(TYPE *)(vd + H(2 * i + sizeof(TYPE))) = *(TYPE *)(vm + H(i)); \
61
+ * so each row is discontiguous within ZA[].
66
+ } \
62
+ */
63
+ for (i = 0; i < svl; i++) {
64
+ if (imm & (1 << (i % 8))) {
65
+ memset(&env->zarray[i], 0, svl);
66
+ }
67
+ }
67
+}
68
+}
69
diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/translate-sme.c
72
+++ b/target/arm/translate-sme.c
73
@@ -XXX,XX +XXX,XX @@
74
*/
75
76
#include "decode-sme.c.inc"
68
+
77
+
69
+DO_ZIP(sve_zip_b, uint8_t, H1)
70
+DO_ZIP(sve_zip_h, uint16_t, H1_2)
71
+DO_ZIP(sve_zip_s, uint32_t, H1_4)
72
+DO_ZIP(sve_zip_d, uint64_t, )
73
+
78
+
74
+#define DO_UZP(NAME, TYPE, H) \
79
+static bool trans_ZERO(DisasContext *s, arg_ZERO *a)
75
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
76
+{ \
77
+ intptr_t oprsz = simd_oprsz(desc); \
78
+ intptr_t oprsz_2 = oprsz / 2; \
79
+ intptr_t odd_ofs = simd_data(desc); \
80
+ intptr_t i; \
81
+ ARMVectorReg tmp_m; \
82
+ if (unlikely((vm - vd) < (uintptr_t)oprsz)) { \
83
+ vm = memcpy(&tmp_m, vm, oprsz); \
84
+ } \
85
+ for (i = 0; i < oprsz_2; i += sizeof(TYPE)) { \
86
+ *(TYPE *)(vd + H(i)) = *(TYPE *)(vn + H(2 * i + odd_ofs)); \
87
+ } \
88
+ for (i = 0; i < oprsz_2; i += sizeof(TYPE)) { \
89
+ *(TYPE *)(vd + H(oprsz_2 + i)) = *(TYPE *)(vm + H(2 * i + odd_ofs)); \
90
+ } \
91
+}
92
+
93
+DO_UZP(sve_uzp_b, uint8_t, H1)
94
+DO_UZP(sve_uzp_h, uint16_t, H1_2)
95
+DO_UZP(sve_uzp_s, uint32_t, H1_4)
96
+DO_UZP(sve_uzp_d, uint64_t, )
97
+
98
+#define DO_TRN(NAME, TYPE, H) \
99
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
100
+{ \
101
+ intptr_t oprsz = simd_oprsz(desc); \
102
+ intptr_t odd_ofs = simd_data(desc); \
103
+ intptr_t i; \
104
+ for (i = 0; i < oprsz; i += 2 * sizeof(TYPE)) { \
105
+ TYPE ae = *(TYPE *)(vn + H(i + odd_ofs)); \
106
+ TYPE be = *(TYPE *)(vm + H(i + odd_ofs)); \
107
+ *(TYPE *)(vd + H(i + 0)) = ae; \
108
+ *(TYPE *)(vd + H(i + sizeof(TYPE))) = be; \
109
+ } \
110
+}
111
+
112
+DO_TRN(sve_trn_b, uint8_t, H1)
113
+DO_TRN(sve_trn_h, uint16_t, H1_2)
114
+DO_TRN(sve_trn_s, uint32_t, H1_4)
115
+DO_TRN(sve_trn_d, uint64_t, )
116
+
117
+#undef DO_ZIP
118
+#undef DO_UZP
119
+#undef DO_TRN
120
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
121
index XXXXXXX..XXXXXXX 100644
122
--- a/target/arm/translate-sve.c
123
+++ b/target/arm/translate-sve.c
124
@@ -XXX,XX +XXX,XX @@ static bool trans_PUNPKHI(DisasContext *s, arg_PUNPKHI *a, uint32_t insn)
125
return do_perm_pred2(s, a, 1, gen_helper_sve_punpk_p);
126
}
127
128
+/*
129
+ *** SVE Permute - Interleaving Group
130
+ */
131
+
132
+static bool do_zip(DisasContext *s, arg_rrr_esz *a, bool high)
133
+{
80
+{
134
+ static gen_helper_gvec_3 * const fns[4] = {
81
+ if (!dc_isar_feature(aa64_sme, s)) {
135
+ gen_helper_sve_zip_b, gen_helper_sve_zip_h,
82
+ return false;
136
+ gen_helper_sve_zip_s, gen_helper_sve_zip_d,
83
+ }
137
+ };
84
+ if (sme_za_enabled_check(s)) {
138
+
85
+ gen_helper_sme_zero(cpu_env, tcg_constant_i32(a->imm),
139
+ if (sve_access_check(s)) {
86
+ tcg_constant_i32(streaming_vec_reg_size(s)));
140
+ unsigned vsz = vec_full_reg_size(s);
141
+ unsigned high_ofs = high ? vsz / 2 : 0;
142
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
143
+ vec_full_reg_offset(s, a->rn) + high_ofs,
144
+ vec_full_reg_offset(s, a->rm) + high_ofs,
145
+ vsz, vsz, 0, fns[a->esz]);
146
+ }
87
+ }
147
+ return true;
88
+ return true;
148
+}
89
+}
149
+
150
+static bool do_zzz_data_ool(DisasContext *s, arg_rrr_esz *a, int data,
151
+ gen_helper_gvec_3 *fn)
152
+{
153
+ if (sve_access_check(s)) {
154
+ unsigned vsz = vec_full_reg_size(s);
155
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
156
+ vec_full_reg_offset(s, a->rn),
157
+ vec_full_reg_offset(s, a->rm),
158
+ vsz, vsz, data, fn);
159
+ }
160
+ return true;
161
+}
162
+
163
+static bool trans_ZIP1_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
164
+{
165
+ return do_zip(s, a, false);
166
+}
167
+
168
+static bool trans_ZIP2_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
169
+{
170
+ return do_zip(s, a, true);
171
+}
172
+
173
+static gen_helper_gvec_3 * const uzp_fns[4] = {
174
+ gen_helper_sve_uzp_b, gen_helper_sve_uzp_h,
175
+ gen_helper_sve_uzp_s, gen_helper_sve_uzp_d,
176
+};
177
+
178
+static bool trans_UZP1_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
179
+{
180
+ return do_zzz_data_ool(s, a, 0, uzp_fns[a->esz]);
181
+}
182
+
183
+static bool trans_UZP2_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
184
+{
185
+ return do_zzz_data_ool(s, a, 1 << a->esz, uzp_fns[a->esz]);
186
+}
187
+
188
+static gen_helper_gvec_3 * const trn_fns[4] = {
189
+ gen_helper_sve_trn_b, gen_helper_sve_trn_h,
190
+ gen_helper_sve_trn_s, gen_helper_sve_trn_d,
191
+};
192
+
193
+static bool trans_TRN1_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
194
+{
195
+ return do_zzz_data_ool(s, a, 0, trn_fns[a->esz]);
196
+}
197
+
198
+static bool trans_TRN2_z(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
199
+{
200
+ return do_zzz_data_ool(s, a, 1 << a->esz, trn_fns[a->esz]);
201
+}
202
+
203
/*
204
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
205
*/
206
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
207
index XXXXXXX..XXXXXXX 100644
208
--- a/target/arm/sve.decode
209
+++ b/target/arm/sve.decode
210
@@ -XXX,XX +XXX,XX @@ REV_p 00000101 .. 11 0100 010 000 0 .... 0 .... @pd_pn
211
PUNPKLO 00000101 00 11 0000 010 000 0 .... 0 .... @pd_pn_e0
212
PUNPKHI 00000101 00 11 0001 010 000 0 .... 0 .... @pd_pn_e0
213
214
+### SVE Permute - Interleaving Group
215
+
216
+# SVE permute vector elements
217
+ZIP1_z 00000101 .. 1 ..... 011 000 ..... ..... @rd_rn_rm
218
+ZIP2_z 00000101 .. 1 ..... 011 001 ..... ..... @rd_rn_rm
219
+UZP1_z 00000101 .. 1 ..... 011 010 ..... ..... @rd_rn_rm
220
+UZP2_z 00000101 .. 1 ..... 011 011 ..... ..... @rd_rn_rm
221
+TRN1_z 00000101 .. 1 ..... 011 100 ..... ..... @rd_rn_rm
222
+TRN2_z 00000101 .. 1 ..... 011 101 ..... ..... @rd_rn_rm
223
+
224
### SVE Predicate Logical Operations Group
225
226
# SVE predicate logical operations
227
--
90
--
228
2.17.1
91
2.25.1
229
230
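The row-selection rule in helper_sme_zero() falls straight out of the 8-way tile interleaving: row i of the ZA storage belongs to tile (i % 8), so bit (i % 8) of the immediate decides whether row i is cleared. A sketch with made-up numbers (both imm and svl below are hypothetical, not taken from the patch):

    #include <stdio.h>

    int main(void)
    {
        unsigned imm = 0x21;   /* hypothetical: select tiles 0 and 5 */
        unsigned svl = 32;     /* hypothetical: SVL of 256 bits, 32 rows */

        for (unsigned i = 0; i < svl; i++) {
            if (imm & (1u << (i % 8))) {
                printf("clear row %u\n", i);
            }
        }
        return 0;
    }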
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We can reuse the SVE functions for implementing moves to/from
4
horizontal tile slices, but we need new ones for moves to/from
5
vertical tile slices.
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20180613015641.5667-16-richard.henderson@linaro.org
9
Message-id: 20220708151540.18136-20-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
11
---
8
target/arm/helper-sve.h | 2 +
12
target/arm/helper-sme.h | 12 +++
9
target/arm/sve_helper.c | 31 ++++++++++++
13
target/arm/helper-sve.h | 2 +
10
target/arm/translate-sve.c | 99 ++++++++++++++++++++++++++++++++++++++
14
target/arm/translate-a64.h | 8 ++
11
target/arm/sve.decode | 8 +++
15
target/arm/translate.h | 5 ++
12
4 files changed, 140 insertions(+)
16
target/arm/sme.decode | 15 ++++
17
target/arm/sme_helper.c | 151 ++++++++++++++++++++++++++++++++++++-
18
target/arm/sve_helper.c | 12 +++
19
target/arm/translate-sme.c | 127 +++++++++++++++++++++++++++++++
 8 files changed, 331 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sme.h
+++ b/target/arm/helper-sme.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_2(set_pstate_sm, TCG_CALL_NO_RWG, void, env, i32)
 DEF_HELPER_FLAGS_2(set_pstate_za, TCG_CALL_NO_RWG, void, env, i32)
 
 DEF_HELPER_FLAGS_3(sme_zero, TCG_CALL_NO_RWG, void, env, i32, i32)
+
+/* Move to/from vertical array slices, i.e. columns, so 'c'. */
+DEF_HELPER_FLAGS_4(sme_mova_cz_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_zc_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_cz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_zc_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_cz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_zc_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_cz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_zc_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_cz_q, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_zc_q, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_sel_zpzz_s, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_sel_zpzz_d, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_q, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_5(sve2_addp_zpzz_b, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -XXX,XX +XXX,XX @@ static inline int pred_gvec_reg_size(DisasContext *s)
     return size_for_gvec(pred_full_reg_size(s));
 }
 
+/* Return a newly allocated pointer to the predicate register. */
+static inline TCGv_ptr pred_full_reg_ptr(DisasContext *s, int regno)
+{
+    TCGv_ptr ret = tcg_temp_new_ptr();
+    tcg_gen_addi_ptr(ret, cpu_env, pred_full_reg_offset(s, regno));
+    return ret;
+}
+
 bool disas_sve(DisasContext *, uint32_t);
 bool disas_sme(DisasContext *, uint32_t);
 
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ static inline int plus_2(DisasContext *s, int x)
     return x + 2;
 }
 
+static inline int plus_12(DisasContext *s, int x)
+{
+    return x + 12;
+}
+
 static inline int times_2(DisasContext *s, int x)
 {
     return x * 2;
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme.decode
+++ b/target/arm/sme.decode
@@ -XXX,XX +XXX,XX @@
 ### SME Misc
 
 ZERO            11000000 00 001 00000000000 imm:8
+
+### SME Move into/from Array
+
+%mova_rs        13:2 !function=plus_12
+&mova           esz rs pg zr za_imm v:bool to_vec:bool
+
+MOVA            11000000 esz:2 00000 0 v:1 .. pg:3 zr:5 0 za_imm:4 \
+                &mova to_vec=0 rs=%mova_rs
+MOVA            11000000 11 00000 1 v:1 .. pg:3 zr:5 0 za_imm:4 \
+                &mova to_vec=0 rs=%mova_rs esz=4
+
+MOVA            11000000 esz:2 00001 0 v:1 .. pg:3 0 za_imm:4 zr:5 \
+                &mova to_vec=1 rs=%mova_rs
+MOVA            11000000 11 00001 1 v:1 .. pg:3 0 za_imm:4 zr:5 \
+                &mova to_vec=1 rs=%mova_rs esz=4
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme_helper.c
+++ b/target/arm/sme_helper.c
@@ -XXX,XX +XXX,XX @@
 
 #include "qemu/osdep.h"
 #include "cpu.h"
-#include "internals.h"
+#include "tcg/tcg-gvec-desc.h"
 #include "exec/helper-proto.h"
+#include "qemu/int128.h"
+#include "vec_internal.h"
 
 /* ResetSVEState */
 void arm_reset_sve_state(CPUARMState *env)
@@ -XXX,XX +XXX,XX @@ void helper_sme_zero(CPUARMState *env, uint32_t imm, uint32_t svl)
         }
     }
 }
+
+
+/*
+ * When considering the ZA storage as an array of elements of
+ * type T, the index within that array of the Nth element of
+ * a vertical slice of a tile can be calculated like this,
+ * regardless of the size of type T. This is because the tiles
+ * are interleaved, so if type T is size N bytes then row 1 of
+ * the tile is N rows away from row 0. The division by N to
+ * convert a byte offset into an array index and the multiplication
+ * by N to convert from vslice-index-within-the-tile to
+ * the index within the ZA storage cancel out.
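+ *
+ * For example (an illustrative check, following directly from the
+ * arithmetic above): with 4-byte elements, element 1 of a vertical
+ * slice sits 4 storage rows below element 0, i.e. a further
+ * 4 * sizeof(ARMVectorReg) bytes on; divided by the 4-byte element
+ * size that is sizeof(ARMVectorReg) array entries -- the same stride
+ * the macro below produces for every element size.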
+ */
+#define tile_vslice_index(i) ((i) * sizeof(ARMVectorReg))
+
+/*
+ * When doing byte arithmetic on the ZA storage, the element
+ * byteoff bytes away in a tile vertical slice is always this
+ * many bytes away in the ZA storage, regardless of the
+ * size of the tile element, assuming that byteoff is a multiple
+ * of the element size. Again this is because of the interleaving
+ * of the tiles. For instance if we have 1 byte per element then
+ * each row of the ZA storage has one byte of the vslice data,
+ * and (counting from 0) byte 8 goes in row 8 of the storage
+ * at offset (8 * row-size-in-bytes).
+ * If we have 8 bytes per element then each row of the ZA storage
+ * has 8 bytes of the data, but there are 8 interleaved tiles and
+ * so byte 8 of the data goes into row 1 of the tile,
+ * which is again row 8 of the storage, so the offset is still
+ * (8 * row-size-in-bytes). Similarly for other element sizes.
+ */
+#define tile_vslice_offset(byteoff) ((byteoff) * sizeof(ARMVectorReg))
+
+
+/*
+ * Move Zreg vector to ZArray column.
+ */
+#define DO_MOVA_C(NAME, TYPE, H)                                        \
+void HELPER(NAME)(void *za, void *vn, void *vg, uint32_t desc)          \
+{                                                                       \
+    int i, oprsz = simd_oprsz(desc);                                    \
+    for (i = 0; i < oprsz; ) {                                          \
+        uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));                 \
+        do {                                                            \
+            if (pg & 1) {                                               \
+                *(TYPE *)(za + tile_vslice_offset(i)) = *(TYPE *)(vn + H(i)); \
+            }                                                           \
+            i += sizeof(TYPE);                                          \
+            pg >>= sizeof(TYPE);                                        \
+        } while (i & 15);                                               \
+    }                                                                   \
+}
+
+DO_MOVA_C(sme_mova_cz_b, uint8_t, H1)
+DO_MOVA_C(sme_mova_cz_h, uint16_t, H1_2)
+DO_MOVA_C(sme_mova_cz_s, uint32_t, H1_4)
+
+void HELPER(sme_mova_cz_d)(void *za, void *vn, void *vg, uint32_t desc)
+{
+    int i, oprsz = simd_oprsz(desc) / 8;
+    uint8_t *pg = vg;
+    uint64_t *n = vn;
+    uint64_t *a = za;
+
+    for (i = 0; i < oprsz; i++) {
+        if (pg[H1(i)] & 1) {
+            a[tile_vslice_index(i)] = n[i];
+        }
+    }
+}
+
+void HELPER(sme_mova_cz_q)(void *za, void *vn, void *vg, uint32_t desc)
+{
+    int i, oprsz = simd_oprsz(desc) / 16;
+    uint16_t *pg = vg;
+    Int128 *n = vn;
+    Int128 *a = za;
+
+    /*
+     * Int128 is used here simply to copy 16 bytes, and to simplify
+     * the address arithmetic.
+     */
+    for (i = 0; i < oprsz; i++) {
+        if (pg[H2(i)] & 1) {
+            a[tile_vslice_index(i)] = n[i];
+        }
+    }
+}
+
+#undef DO_MOVA_C
+
+/*
+ * Move ZArray column to Zreg vector.
+ */
+#define DO_MOVA_Z(NAME, TYPE, H)                                        \
+void HELPER(NAME)(void *vd, void *za, void *vg, uint32_t desc)          \
+{                                                                       \
+    int i, oprsz = simd_oprsz(desc);                                    \
+    for (i = 0; i < oprsz; ) {                                          \
+        uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));                 \
+        do {                                                            \
+            if (pg & 1) {                                               \
+                *(TYPE *)(vd + H(i)) = *(TYPE *)(za + tile_vslice_offset(i)); \
+            }                                                           \
+            i += sizeof(TYPE);                                          \
+            pg >>= sizeof(TYPE);                                        \
+        } while (i & 15);                                               \
+    }                                                                   \
+}
+
+DO_MOVA_Z(sme_mova_zc_b, uint8_t, H1)
+DO_MOVA_Z(sme_mova_zc_h, uint16_t, H1_2)
+DO_MOVA_Z(sme_mova_zc_s, uint32_t, H1_4)
+
+void HELPER(sme_mova_zc_d)(void *vd, void *za, void *vg, uint32_t desc)
+{
+    int i, oprsz = simd_oprsz(desc) / 8;
+    uint8_t *pg = vg;
+    uint64_t *d = vd;
+    uint64_t *a = za;
+
+    for (i = 0; i < oprsz; i++) {
+        if (pg[H1(i)] & 1) {
+            d[i] = a[tile_vslice_index(i)];
+        }
+    }
+}
+
+void HELPER(sme_mova_zc_q)(void *vd, void *za, void *vg, uint32_t desc)
+{
+    int i, oprsz = simd_oprsz(desc) / 16;
+    uint16_t *pg = vg;
+    Int128 *d = vd;
+    Int128 *a = za;
+
+    /*
+     * Int128 is used here simply to copy 16 bytes, and to simplify
+     * the address arithmetic.
+     */
+    for (i = 0; i < oprsz; i++, za += sizeof(ARMVectorReg)) {
+        if (pg[H2(i)] & 1) {
+            d[i] = a[tile_vslice_index(i)];
+        }
+    }
+}
+
+#undef DO_MOVA_Z
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_sel_zpzz_d)(void *vd, void *vn, void *vm,
     }
 }
 
+void HELPER(sve_sel_zpzz_q)(void *vd, void *vn, void *vm,
+                            void *vg, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 16;
+    Int128 *d = vd, *n = vn, *m = vm;
+    uint16_t *pg = vg;
+
+    for (i = 0; i < opr_sz; i += 1) {
+        d[i] = (pg[H2(i)] & 1 ? n : m)[i];
+    }
+}
+
 /* Two operand comparison controlled by a predicate.
  * ??? It is very tempting to want to be able to expand this inline
  * with x86 instructions, e.g.
diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sme.c
+++ b/target/arm/translate-sme.c
@@ -XXX,XX +XXX,XX @@
 #include "decode-sme.c.inc"
 
 
+/*
+ * Resolve tile.size[index] to a host pointer, where tile and index
+ * are always decoded together, dependent on the element size.
+ */
+static TCGv_ptr get_tile_rowcol(DisasContext *s, int esz, int rs,
+                                int tile_index, bool vertical)
+{
+    int tile = tile_index >> (4 - esz);
+    int index = esz == MO_128 ? 0 : extract32(tile_index, 0, 4 - esz);
+    int pos, len, offset;
+    TCGv_i32 tmp;
+    TCGv_ptr addr;
+
+    /* Compute the final index, which is Rs+imm. */
+    tmp = tcg_temp_new_i32();
+    tcg_gen_trunc_tl_i32(tmp, cpu_reg(s, rs));
+    tcg_gen_addi_i32(tmp, tmp, index);
+
+    /* Prepare a power-of-two modulo via extraction of @len bits. */
+    len = ctz32(streaming_vec_reg_size(s)) - esz;
+
+    if (vertical) {
+        /*
+         * Compute the byte offset of the index within the tile:
+         *     (index % (svl / size)) * size
+         *   = (index % (svl >> esz)) << esz
+         * Perform the power-of-two modulo via extraction of the low @len bits.
+         * Perform the multiply by shifting left by @pos bits.
+         * Perform these operations simultaneously via deposit into zero.
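+         *
+         * A worked instance (numbers chosen for illustration only):
+         * for esz == MO_32 with a 64-byte svl, len = ctz32(64) - 2 = 4
+         * and pos = 2, so the deposit yields (index % 16) * 4, the
+         * byte offset of the selected element among the 16 words of
+         * the slice.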
+         */
+        pos = esz;
+        tcg_gen_deposit_z_i32(tmp, tmp, pos, len);
+
+        /*
+         * For big-endian, adjust the indexed column byte offset within
+         * the uint64_t host words that make up env->zarray[].
+         */
+        if (HOST_BIG_ENDIAN && esz < MO_64) {
+            tcg_gen_xori_i32(tmp, tmp, 8 - (1 << esz));
+        }
+    } else {
+        /*
+         * Compute the byte offset of the index within the tile:
+         *     (index % (svl / size)) * (size * sizeof(row))
+         *   = (index % (svl >> esz)) << (esz + log2(sizeof(row)))
+         */
+        pos = esz + ctz32(sizeof(ARMVectorReg));
+        tcg_gen_deposit_z_i32(tmp, tmp, pos, len);
+
+        /* Row slices are always aligned and need no endian adjustment. */
+    }
+
+    /* The tile byte offset within env->zarray is the row. */
+    offset = tile * sizeof(ARMVectorReg);
+
+    /* Include the byte offset of zarray to make this relative to env. */
+    offset += offsetof(CPUARMState, zarray);
+    tcg_gen_addi_i32(tmp, tmp, offset);
+
+    /* Add the byte offset to env to produce the final pointer. */
+    addr = tcg_temp_new_ptr();
+    tcg_gen_ext_i32_ptr(addr, tmp);
+    tcg_temp_free_i32(tmp);
+    tcg_gen_add_ptr(addr, addr, cpu_env);
+
+    return addr;
+}
+
 static bool trans_ZERO(DisasContext *s, arg_ZERO *a)
 {
     if (!dc_isar_feature(aa64_sme, s)) {
@@ -XXX,XX +XXX,XX @@ static bool trans_ZERO(DisasContext *s, arg_ZERO *a)
     }
     return true;
 }
+
+static bool trans_MOVA(DisasContext *s, arg_MOVA *a)
+{
+    static gen_helper_gvec_4 * const h_fns[5] = {
+        gen_helper_sve_sel_zpzz_b, gen_helper_sve_sel_zpzz_h,
+        gen_helper_sve_sel_zpzz_s, gen_helper_sve_sel_zpzz_d,
+        gen_helper_sve_sel_zpzz_q
+    };
+    static gen_helper_gvec_3 * const cz_fns[5] = {
+        gen_helper_sme_mova_cz_b, gen_helper_sme_mova_cz_h,
+        gen_helper_sme_mova_cz_s, gen_helper_sme_mova_cz_d,
+        gen_helper_sme_mova_cz_q,
+    };
+    static gen_helper_gvec_3 * const zc_fns[5] = {
+        gen_helper_sme_mova_zc_b, gen_helper_sme_mova_zc_h,
+        gen_helper_sme_mova_zc_s, gen_helper_sme_mova_zc_d,
+        gen_helper_sme_mova_zc_q,
+    };
+
+    TCGv_ptr t_za, t_zr, t_pg;
+    TCGv_i32 t_desc;
+    int svl;
+
+    if (!dc_isar_feature(aa64_sme, s)) {
+        return false;
+    }
+    if (!sme_smza_enabled_check(s)) {
+        return true;
+    }
+
+    t_za = get_tile_rowcol(s, a->esz, a->rs, a->za_imm, a->v);
+    t_zr = vec_full_reg_ptr(s, a->zr);
+    t_pg = pred_full_reg_ptr(s, a->pg);
+
+    svl = streaming_vec_reg_size(s);
+    t_desc = tcg_constant_i32(simd_desc(svl, svl, 0));
+
+    if (a->v) {
+        /* Vertical slice -- use sme mova helpers. */
+        if (a->to_vec) {
+            zc_fns[a->esz](t_zr, t_za, t_pg, t_desc);
+        } else {
+            cz_fns[a->esz](t_za, t_zr, t_pg, t_desc);
+        }
+    } else {
+        /* Horizontal slice -- reuse sve sel helpers. */
+        if (a->to_vec) {
+            h_fns[a->esz](t_zr, t_za, t_zr, t_pg, t_desc);
+        } else {
+            h_fns[a->esz](t_za, t_zr, t_za, t_pg, t_desc);
+        }
+    }
+
+    tcg_temp_free_ptr(t_za);
+    tcg_temp_free_ptr(t_zr);
+    tcg_temp_free_ptr(t_pg);
+
+    return true;
+}
-- 
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

We cannot reuse the SVE functions for LD[1-4] and ST[1-4],
because those functions accept only a Zreg register number.
For SME, we want to pass a pointer into ZA storage.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
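
[Not part of the commit: a stand-alone sketch of the tile-slice
addressing the vertical-slice primitives below rely on. ROW_BYTES,
tile and col are assumed example values; ROW_BYTES stands in for
sizeof(ARMVectorReg), the size of one ZA storage row.]

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ROW_BYTES 32                      /* assumed svl in bytes */
static uint8_t za[ROW_BYTES][ROW_BYTES];  /* ZA: svl rows of svl bytes */

/* Byte offset of byte 'byteoff' of a vertical slice, as in this patch. */
#define tile_vslice_offset(byteoff) ((byteoff) * ROW_BYTES)

int main(void)
{
    /* 4-byte elements: 4 interleaved tiles; tile t owns rows t, t+4, ... */
    size_t esz = 4, tile = 1, col = 3;
    uint8_t *base = &za[tile][col * esz];   /* base of one vertical slice */

    for (size_t n = 0; n < ROW_BYTES / esz; n++) {
        /* Element n the long way: storage row tile + n*esz, same column. */
        uint8_t *longway = &za[tile + n * esz][col * esz];
        /* ...and via the single multiply used by the macro. */
        assert(longway == base + tile_vslice_offset(n * esz));
    }
    return 0;
}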
 target/arm/helper-sme.h    |  82 +++++
 target/arm/sme.decode      |   9 +
 target/arm/sme_helper.c    | 595 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-sme.c |  70 +++++
 4 files changed, 756 insertions(+)

diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sme.h
+++ b/target/arm/helper-sme.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sme_mova_cz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sme_mova_zc_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sme_mova_cz_q, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sme_mova_zc_q, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sme_ld1b_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1b_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1b_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1b_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_5(sme_ld1h_be_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1h_le_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1h_be_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1h_le_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1h_be_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1h_le_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1h_be_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1h_le_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_5(sme_ld1s_be_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1s_le_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1s_be_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1s_le_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1s_be_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1s_le_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1s_be_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1s_le_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_5(sme_ld1d_be_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1d_le_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1d_be_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1d_le_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1d_be_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1d_le_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1d_be_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1d_le_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_5(sme_ld1q_be_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1q_le_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1q_be_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1q_le_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1q_be_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1q_le_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1q_be_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1q_le_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_5(sme_st1b_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1b_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1b_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1b_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_5(sme_st1h_be_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1h_le_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1h_be_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1h_le_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1h_be_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1h_le_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1h_be_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1h_le_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_5(sme_st1s_be_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1s_le_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1s_be_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1s_le_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1s_be_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1s_le_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1s_be_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1s_le_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_5(sme_st1d_be_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1d_le_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1d_be_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1d_le_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1d_be_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1d_le_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1d_be_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1d_le_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_5(sme_st1q_be_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1q_le_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1q_be_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1q_le_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1q_be_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1q_le_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1q_be_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1q_le_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme.decode
+++ b/target/arm/sme.decode
@@ -XXX,XX +XXX,XX @@ MOVA            11000000 esz:2 00001 0 v:1 .. pg:3 0 za_imm:4 zr:5 \
                 &mova to_vec=1 rs=%mova_rs
 MOVA            11000000 11 00001 1 v:1 .. pg:3 0 za_imm:4 zr:5 \
                 &mova to_vec=1 rs=%mova_rs esz=4
+
+### SME Memory
+
+&ldst           esz rs pg rn rm za_imm v:bool st:bool
+
+LDST1           1110000 0 esz:2 st:1 rm:5 v:1 .. pg:3 rn:5 0 za_imm:4 \
+                &ldst rs=%mova_rs
+LDST1           1110000 111 st:1 rm:5 v:1 .. pg:3 rn:5 0 za_imm:4 \
+                &ldst esz=4 rs=%mova_rs
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme_helper.c
+++ b/target/arm/sme_helper.c
@@ -XXX,XX +XXX,XX @@
 
 #include "qemu/osdep.h"
 #include "cpu.h"
+#include "internals.h"
 #include "tcg/tcg-gvec-desc.h"
 #include "exec/helper-proto.h"
+#include "exec/cpu_ldst.h"
+#include "exec/exec-all.h"
 #include "qemu/int128.h"
 #include "vec_internal.h"
+#include "sve_ldst_internal.h"
 
 /* ResetSVEState */
 void arm_reset_sve_state(CPUARMState *env)
@@ -XXX,XX +XXX,XX @@ void HELPER(sme_mova_zc_q)(void *vd, void *za, void *vg, uint32_t desc)
     }
 }
 
 #undef DO_MOVA_Z
+
+/*
+ * Clear elements in a tile slice comprising len bytes.
+ */
+
+typedef void ClearFn(void *ptr, size_t off, size_t len);
+
+static void clear_horizontal(void *ptr, size_t off, size_t len)
+{
+    memset(ptr + off, 0, len);
+}
+
+static void clear_vertical_b(void *vptr, size_t off, size_t len)
+{
+    for (size_t i = 0; i < len; ++i) {
+        *(uint8_t *)(vptr + tile_vslice_offset(i + off)) = 0;
+    }
+}
+
+static void clear_vertical_h(void *vptr, size_t off, size_t len)
+{
+    for (size_t i = 0; i < len; i += 2) {
+        *(uint16_t *)(vptr + tile_vslice_offset(i + off)) = 0;
+    }
+}
+
+static void clear_vertical_s(void *vptr, size_t off, size_t len)
+{
+    for (size_t i = 0; i < len; i += 4) {
+        *(uint32_t *)(vptr + tile_vslice_offset(i + off)) = 0;
+    }
+}
+
+static void clear_vertical_d(void *vptr, size_t off, size_t len)
+{
+    for (size_t i = 0; i < len; i += 8) {
+        *(uint64_t *)(vptr + tile_vslice_offset(i + off)) = 0;
+    }
+}
+
+static void clear_vertical_q(void *vptr, size_t off, size_t len)
+{
+    for (size_t i = 0; i < len; i += 16) {
+        memset(vptr + tile_vslice_offset(i + off), 0, 16);
+    }
+}
+
+/*
+ * Copy elements from an array into a tile slice comprising len bytes.
+ */
+
+typedef void CopyFn(void *dst, const void *src, size_t len);
+
+static void copy_horizontal(void *dst, const void *src, size_t len)
+{
+    memcpy(dst, src, len);
+}
+
+static void copy_vertical_b(void *vdst, const void *vsrc, size_t len)
+{
+    const uint8_t *src = vsrc;
+    uint8_t *dst = vdst;
+    size_t i;
+
+    for (i = 0; i < len; ++i) {
+        dst[tile_vslice_index(i)] = src[i];
+    }
+}
+
+static void copy_vertical_h(void *vdst, const void *vsrc, size_t len)
+{
+    const uint16_t *src = vsrc;
+    uint16_t *dst = vdst;
+    size_t i;
+
+    for (i = 0; i < len / 2; ++i) {
+        dst[tile_vslice_index(i)] = src[i];
+    }
+}
+
+static void copy_vertical_s(void *vdst, const void *vsrc, size_t len)
+{
+    const uint32_t *src = vsrc;
+    uint32_t *dst = vdst;
+    size_t i;
+
+    for (i = 0; i < len / 4; ++i) {
+        dst[tile_vslice_index(i)] = src[i];
+    }
+}
+
+static void copy_vertical_d(void *vdst, const void *vsrc, size_t len)
+{
+    const uint64_t *src = vsrc;
+    uint64_t *dst = vdst;
+    size_t i;
+
+    for (i = 0; i < len / 8; ++i) {
+        dst[tile_vslice_index(i)] = src[i];
+    }
+}
+
+static void copy_vertical_q(void *vdst, const void *vsrc, size_t len)
+{
+    for (size_t i = 0; i < len; i += 16) {
+        memcpy(vdst + tile_vslice_offset(i), vsrc + i, 16);
+    }
+}
+
+/*
+ * Host and TLB primitives for vertical tile slice addressing.
+ */
+
+#define DO_LD(NAME, TYPE, HOST, TLB)                                        \
+static inline void sme_##NAME##_v_host(void *za, intptr_t off, void *host)  \
+{                                                                           \
+    TYPE val = HOST(host);                                                  \
+    *(TYPE *)(za + tile_vslice_offset(off)) = val;                          \
+}                                                                           \
+static inline void sme_##NAME##_v_tlb(CPUARMState *env, void *za,           \
+                        intptr_t off, target_ulong addr, uintptr_t ra)      \
+{                                                                           \
+    TYPE val = TLB(env, useronly_clean_ptr(addr), ra);                      \
+    *(TYPE *)(za + tile_vslice_offset(off)) = val;                          \
+}
+
+#define DO_ST(NAME, TYPE, HOST, TLB)                                        \
+static inline void sme_##NAME##_v_host(void *za, intptr_t off, void *host)  \
+{                                                                           \
+    TYPE val = *(TYPE *)(za + tile_vslice_offset(off));                     \
+    HOST(host, val);                                                        \
+}                                                                           \
+static inline void sme_##NAME##_v_tlb(CPUARMState *env, void *za,           \
+                        intptr_t off, target_ulong addr, uintptr_t ra)      \
+{                                                                           \
+    TYPE val = *(TYPE *)(za + tile_vslice_offset(off));                     \
+    TLB(env, useronly_clean_ptr(addr), val, ra);                            \
+}
+
+/*
+ * The ARMVectorReg elements are stored in host-endian 64-bit units.
+ * For 128-bit quantities, the sequence defined by the Elem[] pseudocode
+ * corresponds to storing the two 64-bit pieces in little-endian order.
+ */
+#define DO_LDQ(HNAME, VNAME, BE, HOST, TLB)                                 \
+static inline void HNAME##_host(void *za, intptr_t off, void *host)         \
+{                                                                           \
+    uint64_t val0 = HOST(host), val1 = HOST(host + 8);                      \
+    uint64_t *ptr = za + off;                                               \
+    ptr[0] = BE ? val1 : val0, ptr[1] = BE ? val0 : val1;                   \
+}                                                                           \
+static inline void VNAME##_v_host(void *za, intptr_t off, void *host)       \
+{                                                                           \
+    HNAME##_host(za, tile_vslice_offset(off), host);                        \
+}                                                                           \
+static inline void HNAME##_tlb(CPUARMState *env, void *za, intptr_t off,    \
+                               target_ulong addr, uintptr_t ra)             \
+{                                                                           \
+    uint64_t val0 = TLB(env, useronly_clean_ptr(addr), ra);                 \
+    uint64_t val1 = TLB(env, useronly_clean_ptr(addr + 8), ra);             \
+    uint64_t *ptr = za + off;                                               \
+    ptr[0] = BE ? val1 : val0, ptr[1] = BE ? val0 : val1;                   \
+}                                                                           \
+static inline void VNAME##_v_tlb(CPUARMState *env, void *za, intptr_t off,  \
+                                 target_ulong addr, uintptr_t ra)           \
+{                                                                           \
+    HNAME##_tlb(env, za, tile_vslice_offset(off), addr, ra);                \
+}
+
+#define DO_STQ(HNAME, VNAME, BE, HOST, TLB)                                 \
+static inline void HNAME##_host(void *za, intptr_t off, void *host)         \
+{                                                                           \
+    uint64_t *ptr = za + off;                                               \
+    HOST(host, ptr[BE]);                                                    \
+    HOST(host + 1, ptr[!BE]);                                               \
+}                                                                           \
+static inline void VNAME##_v_host(void *za, intptr_t off, void *host)       \
+{                                                                           \
+    HNAME##_host(za, tile_vslice_offset(off), host);                        \
+}                                                                           \
+static inline void HNAME##_tlb(CPUARMState *env, void *za, intptr_t off,    \
+                               target_ulong addr, uintptr_t ra)             \
+{                                                                           \
+    uint64_t *ptr = za + off;                                               \
+    TLB(env, useronly_clean_ptr(addr), ptr[BE], ra);                        \
+    TLB(env, useronly_clean_ptr(addr + 8), ptr[!BE], ra);                   \
+}                                                                           \
+static inline void VNAME##_v_tlb(CPUARMState *env, void *za, intptr_t off,  \
+                                 target_ulong addr, uintptr_t ra)           \
+{                                                                           \
+    HNAME##_tlb(env, za, tile_vslice_offset(off), addr, ra);                \
+}
+
+DO_LD(ld1b, uint8_t, ldub_p, cpu_ldub_data_ra)
+DO_LD(ld1h_be, uint16_t, lduw_be_p, cpu_lduw_be_data_ra)
+DO_LD(ld1h_le, uint16_t, lduw_le_p, cpu_lduw_le_data_ra)
+DO_LD(ld1s_be, uint32_t, ldl_be_p, cpu_ldl_be_data_ra)
+DO_LD(ld1s_le, uint32_t, ldl_le_p, cpu_ldl_le_data_ra)
+DO_LD(ld1d_be, uint64_t, ldq_be_p, cpu_ldq_be_data_ra)
+DO_LD(ld1d_le, uint64_t, ldq_le_p, cpu_ldq_le_data_ra)
+
+DO_LDQ(sve_ld1qq_be, sme_ld1q_be, 1, ldq_be_p, cpu_ldq_be_data_ra)
+DO_LDQ(sve_ld1qq_le, sme_ld1q_le, 0, ldq_le_p, cpu_ldq_le_data_ra)
+
+DO_ST(st1b, uint8_t, stb_p, cpu_stb_data_ra)
+DO_ST(st1h_be, uint16_t, stw_be_p, cpu_stw_be_data_ra)
+DO_ST(st1h_le, uint16_t, stw_le_p, cpu_stw_le_data_ra)
+DO_ST(st1s_be, uint32_t, stl_be_p, cpu_stl_be_data_ra)
+DO_ST(st1s_le, uint32_t, stl_le_p, cpu_stl_le_data_ra)
+DO_ST(st1d_be, uint64_t, stq_be_p, cpu_stq_be_data_ra)
+DO_ST(st1d_le, uint64_t, stq_le_p, cpu_stq_le_data_ra)
+
+DO_STQ(sve_st1qq_be, sme_st1q_be, 1, stq_be_p, cpu_stq_be_data_ra)
+DO_STQ(sve_st1qq_le, sme_st1q_le, 0, stq_le_p, cpu_stq_le_data_ra)
+
+#undef DO_LD
+#undef DO_ST
+#undef DO_LDQ
+#undef DO_STQ
+
+/*
+ * Common helper for all contiguous predicated loads.
+ */
+
+static inline QEMU_ALWAYS_INLINE
+void sme_ld1(CPUARMState *env, void *za, uint64_t *vg,
+             const target_ulong addr, uint32_t desc, const uintptr_t ra,
+             const int esz, uint32_t mtedesc, bool vertical,
+             sve_ldst1_host_fn *host_fn,
+             sve_ldst1_tlb_fn *tlb_fn,
+             ClearFn *clr_fn,
+             CopyFn *cpy_fn)
+{
+    const intptr_t reg_max = simd_oprsz(desc);
+    const intptr_t esize = 1 << esz;
+    intptr_t reg_off, reg_last;
+    SVEContLdSt info;
+    void *host;
+    int flags;
+
+    /* Find the active elements. */
+    if (!sve_cont_ldst_elements(&info, addr, vg, reg_max, esz, esize)) {
+        /* The entire predicate was false; no load occurs. */
+        clr_fn(za, 0, reg_max);
+        return;
+    }
+
+    /* Probe the page(s). Exit with exception for any invalid page. */
+    sve_cont_ldst_pages(&info, FAULT_ALL, env, addr, MMU_DATA_LOAD, ra);
+
+    /* Handle watchpoints for all active elements. */
+    sve_cont_ldst_watchpoints(&info, env, vg, addr, esize, esize,
+                              BP_MEM_READ, ra);
+
+    /*
+     * Handle mte checks for all active elements.
+     * Since TBI must be set for MTE, !mtedesc => !mte_active.
+     */
+    if (mtedesc) {
+        sve_cont_ldst_mte_check(&info, env, vg, addr, esize, esize,
+                                mtedesc, ra);
+    }
+
+    flags = info.page[0].flags | info.page[1].flags;
+    if (unlikely(flags != 0)) {
+#ifdef CONFIG_USER_ONLY
+        g_assert_not_reached();
+#else
+        /*
+         * At least one page includes MMIO.
+         * Any bus operation can fail with cpu_transaction_failed,
+         * which for ARM will raise SyncExternal. Perform the load
+         * into scratch memory to preserve register state until the end.
+         */
+        ARMVectorReg scratch = { };
+
+        reg_off = info.reg_off_first[0];
+        reg_last = info.reg_off_last[1];
+        if (reg_last < 0) {
+            reg_last = info.reg_off_split;
+            if (reg_last < 0) {
+                reg_last = info.reg_off_last[0];
+            }
+        }
+
+        do {
+            uint64_t pg = vg[reg_off >> 6];
+            do {
+                if ((pg >> (reg_off & 63)) & 1) {
+                    tlb_fn(env, &scratch, reg_off, addr + reg_off, ra);
+                }
+                reg_off += esize;
+            } while (reg_off & 63);
+        } while (reg_off <= reg_last);
+
+        cpy_fn(za, &scratch, reg_max);
+        return;
+#endif
+    }
+
+    /* The entire operation is in RAM, on valid pages. */
+
+    reg_off = info.reg_off_first[0];
+    reg_last = info.reg_off_last[0];
+    host = info.page[0].host;
+
+    if (!vertical) {
+        memset(za, 0, reg_max);
+    } else if (reg_off) {
+        clr_fn(za, 0, reg_off);
+    }
+
+    while (reg_off <= reg_last) {
+        uint64_t pg = vg[reg_off >> 6];
+        do {
+            if ((pg >> (reg_off & 63)) & 1) {
+                host_fn(za, reg_off, host + reg_off);
+            } else if (vertical) {
+                clr_fn(za, reg_off, esize);
+            }
+            reg_off += esize;
+        } while (reg_off <= reg_last && (reg_off & 63));
+    }
+
+    /*
+     * Use the slow path to manage the cross-page misalignment.
+     * But we know this is RAM and cannot trap.
+     */
+    reg_off = info.reg_off_split;
+    if (unlikely(reg_off >= 0)) {
+        tlb_fn(env, za, reg_off, addr + reg_off, ra);
+    }
+
+    reg_off = info.reg_off_first[1];
+    if (unlikely(reg_off >= 0)) {
+        reg_last = info.reg_off_last[1];
+        host = info.page[1].host;
+
+        do {
+            uint64_t pg = vg[reg_off >> 6];
+            do {
+                if ((pg >> (reg_off & 63)) & 1) {
+                    host_fn(za, reg_off, host + reg_off);
+                } else if (vertical) {
+                    clr_fn(za, reg_off, esize);
+                }
+                reg_off += esize;
+            } while (reg_off & 63);
+        } while (reg_off <= reg_last);
+    }
+}
+
+static inline QEMU_ALWAYS_INLINE
+void sme_ld1_mte(CPUARMState *env, void *za, uint64_t *vg,
+                 target_ulong addr, uint32_t desc, uintptr_t ra,
+                 const int esz, bool vertical,
+                 sve_ldst1_host_fn *host_fn,
+                 sve_ldst1_tlb_fn *tlb_fn,
+                 ClearFn *clr_fn,
+                 CopyFn *cpy_fn)
+{
+    uint32_t mtedesc = desc >> (SIMD_DATA_SHIFT + SVE_MTEDESC_SHIFT);
+    int bit55 = extract64(addr, 55, 1);
+
+    /* Remove mtedesc from the normal sve descriptor. */
+    desc = extract32(desc, 0, SIMD_DATA_SHIFT + SVE_MTEDESC_SHIFT);
+
+    /* Perform gross MTE suppression early. */
+    if (!tbi_check(desc, bit55) ||
+        tcma_check(desc, bit55, allocation_tag_from_addr(addr))) {
+        mtedesc = 0;
+    }
+
+    sme_ld1(env, za, vg, addr, desc, ra, esz, mtedesc, vertical,
+            host_fn, tlb_fn, clr_fn, cpy_fn);
+}
+
+#define DO_LD(L, END, ESZ)                                                  \
+void HELPER(sme_ld1##L##END##_h)(CPUARMState *env, void *za, void *vg,     \
+                                 target_ulong addr, uint32_t desc)         \
+{                                                                           \
+    sme_ld1(env, za, vg, addr, desc, GETPC(), ESZ, 0, false,                \
+            sve_ld1##L##L##END##_host, sve_ld1##L##L##END##_tlb,            \
+            clear_horizontal, copy_horizontal);                             \
+}                                                                           \
+void HELPER(sme_ld1##L##END##_v)(CPUARMState *env, void *za, void *vg,     \
+                                 target_ulong addr, uint32_t desc)         \
+{                                                                           \
+    sme_ld1(env, za, vg, addr, desc, GETPC(), ESZ, 0, true,                 \
+            sme_ld1##L##END##_v_host, sme_ld1##L##END##_v_tlb,              \
+            clear_vertical_##L, copy_vertical_##L);                         \
+}                                                                           \
+void HELPER(sme_ld1##L##END##_h_mte)(CPUARMState *env, void *za, void *vg, \
+                                     target_ulong addr, uint32_t desc)     \
+{                                                                           \
+    sme_ld1_mte(env, za, vg, addr, desc, GETPC(), ESZ, false,               \
+                sve_ld1##L##L##END##_host, sve_ld1##L##L##END##_tlb,        \
+                clear_horizontal, copy_horizontal);                         \
+}                                                                           \
+void HELPER(sme_ld1##L##END##_v_mte)(CPUARMState *env, void *za, void *vg, \
+                                     target_ulong addr, uint32_t desc)     \
+{                                                                           \
+    sme_ld1_mte(env, za, vg, addr, desc, GETPC(), ESZ, true,                \
+                sme_ld1##L##END##_v_host, sme_ld1##L##END##_v_tlb,          \
+                clear_vertical_##L, copy_vertical_##L);                     \
+}
+
+DO_LD(b, , MO_8)
+DO_LD(h, _be, MO_16)
+DO_LD(h, _le, MO_16)
+DO_LD(s, _be, MO_32)
+DO_LD(s, _le, MO_32)
+DO_LD(d, _be, MO_64)
+DO_LD(d, _le, MO_64)
+DO_LD(q, _be, MO_128)
+DO_LD(q, _le, MO_128)
+
+#undef DO_LD
+
+/*
+ * Common helper for all contiguous predicated stores.
+ */
+
+static inline QEMU_ALWAYS_INLINE
+void sme_st1(CPUARMState *env, void *za, uint64_t *vg,
+             const target_ulong addr, uint32_t desc, const uintptr_t ra,
+             const int esz, uint32_t mtedesc, bool vertical,
+             sve_ldst1_host_fn *host_fn,
+             sve_ldst1_tlb_fn *tlb_fn)
+{
+    const intptr_t reg_max = simd_oprsz(desc);
+    const intptr_t esize = 1 << esz;
+    intptr_t reg_off, reg_last;
+    SVEContLdSt info;
+    void *host;
+    int flags;
+
+    /* Find the active elements. */
+    if (!sve_cont_ldst_elements(&info, addr, vg, reg_max, esz, esize)) {
+        /* The entire predicate was false; no store occurs. */
+        return;
+    }
+
+    /* Probe the page(s). Exit with exception for any invalid page. */
+    sve_cont_ldst_pages(&info, FAULT_ALL, env, addr, MMU_DATA_STORE, ra);
+
+    /* Handle watchpoints for all active elements. */
+    sve_cont_ldst_watchpoints(&info, env, vg, addr, esize, esize,
+                              BP_MEM_WRITE, ra);
+
+    /*
+     * Handle mte checks for all active elements.
+     * Since TBI must be set for MTE, !mtedesc => !mte_active.
+     */
+    if (mtedesc) {
+        sve_cont_ldst_mte_check(&info, env, vg, addr, esize, esize,
+                                mtedesc, ra);
+    }
+
+    flags = info.page[0].flags | info.page[1].flags;
+    if (unlikely(flags != 0)) {
+#ifdef CONFIG_USER_ONLY
+        g_assert_not_reached();
+#else
+        /*
+         * At least one page includes MMIO.
+         * Any bus operation can fail with cpu_transaction_failed,
+         * which for ARM will raise SyncExternal. We cannot avoid
+         * this fault and will leave with the store incomplete.
+         */
+        reg_off = info.reg_off_first[0];
+        reg_last = info.reg_off_last[1];
+        if (reg_last < 0) {
+            reg_last = info.reg_off_split;
+            if (reg_last < 0) {
+                reg_last = info.reg_off_last[0];
+            }
+        }
+
+        do {
+            uint64_t pg = vg[reg_off >> 6];
+            do {
+                if ((pg >> (reg_off & 63)) & 1) {
+                    tlb_fn(env, za, reg_off, addr + reg_off, ra);
+                }
+                reg_off += esize;
+            } while (reg_off & 63);
+        } while (reg_off <= reg_last);
+        return;
+#endif
+    }
+
+    reg_off = info.reg_off_first[0];
+    reg_last = info.reg_off_last[0];
+    host = info.page[0].host;
+
+    while (reg_off <= reg_last) {
+        uint64_t pg = vg[reg_off >> 6];
+        do {
+            if ((pg >> (reg_off & 63)) & 1) {
+                host_fn(za, reg_off, host + reg_off);
+            }
+            reg_off += 1 << esz;
+        } while (reg_off <= reg_last && (reg_off & 63));
+    }
+
+    /*
+     * Use the slow path to manage the cross-page misalignment.
+     * But we know this is RAM and cannot trap.
+     */
+    reg_off = info.reg_off_split;
+    if (unlikely(reg_off >= 0)) {
+        tlb_fn(env, za, reg_off, addr + reg_off, ra);
+    }
+
+    reg_off = info.reg_off_first[1];
+    if (unlikely(reg_off >= 0)) {
+        reg_last = info.reg_off_last[1];
+        host = info.page[1].host;
+
+        do {
+            uint64_t pg = vg[reg_off >> 6];
+            do {
+                if ((pg >> (reg_off & 63)) & 1) {
+                    host_fn(za, reg_off, host + reg_off);
+                }
+                reg_off += 1 << esz;
+            } while (reg_off & 63);
+        } while (reg_off <= reg_last);
+    }
+}
+
+static inline QEMU_ALWAYS_INLINE
+void sme_st1_mte(CPUARMState *env, void *za, uint64_t *vg, target_ulong addr,
+                 uint32_t desc, uintptr_t ra, int esz, bool vertical,
+                 sve_ldst1_host_fn *host_fn,
+                 sve_ldst1_tlb_fn *tlb_fn)
+{
+    uint32_t mtedesc = desc >> (SIMD_DATA_SHIFT + SVE_MTEDESC_SHIFT);
+    int bit55 = extract64(addr, 55, 1);
+
+    /* Remove mtedesc from the normal sve descriptor. */
+    desc = extract32(desc, 0, SIMD_DATA_SHIFT + SVE_MTEDESC_SHIFT);
+
+    /* Perform gross MTE suppression early. */
+    if (!tbi_check(desc, bit55) ||
+        tcma_check(desc, bit55, allocation_tag_from_addr(addr))) {
+        mtedesc = 0;
+    }
+
+    sme_st1(env, za, vg, addr, desc, ra, esz, mtedesc,
+            vertical, host_fn, tlb_fn);
+}
+
+#define DO_ST(L, END, ESZ)                                                  \
+void HELPER(sme_st1##L##END##_h)(CPUARMState *env, void *za, void *vg,     \
+                                 target_ulong addr, uint32_t desc)         \
+{                                                                           \
+    sme_st1(env, za, vg, addr, desc, GETPC(), ESZ, 0, false,                \
+            sve_st1##L##L##END##_host, sve_st1##L##L##END##_tlb);           \
+}                                                                           \
+void HELPER(sme_st1##L##END##_v)(CPUARMState *env, void *za, void *vg,     \
+                                 target_ulong addr, uint32_t desc)         \
+{                                                                           \
+    sme_st1(env, za, vg, addr, desc, GETPC(), ESZ, 0, true,                 \
+            sme_st1##L##END##_v_host, sme_st1##L##END##_v_tlb);             \
+}                                                                           \
+void HELPER(sme_st1##L##END##_h_mte)(CPUARMState *env, void *za, void *vg, \
+                                     target_ulong addr, uint32_t desc)     \
+{                                                                           \
+    sme_st1_mte(env, za, vg, addr, desc, GETPC(), ESZ, false,               \
+                sve_st1##L##L##END##_host, sve_st1##L##L##END##_tlb);       \
+}                                                                           \
+void HELPER(sme_st1##L##END##_v_mte)(CPUARMState *env, void *za, void *vg, \
+                                     target_ulong addr, uint32_t desc)     \
+{                                                                           \
+    sme_st1_mte(env, za, vg, addr, desc, GETPC(), ESZ, true,                \
+                sme_st1##L##END##_v_host, sme_st1##L##END##_v_tlb);         \
+}
+
+DO_ST(b, , MO_8)
+DO_ST(h, _be, MO_16)
+DO_ST(h, _le, MO_16)
+DO_ST(s, _be, MO_32)
+DO_ST(s, _le, MO_32)
+DO_ST(d, _be, MO_64)
+DO_ST(d, _le, MO_64)
+DO_ST(q, _be, MO_128)
+DO_ST(q, _le, MO_128)
+
+#undef DO_ST
diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sme.c
+++ b/target/arm/translate-sme.c
@@ -XXX,XX +XXX,XX @@ static bool trans_MOVA(DisasContext *s, arg_MOVA *a)
 
     return true;
 }
+
+static bool trans_LDST1(DisasContext *s, arg_LDST1 *a)
+{
+    typedef void GenLdSt1(TCGv_env, TCGv_ptr, TCGv_ptr, TCGv, TCGv_i32);
+
+    /*
+     * Indexed by [esz][be][v][mte][st], which is (except for load/store)
+     * also the order in which the elements appear in the function names,
+     * and so how we must concatenate the pieces.
+     */
+
+#define FN_LS(F)     { gen_helper_sme_ld1##F, gen_helper_sme_st1##F }
+#define FN_MTE(F)    { FN_LS(F), FN_LS(F##_mte) }
+#define FN_HV(F)     { FN_MTE(F##_h), FN_MTE(F##_v) }
+#define FN_END(L, B) { FN_HV(L), FN_HV(B) }
+
+    static GenLdSt1 * const fns[5][2][2][2][2] = {
+        FN_END(b, b),
+        FN_END(h_le, h_be),
+        FN_END(s_le, s_be),
+        FN_END(d_le, d_be),
+        FN_END(q_le, q_be),
+    };
+
+#undef FN_LS
+#undef FN_MTE
+#undef FN_HV
774
+#undef FN_END
775
+
776
+ TCGv_ptr t_za, t_pg;
777
+ TCGv_i64 addr;
778
+ int svl, desc = 0;
779
+ bool be = s->be_data == MO_BE;
780
+ bool mte = s->mte_active[0];
781
+
782
+ if (!dc_isar_feature(aa64_sme, s)) {
373
+ return false;
783
+ return false;
374
+ }
784
+ }
375
+ if (!sve_access_check(s)) {
785
+ if (!sme_smza_enabled_check(s)) {
376
+ return true;
786
+ return true;
377
+ }
787
+ }
378
+
788
+
379
+ vsz = vec_full_reg_size(s);
789
+ t_za = get_tile_rowcol(s, a->esz, a->rs, a->za_imm, a->v);
380
+ t = tcg_const_i32(simd_desc(vsz, vsz, 0));
790
+ t_pg = pred_full_reg_ptr(s, a->pg);
381
+ pd = tcg_temp_new_ptr();
791
+ addr = tcg_temp_new_i64();
382
+ zn = tcg_temp_new_ptr();
792
+
383
+ zm = tcg_temp_new_ptr();
793
+ tcg_gen_shli_i64(addr, cpu_reg(s, a->rm), a->esz);
384
+ pg = tcg_temp_new_ptr();
794
+ tcg_gen_add_i64(addr, addr, cpu_reg_sp(s, a->rn));
385
+
795
+
386
+ tcg_gen_addi_ptr(pd, cpu_env, pred_full_reg_offset(s, a->rd));
796
+ if (mte) {
387
+ tcg_gen_addi_ptr(zn, cpu_env, vec_full_reg_offset(s, a->rn));
797
+ desc = FIELD_DP32(desc, MTEDESC, MIDX, get_mem_index(s));
388
+ tcg_gen_addi_ptr(zm, cpu_env, vec_full_reg_offset(s, a->rm));
798
+ desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
389
+ tcg_gen_addi_ptr(pg, cpu_env, pred_full_reg_offset(s, a->pg));
799
+ desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
390
+
800
+ desc = FIELD_DP32(desc, MTEDESC, WRITE, a->st);
391
+ gen_fn(t, pd, zn, zm, pg, t);
801
+ desc = FIELD_DP32(desc, MTEDESC, SIZEM1, (1 << a->esz) - 1);
392
+
802
+ desc <<= SVE_MTEDESC_SHIFT;
393
+ tcg_temp_free_ptr(pd);
803
+ } else {
394
+ tcg_temp_free_ptr(zn);
804
+ addr = clean_data_tbi(s, addr);
395
+ tcg_temp_free_ptr(zm);
805
+ }
396
+ tcg_temp_free_ptr(pg);
806
+ svl = streaming_vec_reg_size(s);
397
+
807
+ desc = simd_desc(svl, svl, desc);
398
+ do_pred_flags(t);
808
+
399
+
809
+ fns[a->esz][be][a->v][mte][a->st](cpu_env, t_za, t_pg, addr,
400
+ tcg_temp_free_i32(t);
810
+ tcg_constant_i32(desc));
811
+
812
+ tcg_temp_free_ptr(t_za);
813
+ tcg_temp_free_ptr(t_pg);
814
+ tcg_temp_free_i64(addr);
401
+ return true;
815
+ return true;
402
+}
816
+}
403
+
404
+#define DO_PPZZ(NAME, name) \
405
+static bool trans_##NAME##_ppzz(DisasContext *s, arg_rprr_esz *a, \
406
+ uint32_t insn) \
407
+{ \
408
+ static gen_helper_gvec_flags_4 * const fns[4] = { \
409
+ gen_helper_sve_##name##_ppzz_b, gen_helper_sve_##name##_ppzz_h, \
410
+ gen_helper_sve_##name##_ppzz_s, gen_helper_sve_##name##_ppzz_d, \
411
+ }; \
412
+ return do_ppzz_flags(s, a, fns[a->esz]); \
413
+}
414
+
415
+DO_PPZZ(CMPEQ, cmpeq)
416
+DO_PPZZ(CMPNE, cmpne)
417
+DO_PPZZ(CMPGT, cmpgt)
418
+DO_PPZZ(CMPGE, cmpge)
419
+DO_PPZZ(CMPHI, cmphi)
420
+DO_PPZZ(CMPHS, cmphs)
421
+
422
+#undef DO_PPZZ
423
+
424
+#define DO_PPZW(NAME, name) \
425
+static bool trans_##NAME##_ppzw(DisasContext *s, arg_rprr_esz *a, \
426
+ uint32_t insn) \
427
+{ \
428
+ static gen_helper_gvec_flags_4 * const fns[4] = { \
429
+ gen_helper_sve_##name##_ppzw_b, gen_helper_sve_##name##_ppzw_h, \
430
+ gen_helper_sve_##name##_ppzw_s, NULL \
431
+ }; \
432
+ return do_ppzz_flags(s, a, fns[a->esz]); \
433
+}
434
+
435
+DO_PPZW(CMPEQ, cmpeq)
436
+DO_PPZW(CMPNE, cmpne)
437
+DO_PPZW(CMPGT, cmpgt)
438
+DO_PPZW(CMPGE, cmpge)
439
+DO_PPZW(CMPHI, cmphi)
440
+DO_PPZW(CMPHS, cmphs)
441
+DO_PPZW(CMPLT, cmplt)
442
+DO_PPZW(CMPLE, cmple)
443
+DO_PPZW(CMPLO, cmplo)
444
+DO_PPZW(CMPLS, cmpls)
445
+
446
+#undef DO_PPZW
447
+
448
/*
449
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
450
*/
451
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
452
index XXXXXXX..XXXXXXX 100644
453
--- a/target/arm/sve.decode
454
+++ b/target/arm/sve.decode
455
@@ -XXX,XX +XXX,XX @@
456
@rdm_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 \
457
&rprr_esz rm=%reg_movprfx
458
@rd_pg4_rn_rm ........ esz:2 . rm:5 .. pg:4 rn:5 rd:5 &rprr_esz
459
+@pd_pg_rn_rm ........ esz:2 . rm:5 ... pg:3 rn:5 . rd:4 &rprr_esz
460
461
# Three register operand, with governing predicate, vector element size
462
@rda_pg_rn_rm ........ esz:2 . rm:5 ... pg:3 rn:5 rd:5 \
463
@@ -XXX,XX +XXX,XX @@ SPLICE 00000101 .. 101 100 100 ... ..... ..... @rdn_pg_rm
464
# SVE select vector elements (predicated)
465
SEL_zpzz 00000101 .. 1 ..... 11 .... ..... ..... @rd_pg4_rn_rm
466
467
+### SVE Integer Compare - Vectors Group
468
+
469
+# SVE integer compare_vectors
470
+CMPHS_ppzz 00100100 .. 0 ..... 000 ... ..... 0 .... @pd_pg_rn_rm
471
+CMPHI_ppzz 00100100 .. 0 ..... 000 ... ..... 1 .... @pd_pg_rn_rm
472
+CMPGE_ppzz 00100100 .. 0 ..... 100 ... ..... 0 .... @pd_pg_rn_rm
473
+CMPGT_ppzz 00100100 .. 0 ..... 100 ... ..... 1 .... @pd_pg_rn_rm
474
+CMPEQ_ppzz 00100100 .. 0 ..... 101 ... ..... 0 .... @pd_pg_rn_rm
475
+CMPNE_ppzz 00100100 .. 0 ..... 101 ... ..... 1 .... @pd_pg_rn_rm
476
+
477
+# SVE integer compare with wide elements
478
+# Note these require esz != 3.
479
+CMPEQ_ppzw 00100100 .. 0 ..... 001 ... ..... 0 .... @pd_pg_rn_rm
480
+CMPNE_ppzw 00100100 .. 0 ..... 001 ... ..... 1 .... @pd_pg_rn_rm
481
+CMPGE_ppzw 00100100 .. 0 ..... 010 ... ..... 0 .... @pd_pg_rn_rm
482
+CMPGT_ppzw 00100100 .. 0 ..... 010 ... ..... 1 .... @pd_pg_rn_rm
483
+CMPLT_ppzw 00100100 .. 0 ..... 011 ... ..... 0 .... @pd_pg_rn_rm
484
+CMPLE_ppzw 00100100 .. 0 ..... 011 ... ..... 1 .... @pd_pg_rn_rm
485
+CMPHS_ppzw 00100100 .. 0 ..... 110 ... ..... 0 .... @pd_pg_rn_rm
486
+CMPHI_ppzw 00100100 .. 0 ..... 110 ... ..... 1 .... @pd_pg_rn_rm
487
+CMPLO_ppzw 00100100 .. 0 ..... 111 ... ..... 0 .... @pd_pg_rn_rm
488
+CMPLS_ppzw 00100100 .. 0 ..... 111 ... ..... 1 .... @pd_pg_rn_rm
489
+
490
### SVE Predicate Logical Operations Group
491
492
# SVE predicate logical operations
493
--
817
--
494
2.17.1
818
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Add a TCGv_ptr base argument, which will be cpu_env for SVE.
We will reuse this for SME save and restore array insns.
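
As a sketch of the intended call shapes after this change (illustration
only; "za_base" is a stand-in name, not an identifier from this patch):

    /* SVE LDR: the register file lives in CPUARMState, so pass cpu_env. */
    gen_sve_ldr(s, cpu_env, vec_full_reg_offset(s, a->rd),
                size, a->rn, a->imm * size);

    /* SME, later in this series: pass a pointer into the ZA array. */
    gen_sve_ldr(s, za_base, 0, svl, a->rn, a->imm * svl);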
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.h |  3 +++
 target/arm/translate-sve.c | 48 ++++++++++++++++++++++++++++----------
 2 files changed, 39 insertions(+), 12 deletions(-)

diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -XXX,XX +XXX,XX @@ void gen_gvec_xar(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
                   uint32_t rm_ofs, int64_t shift,
                   uint32_t opr_sz, uint32_t max_sz);
 
+void gen_sve_ldr(DisasContext *s, TCGv_ptr, int vofs, int len, int rn, int imm);
+void gen_sve_str(DisasContext *s, TCGv_ptr, int vofs, int len, int rn, int imm);
+
 #endif /* TARGET_ARM_TRANSLATE_A64_H */
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(UCVTF_dd, aa64_sve, gen_gvec_fpst_arg_zpz,
  * The load should begin at the address Rn + IMM.
  */
 
-static void do_ldr(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
+void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
+                 int len, int rn, int imm)
 {
     int len_align = QEMU_ALIGN_DOWN(len, 8);
     int len_remain = len % 8;
@@ -XXX,XX +XXX,XX @@ static void do_ldr(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
         t0 = tcg_temp_new_i64();
         for (i = 0; i < len_align; i += 8) {
             tcg_gen_qemu_ld_i64(t0, clean_addr, midx, MO_LEUQ);
-            tcg_gen_st_i64(t0, cpu_env, vofs + i);
+            tcg_gen_st_i64(t0, base, vofs + i);
             tcg_gen_addi_i64(clean_addr, clean_addr, 8);
         }
         tcg_temp_free_i64(t0);
@@ -XXX,XX +XXX,XX @@ static void do_ldr(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
         clean_addr = new_tmp_a64_local(s);
         tcg_gen_mov_i64(clean_addr, t0);
 
+        if (base != cpu_env) {
+            TCGv_ptr b = tcg_temp_local_new_ptr();
+            tcg_gen_mov_ptr(b, base);
+            base = b;
+        }
+
         gen_set_label(loop);
 
         t0 = tcg_temp_new_i64();
@@ -XXX,XX +XXX,XX @@ static void do_ldr(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
         tcg_gen_addi_i64(clean_addr, clean_addr, 8);
 
         tp = tcg_temp_new_ptr();
-        tcg_gen_add_ptr(tp, cpu_env, i);
+        tcg_gen_add_ptr(tp, base, i);
         tcg_gen_addi_ptr(i, i, 8);
         tcg_gen_st_i64(t0, tp, vofs);
         tcg_temp_free_ptr(tp);
@@ -XXX,XX +XXX,XX @@ static void do_ldr(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
 
         tcg_gen_brcondi_ptr(TCG_COND_LTU, i, len_align, loop);
         tcg_temp_free_ptr(i);
+
+        if (base != cpu_env) {
+            tcg_temp_free_ptr(base);
+            assert(len_remain == 0);
+        }
     }
 
     /*
@@ -XXX,XX +XXX,XX @@ static void do_ldr(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
         default:
             g_assert_not_reached();
         }
-        tcg_gen_st_i64(t0, cpu_env, vofs + len_align);
+        tcg_gen_st_i64(t0, base, vofs + len_align);
         tcg_temp_free_i64(t0);
     }
 }
 
 /* Similarly for stores.  */
-static void do_str(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
+void gen_sve_str(DisasContext *s, TCGv_ptr base, int vofs,
+                 int len, int rn, int imm)
 {
     int len_align = QEMU_ALIGN_DOWN(len, 8);
     int len_remain = len % 8;
@@ -XXX,XX +XXX,XX @@ static void do_str(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
 
     t0 = tcg_temp_new_i64();
     for (i = 0; i < len_align; i += 8) {
-        tcg_gen_ld_i64(t0, cpu_env, vofs + i);
+        tcg_gen_ld_i64(t0, base, vofs + i);
         tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUQ);
         tcg_gen_addi_i64(clean_addr, clean_addr, 8);
     }
@@ -XXX,XX +XXX,XX @@ static void do_str(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
         clean_addr = new_tmp_a64_local(s);
         tcg_gen_mov_i64(clean_addr, t0);
 
+        if (base != cpu_env) {
+            TCGv_ptr b = tcg_temp_local_new_ptr();
+            tcg_gen_mov_ptr(b, base);
+            base = b;
+        }
+
         gen_set_label(loop);
 
         t0 = tcg_temp_new_i64();
         tp = tcg_temp_new_ptr();
-        tcg_gen_add_ptr(tp, cpu_env, i);
+        tcg_gen_add_ptr(tp, base, i);
         tcg_gen_ld_i64(t0, tp, vofs);
         tcg_gen_addi_ptr(i, i, 8);
         tcg_temp_free_ptr(tp);
@@ -XXX,XX +XXX,XX @@ static void do_str(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
 
         tcg_gen_brcondi_ptr(TCG_COND_LTU, i, len_align, loop);
         tcg_temp_free_ptr(i);
+
+        if (base != cpu_env) {
+            tcg_temp_free_ptr(base);
+            assert(len_remain == 0);
+        }
     }
 
     /* Predicate register stores can be any multiple of 2.  */
     if (len_remain) {
         t0 = tcg_temp_new_i64();
-        tcg_gen_ld_i64(t0, cpu_env, vofs + len_align);
+        tcg_gen_ld_i64(t0, base, vofs + len_align);
 
         switch (len_remain) {
         case 2:
@@ -XXX,XX +XXX,XX @@ static bool trans_LDR_zri(DisasContext *s, arg_rri *a)
     if (sve_access_check(s)) {
         int size = vec_full_reg_size(s);
         int off = vec_full_reg_offset(s, a->rd);
-        do_ldr(s, off, size, a->rn, a->imm * size);
+        gen_sve_ldr(s, cpu_env, off, size, a->rn, a->imm * size);
     }
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool trans_LDR_pri(DisasContext *s, arg_rri *a)
     if (sve_access_check(s)) {
         int size = pred_full_reg_size(s);
         int off = pred_full_reg_offset(s, a->rd);
-        do_ldr(s, off, size, a->rn, a->imm * size);
+        gen_sve_ldr(s, cpu_env, off, size, a->rn, a->imm * size);
     }
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool trans_STR_zri(DisasContext *s, arg_rri *a)
     if (sve_access_check(s)) {
         int size = vec_full_reg_size(s);
         int off = vec_full_reg_offset(s, a->rd);
-        do_str(s, off, size, a->rn, a->imm * size);
+        gen_sve_str(s, cpu_env, off, size, a->rn, a->imm * size);
     }
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool trans_STR_pri(DisasContext *s, arg_rri *a)
     if (sve_access_check(s)) {
         int size = pred_full_reg_size(s);
         int off = pred_full_reg_offset(s, a->rd);
-        do_str(s, off, size, a->rn, a->imm * size);
+        gen_sve_str(s, cpu_env, off, size, a->rn, a->imm * size);
     }
     return true;
 }
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

We can reuse the SVE functions for LDR and STR, passing in the
base of the ZA vector and a zero offset.
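
Roughly, and only as illustration: since ZA[n] equates to ZA0H.B[n],
the translator can fetch a pointer to the start of the ZA backing store
and hand it straight to the SVE load/store code with a zero offset:

    base = get_tile_rowcol(s, MO_8, a->rv, a->imm, false);
    gen_sve_ldr(s, base, 0, svl, a->rn, a->imm * svl);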
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sme.decode      |  7 +++++++
 target/arm/translate-sme.c | 24 ++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/target/arm/sme.decode b/target/arm/sme.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme.decode
+++ b/target/arm/sme.decode
@@ -XXX,XX +XXX,XX @@ LDST1           1110000 0 esz:2 st:1 rm:5 v:1 .. pg:3 rn:5 0 za_imm:4  \
                 &ldst rs=%mova_rs
 LDST1           1110000 111     st:1 rm:5 v:1 .. pg:3 rn:5 0 za_imm:4  \
                 &ldst esz=4 rs=%mova_rs
+
+&ldstr          rv rn imm
+@ldstr          ....... ... . ...... .. ... rn:5 . imm:4 \
+                &ldstr rv=%mova_rs
+
+LDR             1110000 100 0 000000 .. 000 ..... 0 ....        @ldstr
+STR             1110000 100 1 000000 .. 000 ..... 0 ....        @ldstr
diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sme.c
+++ b/target/arm/translate-sme.c
@@ -XXX,XX +XXX,XX @@ static bool trans_LDST1(DisasContext *s, arg_LDST1 *a)
     tcg_temp_free_i64(addr);
     return true;
 }
+
+typedef void GenLdStR(DisasContext *, TCGv_ptr, int, int, int, int);
+
+static bool do_ldst_r(DisasContext *s, arg_ldstr *a, GenLdStR *fn)
+{
+    int svl = streaming_vec_reg_size(s);
+    int imm = a->imm;
+    TCGv_ptr base;
+
+    if (!sme_za_enabled_check(s)) {
+        return true;
+    }
+
+    /* ZA[n] equates to ZA0H.B[n]. */
+    base = get_tile_rowcol(s, MO_8, a->rv, imm, false);
+
+    fn(s, base, 0, svl, a->rn, imm * svl);
+
+    tcg_temp_free_ptr(base);
+    return true;
+}
+
+TRANS_FEAT(LDR, aa64_sme, do_ldst_r, a, gen_sve_ldr)
+TRANS_FEAT(STR, aa64_sme, do_ldst_r, a, gen_sve_str)
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
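A rough scalar model of ADDHA for 32-bit elements (illustration only;
row_active()/col_active() are stand-ins for the predicate tests, and the
real helpers below also handle the ZA tile layout):

    for (row = 0; row < dim; ++row) {
        if (row_active(pn, row)) {
            for (col = 0; col < dim; ++col) {
                if (col_active(pm, col)) {
                    za[row][col] += zn[col];   /* ADDVA adds zn[row] */
                }
            }
        }
    }

For 32-bit elements the predicate bits are four bits apart, which is why
the helpers step the predicate words with "pa >>= 4" in groups of 16.
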
 target/arm/helper-sme.h    |  5 +++
 target/arm/sme.decode      | 11 +++++
 target/arm/sme_helper.c    | 90 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-sme.c | 31 +++++++++++++
 4 files changed, 137 insertions(+)

diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sme.h
+++ b/target/arm/helper-sme.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sme_st1q_be_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i
 DEF_HELPER_FLAGS_5(sme_st1q_le_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
 DEF_HELPER_FLAGS_5(sme_st1q_be_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
 DEF_HELPER_FLAGS_5(sme_st1q_le_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_5(sme_addha_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sme_addva_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sme_addha_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sme_addva_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme.decode
+++ b/target/arm/sme.decode
@@ -XXX,XX +XXX,XX @@ LDST1           1110000 111     st:1 rm:5 v:1 .. pg:3 rn:5 0 za_imm:4  \
 
 LDR             1110000 100 0 000000 .. 000 ..... 0 ....        @ldstr
 STR             1110000 100 1 000000 .. 000 ..... 0 ....        @ldstr
+
+### SME Add Vector to Array
+
+&adda           zad zn pm pn
+@adda_32        ........ .. ..... . pm:3 pn:3 zn:5 ... zad:2    &adda
+@adda_64        ........ .. ..... . pm:3 pn:3 zn:5 ..  zad:3    &adda
+
+ADDHA_s         11000000 10 01000 0 ... ... ..... 000 ..        @adda_32
+ADDVA_s         11000000 10 01000 1 ... ... ..... 000 ..        @adda_32
+ADDHA_d         11000000 11 01000 0 ... ... ..... 00 ...        @adda_64
+ADDVA_d         11000000 11 01000 1 ... ... ..... 00 ...        @adda_64
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme_helper.c
+++ b/target/arm/sme_helper.c
@@ -XXX,XX +XXX,XX @@ DO_ST(q, _be, MO_128)
 DO_ST(q, _le, MO_128)
 
 #undef DO_ST
+
+void HELPER(sme_addha_s)(void *vzda, void *vzn, void *vpn,
+                         void *vpm, uint32_t desc)
+{
+    intptr_t row, col, oprsz = simd_oprsz(desc) / 4;
+    uint64_t *pn = vpn, *pm = vpm;
+    uint32_t *zda = vzda, *zn = vzn;
+
+    for (row = 0; row < oprsz; ) {
+        uint64_t pa = pn[row >> 4];
+        do {
+            if (pa & 1) {
+                for (col = 0; col < oprsz; ) {
+                    uint64_t pb = pm[col >> 4];
+                    do {
+                        if (pb & 1) {
+                            zda[tile_vslice_index(row) + H4(col)] += zn[H4(col)];
+                        }
+                        pb >>= 4;
+                    } while (++col & 15);
+                }
+            }
+            pa >>= 4;
+        } while (++row & 15);
+    }
+}
+
+void HELPER(sme_addha_d)(void *vzda, void *vzn, void *vpn,
+                         void *vpm, uint32_t desc)
+{
+    intptr_t row, col, oprsz = simd_oprsz(desc) / 8;
+    uint8_t *pn = vpn, *pm = vpm;
+    uint64_t *zda = vzda, *zn = vzn;
+
+    for (row = 0; row < oprsz; ++row) {
+        if (pn[H1(row)] & 1) {
+            for (col = 0; col < oprsz; ++col) {
+                if (pm[H1(col)] & 1) {
+                    zda[tile_vslice_index(row) + col] += zn[col];
+                }
+            }
+        }
+    }
+}
+
+void HELPER(sme_addva_s)(void *vzda, void *vzn, void *vpn,
+                         void *vpm, uint32_t desc)
+{
+    intptr_t row, col, oprsz = simd_oprsz(desc) / 4;
+    uint64_t *pn = vpn, *pm = vpm;
+    uint32_t *zda = vzda, *zn = vzn;
+
+    for (row = 0; row < oprsz; ) {
+        uint64_t pa = pn[row >> 4];
+        do {
+            if (pa & 1) {
+                uint32_t zn_row = zn[H4(row)];
+                for (col = 0; col < oprsz; ) {
+                    uint64_t pb = pm[col >> 4];
+                    do {
+                        if (pb & 1) {
+                            zda[tile_vslice_index(row) + H4(col)] += zn_row;
+                        }
+                        pb >>= 4;
+                    } while (++col & 15);
+                }
+            }
+            pa >>= 4;
+        } while (++row & 15);
+    }
+}
+
+void HELPER(sme_addva_d)(void *vzda, void *vzn, void *vpn,
+                         void *vpm, uint32_t desc)
+{
+    intptr_t row, col, oprsz = simd_oprsz(desc) / 8;
+    uint8_t *pn = vpn, *pm = vpm;
+    uint64_t *zda = vzda, *zn = vzn;
+
+    for (row = 0; row < oprsz; ++row) {
+        if (pn[H1(row)] & 1) {
+            uint64_t zn_row = zn[row];
+            for (col = 0; col < oprsz; ++col) {
+                if (pm[H1(col)] & 1) {
+                    zda[tile_vslice_index(row) + col] += zn_row;
+                }
+            }
+        }
+    }
+}
diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sme.c
+++ b/target/arm/translate-sme.c
@@ -XXX,XX +XXX,XX @@ static bool do_ldst_r(DisasContext *s, arg_ldstr *a, GenLdStR *fn)
 
 TRANS_FEAT(LDR, aa64_sme, do_ldst_r, a, gen_sve_ldr)
 TRANS_FEAT(STR, aa64_sme, do_ldst_r, a, gen_sve_str)
+
+static bool do_adda(DisasContext *s, arg_adda *a, MemOp esz,
+                    gen_helper_gvec_4 *fn)
+{
+    int svl = streaming_vec_reg_size(s);
+    uint32_t desc = simd_desc(svl, svl, 0);
+    TCGv_ptr za, zn, pn, pm;
+
+    if (!sme_smza_enabled_check(s)) {
+        return true;
+    }
+
+    /* Sum XZR+zad to find ZAd. */
+    za = get_tile_rowcol(s, esz, 31, a->zad, false);
+    zn = vec_full_reg_ptr(s, a->zn);
+    pn = pred_full_reg_ptr(s, a->pn);
+    pm = pred_full_reg_ptr(s, a->pm);
+
+    fn(za, zn, pn, pm, tcg_constant_i32(desc));
+
+    tcg_temp_free_ptr(za);
+    tcg_temp_free_ptr(zn);
+    tcg_temp_free_ptr(pn);
+    tcg_temp_free_ptr(pm);
+    return true;
+}
+
+TRANS_FEAT(ADDHA_s, aa64_sme, do_adda, a, MO_32, gen_helper_sme_addha_s)
+TRANS_FEAT(ADDVA_s, aa64_sme, do_adda, a, MO_32, gen_helper_sme_addva_s)
+TRANS_FEAT(ADDHA_d, aa64_sme_i16i64, do_adda, a, MO_64, gen_helper_sme_addha_d)
+TRANS_FEAT(ADDVA_d, aa64_sme_i16i64, do_adda, a, MO_64, gen_helper_sme_addva_d)
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
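In outline (illustration only): FMOPA accumulates the outer product of
two vectors into a ZA tile, za[row][col] += zn[row] * zm[col], for rows
and columns whose predicate bits are set; the sub bit (FMOPS) negates
zn.  The accumulation step for fp64, using the names from the patch:

    uint64_t n = zn[row] ^ neg;    /* neg flips the sign bit for FMOPS */
    za_row[col] = float64_muladd(n, zm[col], za_row[col], 0, &fpst);

The helpers copy float_status so the cumulative FP exception flags are
left untouched, and enable default-NaN mode as the architecture requires
for this operation.
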
 target/arm/helper-sme.h    |  5 +++
 target/arm/sme.decode      |  9 +++++
 target/arm/sme_helper.c    | 69 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-sme.c | 32 ++++++++++++++++++
 4 files changed, 115 insertions(+)

diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sme.h
+++ b/target/arm/helper-sme.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sme_addha_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sme_addva_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sme_addha_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sme_addva_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_7(sme_fmopa_s, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_7(sme_fmopa_d, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme.decode
+++ b/target/arm/sme.decode
@@ -XXX,XX +XXX,XX @@ ADDHA_s         11000000 10 01000 0 ... ... ..... 000 ..        @adda_32
 ADDVA_s         11000000 10 01000 1 ... ... ..... 000 ..        @adda_32
 ADDHA_d         11000000 11 01000 0 ... ... ..... 00 ...        @adda_64
 ADDVA_d         11000000 11 01000 1 ... ... ..... 00 ...        @adda_64
+
+### SME Outer Product
+
+&op             zad zn zm pm pn sub:bool
+@op_32          ........ ... zm:5 pm:3 pn:3 zn:5 sub:1 .. zad:2 &op
+@op_64          ........ ... zm:5 pm:3 pn:3 zn:5 sub:1 .  zad:3 &op
+
+FMOPA_s         10000000 100 ..... ... ... ..... . 00 ..        @op_32
+FMOPA_d         10000000 110 ..... ... ... ..... . 0 ...        @op_64
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme_helper.c
+++ b/target/arm/sme_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/cpu_ldst.h"
 #include "exec/exec-all.h"
 #include "qemu/int128.h"
+#include "fpu/softfloat.h"
 #include "vec_internal.h"
 #include "sve_ldst_internal.h"
 
@@ -XXX,XX +XXX,XX @@ void HELPER(sme_addva_d)(void *vzda, void *vzn, void *vpn,
         }
     }
 }
+
+void HELPER(sme_fmopa_s)(void *vza, void *vzn, void *vzm, void *vpn,
+                         void *vpm, void *vst, uint32_t desc)
+{
+    intptr_t row, col, oprsz = simd_maxsz(desc);
+    uint32_t neg = simd_data(desc) << 31;
+    uint16_t *pn = vpn, *pm = vpm;
+    float_status fpst;
+
+    /*
+     * Make a copy of float_status because this operation does not
+     * update the cumulative fp exception status.  It also produces
+     * default nans.
+     */
+    fpst = *(float_status *)vst;
+    set_default_nan_mode(true, &fpst);
+
+    for (row = 0; row < oprsz; ) {
+        uint16_t pa = pn[H2(row >> 4)];
+        do {
+            if (pa & 1) {
+                void *vza_row = vza + tile_vslice_offset(row);
+                uint32_t n = *(uint32_t *)(vzn + H1_4(row)) ^ neg;
+
+                for (col = 0; col < oprsz; ) {
+                    uint16_t pb = pm[H2(col >> 4)];
+                    do {
+                        if (pb & 1) {
+                            uint32_t *a = vza_row + H1_4(col);
+                            uint32_t *m = vzm + H1_4(col);
+                            *a = float32_muladd(n, *m, *a, 0, &fpst);
+                        }
+                        col += 4;
+                        pb >>= 4;
+                    } while (col & 15);
+                }
+            }
+            row += 4;
+            pa >>= 4;
+        } while (row & 15);
+    }
+}
+
+void HELPER(sme_fmopa_d)(void *vza, void *vzn, void *vzm, void *vpn,
+                         void *vpm, void *vst, uint32_t desc)
+{
+    intptr_t row, col, oprsz = simd_oprsz(desc) / 8;
+    uint64_t neg = (uint64_t)simd_data(desc) << 63;
+    uint64_t *za = vza, *zn = vzn, *zm = vzm;
+    uint8_t *pn = vpn, *pm = vpm;
+    float_status fpst = *(float_status *)vst;
+
+    set_default_nan_mode(true, &fpst);
+
+    for (row = 0; row < oprsz; ++row) {
+        if (pn[H1(row)] & 1) {
+            uint64_t *za_row = &za[tile_vslice_index(row)];
+            uint64_t n = zn[row] ^ neg;
+
+            for (col = 0; col < oprsz; ++col) {
+                if (pm[H1(col)] & 1) {
+                    uint64_t *a = &za_row[col];
+                    *a = float64_muladd(n, zm[col], *a, 0, &fpst);
+                }
+            }
+        }
+    }
+}
diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sme.c
+++ b/target/arm/translate-sme.c
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(ADDHA_s, aa64_sme, do_adda, a, MO_32, gen_helper_sme_addha_s)
 TRANS_FEAT(ADDVA_s, aa64_sme, do_adda, a, MO_32, gen_helper_sme_addva_s)
 TRANS_FEAT(ADDHA_d, aa64_sme_i16i64, do_adda, a, MO_64, gen_helper_sme_addha_d)
 TRANS_FEAT(ADDVA_d, aa64_sme_i16i64, do_adda, a, MO_64, gen_helper_sme_addva_d)
+
+static bool do_outprod_fpst(DisasContext *s, arg_op *a, MemOp esz,
+                            gen_helper_gvec_5_ptr *fn)
+{
+    int svl = streaming_vec_reg_size(s);
+    uint32_t desc = simd_desc(svl, svl, a->sub);
+    TCGv_ptr za, zn, zm, pn, pm, fpst;
+
+    if (!sme_smza_enabled_check(s)) {
+        return true;
+    }
+
+    /* Sum XZR+zad to find ZAd. */
+    za = get_tile_rowcol(s, esz, 31, a->zad, false);
+    zn = vec_full_reg_ptr(s, a->zn);
+    zm = vec_full_reg_ptr(s, a->zm);
+    pn = pred_full_reg_ptr(s, a->pn);
+    pm = pred_full_reg_ptr(s, a->pm);
+    fpst = fpstatus_ptr(FPST_FPCR);
+
+    fn(za, zn, zm, pn, pm, fpst, tcg_constant_i32(desc));
+
+    tcg_temp_free_ptr(za);
+    tcg_temp_free_ptr(zn);
+    tcg_temp_free_ptr(pn);
+    tcg_temp_free_ptr(pm);
+    tcg_temp_free_ptr(fpst);
+    return true;
+}
154
+ tcg_gen_xori_i32(last, last, 8 - (1 << esz));
155
+ }
156
+#endif
157
+ tcg_gen_ext_i32_ptr(p, last);
158
+ tcg_gen_add_ptr(p, p, cpu_env);
159
+
160
+ r = load_esz(p, vec_full_reg_offset(s, rm), esz);
161
+ tcg_temp_free_ptr(p);
162
+
163
+ return r;
164
+}
165
+
166
+/* Compute CLAST for a Zreg. */
167
+static bool do_clast_vector(DisasContext *s, arg_rprr_esz *a, bool before)
168
+{
169
+ TCGv_i32 last;
170
+ TCGLabel *over;
171
+ TCGv_i64 ele;
172
+ unsigned vsz, esz = a->esz;
173
+
174
+ if (!sve_access_check(s)) {
175
+ return true;
145
+ return true;
176
+ }
146
+ }
177
+
147
+
178
+ last = tcg_temp_local_new_i32();
148
+ /* Sum XZR+zad to find ZAd. */
179
+ over = gen_new_label();
149
+ za = get_tile_rowcol(s, esz, 31, a->zad, false);
150
+ zn = vec_full_reg_ptr(s, a->zn);
151
+ zm = vec_full_reg_ptr(s, a->zm);
152
+ pn = pred_full_reg_ptr(s, a->pn);
153
+ pm = pred_full_reg_ptr(s, a->pm);
154
+ fpst = fpstatus_ptr(FPST_FPCR);
180
+
155
+
181
+ find_last_active(s, last, esz, a->pg);
156
+ fn(za, zn, zm, pn, pm, fpst, tcg_constant_i32(desc));
182
+
157
+
183
+ /* There is of course no movcond for a 2048-bit vector,
158
+ tcg_temp_free_ptr(za);
184
+ * so we must branch over the actual store.
159
+ tcg_temp_free_ptr(zn);
185
+ */
160
+ tcg_temp_free_ptr(pn);
186
+ tcg_gen_brcondi_i32(TCG_COND_LT, last, 0, over);
161
+ tcg_temp_free_ptr(pm);
187
+
162
+ tcg_temp_free_ptr(fpst);
188
+ if (!before) {
189
+ incr_last_active(s, last, esz);
190
+ }
191
+
192
+ ele = load_last_active(s, last, a->rm, esz);
193
+ tcg_temp_free_i32(last);
194
+
195
+ vsz = vec_full_reg_size(s);
196
+ tcg_gen_gvec_dup_i64(esz, vec_full_reg_offset(s, a->rd), vsz, vsz, ele);
197
+ tcg_temp_free_i64(ele);
198
+
199
+ /* If this insn used MOVPRFX, we may need a second move. */
200
+ if (a->rd != a->rn) {
201
+ TCGLabel *done = gen_new_label();
202
+ tcg_gen_br(done);
203
+
204
+ gen_set_label(over);
205
+ do_mov_z(s, a->rd, a->rn);
206
+
207
+ gen_set_label(done);
208
+ } else {
209
+ gen_set_label(over);
210
+ }
211
+ return true;
163
+ return true;
212
+}
164
+}
213
+
165
+
214
+static bool trans_CLASTA_z(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
166
+TRANS_FEAT(FMOPA_s, aa64_sme, do_outprod_fpst, a, MO_32, gen_helper_sme_fmopa_s)
215
+{
167
+TRANS_FEAT(FMOPA_d, aa64_sme_f64f64, do_outprod_fpst, a, MO_64, gen_helper_sme_fmopa_d)
216
+ return do_clast_vector(s, a, false);
217
+}
218
+
219
+static bool trans_CLASTB_z(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
220
+{
221
+ return do_clast_vector(s, a, true);
222
+}
223
+
224
+/* Compute CLAST for a scalar. */
225
+static void do_clast_scalar(DisasContext *s, int esz, int pg, int rm,
226
+ bool before, TCGv_i64 reg_val)
227
+{
228
+ TCGv_i32 last = tcg_temp_new_i32();
229
+ TCGv_i64 ele, cmp, zero;
230
+
231
+ find_last_active(s, last, esz, pg);
232
+
233
+ /* Extend the original value of last prior to incrementing. */
234
+ cmp = tcg_temp_new_i64();
235
+ tcg_gen_ext_i32_i64(cmp, last);
236
+
237
+ if (!before) {
238
+ incr_last_active(s, last, esz);
239
+ }
240
+
241
+ /* The conceit here is that while last < 0 indicates not found, after
242
+ * adjusting for cpu_env->vfp.zregs[rm], it is still a valid address
243
+ * from which we can load garbage. We then discard the garbage with
244
+ * a conditional move.
245
+ */
246
+ ele = load_last_active(s, last, rm, esz);
247
+ tcg_temp_free_i32(last);
248
+
249
+ zero = tcg_const_i64(0);
250
+ tcg_gen_movcond_i64(TCG_COND_GE, reg_val, cmp, zero, ele, reg_val);
251
+
252
+ tcg_temp_free_i64(zero);
253
+ tcg_temp_free_i64(cmp);
254
+ tcg_temp_free_i64(ele);
255
+}
256
+
257
+/* Compute CLAST for a Vreg. */
258
+static bool do_clast_fp(DisasContext *s, arg_rpr_esz *a, bool before)
259
+{
260
+ if (sve_access_check(s)) {
261
+ int esz = a->esz;
262
+ int ofs = vec_reg_offset(s, a->rd, 0, esz);
263
+ TCGv_i64 reg = load_esz(cpu_env, ofs, esz);
264
+
265
+ do_clast_scalar(s, esz, a->pg, a->rn, before, reg);
266
+ write_fp_dreg(s, a->rd, reg);
267
+ tcg_temp_free_i64(reg);
268
+ }
269
+ return true;
270
+}
271
+
272
+static bool trans_CLASTA_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
273
+{
274
+ return do_clast_fp(s, a, false);
275
+}
276
+
277
+static bool trans_CLASTB_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
278
+{
279
+ return do_clast_fp(s, a, true);
280
+}
281
+
282
+/* Compute CLAST for a Xreg. */
283
+static bool do_clast_general(DisasContext *s, arg_rpr_esz *a, bool before)
284
+{
285
+ TCGv_i64 reg;
286
+
287
+ if (!sve_access_check(s)) {
288
+ return true;
289
+ }
290
+
291
+ reg = cpu_reg(s, a->rd);
292
+ switch (a->esz) {
293
+ case 0:
294
+ tcg_gen_ext8u_i64(reg, reg);
295
+ break;
296
+ case 1:
297
+ tcg_gen_ext16u_i64(reg, reg);
298
+ break;
299
+ case 2:
300
+ tcg_gen_ext32u_i64(reg, reg);
301
+ break;
302
+ case 3:
303
+ break;
304
+ default:
305
+ g_assert_not_reached();
306
+ }
307
+
308
+ do_clast_scalar(s, a->esz, a->pg, a->rn, before, reg);
309
+ return true;
310
+}
311
+
312
+static bool trans_CLASTA_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
313
+{
314
+ return do_clast_general(s, a, false);
315
+}
316
+
317
+static bool trans_CLASTB_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
318
+{
319
+ return do_clast_general(s, a, true);
320
+}
321
+
322
+/* Compute LAST for a scalar. */
323
+static TCGv_i64 do_last_scalar(DisasContext *s, int esz,
324
+ int pg, int rm, bool before)
325
+{
326
+ TCGv_i32 last = tcg_temp_new_i32();
327
+ TCGv_i64 ret;
328
+
329
+ find_last_active(s, last, esz, pg);
330
+ if (before) {
331
+ wrap_last_active(s, last, esz);
332
+ } else {
333
+ incr_last_active(s, last, esz);
334
+ }
335
+
336
+ ret = load_last_active(s, last, rm, esz);
337
+ tcg_temp_free_i32(last);
338
+ return ret;
339
+}
340
+
341
+/* Compute LAST for a Vreg. */
342
+static bool do_last_fp(DisasContext *s, arg_rpr_esz *a, bool before)
343
+{
344
+ if (sve_access_check(s)) {
345
+ TCGv_i64 val = do_last_scalar(s, a->esz, a->pg, a->rn, before);
346
+ write_fp_dreg(s, a->rd, val);
347
+ tcg_temp_free_i64(val);
348
+ }
349
+ return true;
350
+}
351
+
352
+static bool trans_LASTA_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
353
+{
354
+ return do_last_fp(s, a, false);
355
+}
356
+
357
+static bool trans_LASTB_v(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
358
+{
359
+ return do_last_fp(s, a, true);
360
+}
361
+
362
+/* Compute LAST for a Xreg. */
363
+static bool do_last_general(DisasContext *s, arg_rpr_esz *a, bool before)
364
+{
365
+ if (sve_access_check(s)) {
366
+ TCGv_i64 val = do_last_scalar(s, a->esz, a->pg, a->rn, before);
367
+ tcg_gen_mov_i64(cpu_reg(s, a->rd), val);
368
+ tcg_temp_free_i64(val);
369
+ }
370
+ return true;
371
+}
372
+
373
+static bool trans_LASTA_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
374
+{
375
+ return do_last_general(s, a, false);
376
+}
377
+
378
+static bool trans_LASTB_r(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
379
+{
380
+ return do_last_general(s, a, true);
381
+}
382
+
383
/*
384
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
385
*/
386
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
387
index XXXXXXX..XXXXXXX 100644
388
--- a/target/arm/sve.decode
389
+++ b/target/arm/sve.decode
390
@@ -XXX,XX +XXX,XX @@ TRN2_z 00000101 .. 1 ..... 011 101 ..... ..... @rd_rn_rm
391
# Note esz >= 2
392
COMPACT 00000101 .. 100001 100 ... ..... ..... @rd_pg_rn
393
394
+# SVE conditionally broadcast element to vector
395
+CLASTA_z 00000101 .. 10100 0 100 ... ..... ..... @rdn_pg_rm
396
+CLASTB_z 00000101 .. 10100 1 100 ... ..... ..... @rdn_pg_rm
397
+
398
+# SVE conditionally copy element to SIMD&FP scalar
399
+CLASTA_v 00000101 .. 10101 0 100 ... ..... ..... @rd_pg_rn
400
+CLASTB_v 00000101 .. 10101 1 100 ... ..... ..... @rd_pg_rn
401
+
402
+# SVE conditionally copy element to general register
403
+CLASTA_r 00000101 .. 11000 0 101 ... ..... ..... @rd_pg_rn
404
+CLASTB_r 00000101 .. 11000 1 101 ... ..... ..... @rd_pg_rn
405
+
406
+# SVE copy element to SIMD&FP scalar register
407
+LASTA_v 00000101 .. 10001 0 100 ... ..... ..... @rd_pg_rn
408
+LASTB_v 00000101 .. 10001 1 100 ... ..... ..... @rd_pg_rn
409
+
410
+# SVE copy element to general register
411
+LASTA_r 00000101 .. 10000 0 101 ... ..... ..... @rd_pg_rn
412
+LASTB_r 00000101 .. 10000 1 101 ... ..... ..... @rd_pg_rn
413
+
414
### SVE Predicate Logical Operations Group
415
416
# SVE predicate logical operations
417
--
168
--
418
2.17.1
169
2.25.1
419
420
diff view generated by jsdifflib
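To make the FMOPA/FMOPS semantics of the helpers above concrete, here is a
minimal standalone C sketch (not QEMU code; the tile size, the names and the
plain `float` arithmetic are illustrative only -- the real helper does a fused
multiply-add per element with default-NaN semantics):

    #include <stdbool.h>
    #include <stdio.h>

    #define DIM 4   /* stand-in for SVL / element size */

    /* For each predicated (row, col), accumulate n[row] * m[col] into the
     * ZA tile; FMOPS is expressed as negating the row operand. */
    static void fmopa_ref(float za[DIM][DIM], const float *n, const float *m,
                          const bool *pn, const bool *pm, bool sub)
    {
        for (int row = 0; row < DIM; row++) {
            if (!pn[row]) {
                continue;               /* inactive rows leave ZA untouched */
            }
            float nr = sub ? -n[row] : n[row];
            for (int col = 0; col < DIM; col++) {
                if (pm[col]) {
                    za[row][col] += nr * m[col];
                }
            }
        }
    }

    int main(void)
    {
        float za[DIM][DIM] = { 0 };
        float n[DIM] = { 1, 2, 3, 4 }, m[DIM] = { 5, 6, 7, 8 };
        bool pn[DIM] = { true, true, false, true };
        bool pm[DIM] = { true, false, true, true };

        fmopa_ref(za, n, m, pn, pm, false);
        printf("za[0][0]=%g za[3][3]=%g\n", za[0][0], za[3][3]); /* 5 and 32 */
        return 0;
    }

Rows or columns with a false predicate bit are simply skipped, which is the
merging behaviour the nested predicate loops in the helper implement.
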
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sme.h    |  2 ++
 target/arm/sme.decode      |  2 ++
 target/arm/sme_helper.c    | 56 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-sme.c | 30 ++++++++++++++++++++
 4 files changed, 90 insertions(+)

diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sme.h
+++ b/target/arm/helper-sme.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_7(sme_fmopa_s, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_7(sme_fmopa_d, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_bfmopa, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme.decode
+++ b/target/arm/sme.decode
@@ -XXX,XX +XXX,XX @@ ADDVA_d 11000000 11 01000 1 ... ... ..... 00 ... @adda_64

 FMOPA_s         10000000 100 ..... ... ... ..... . 00 ..        @op_32
 FMOPA_d         10000000 110 ..... ... ... ..... . 0 ...        @op_64
+
+BFMOPA          10000001 100 ..... ... ... ..... . 00 ..        @op_32
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme_helper.c
+++ b/target/arm/sme_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sme_fmopa_d)(void *vza, void *vzn, void *vzm, void *vpn,
             }
         }
     }
 }
+
+/*
+ * Alter PAIR as needed for controlling predicates being false,
+ * and for NEG on an enabled row element.
+ */
+static inline uint32_t f16mop_adj_pair(uint32_t pair, uint32_t pg, uint32_t neg)
+{
+    /*
+     * The pseudocode uses a conditional negate after the conditional zero.
+     * It is simpler here to unconditionally negate before conditional zero.
+     */
+    pair ^= neg;
+    if (!(pg & 1)) {
+        pair &= 0xffff0000u;
+    }
+    if (!(pg & 4)) {
+        pair &= 0x0000ffffu;
+    }
+    return pair;
+}
+
+void HELPER(sme_bfmopa)(void *vza, void *vzn, void *vzm, void *vpn,
+                        void *vpm, uint32_t desc)
+{
+    intptr_t row, col, oprsz = simd_maxsz(desc);
+    uint32_t neg = simd_data(desc) * 0x80008000u;
+    uint16_t *pn = vpn, *pm = vpm;
+
+    for (row = 0; row < oprsz; ) {
+        uint16_t prow = pn[H2(row >> 4)];
+        do {
+            void *vza_row = vza + tile_vslice_offset(row);
+            uint32_t n = *(uint32_t *)(vzn + H1_4(row));
+
+            n = f16mop_adj_pair(n, prow, neg);
+
+            for (col = 0; col < oprsz; ) {
+                uint16_t pcol = pm[H2(col >> 4)];
+                do {
+                    if (prow & pcol & 0b0101) {
+                        uint32_t *a = vza_row + H1_4(col);
+                        uint32_t m = *(uint32_t *)(vzm + H1_4(col));
+
+                        m = f16mop_adj_pair(m, pcol, 0);
+                        *a = bfdotadd(*a, n, m);
+                    }
+                    col += 4;
+                    pcol >>= 4;
+                } while (col & 15);
+            }
+            row += 4;
+            prow >>= 4;
+        } while (row & 15);
+    }
+}
diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sme.c
+++ b/target/arm/translate-sme.c
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(ADDVA_s, aa64_sme, do_adda, a, MO_32, gen_helper_sme_addva_s)
 TRANS_FEAT(ADDHA_d, aa64_sme_i16i64, do_adda, a, MO_64, gen_helper_sme_addha_d)
 TRANS_FEAT(ADDVA_d, aa64_sme_i16i64, do_adda, a, MO_64, gen_helper_sme_addva_d)

+static bool do_outprod(DisasContext *s, arg_op *a, MemOp esz,
+                       gen_helper_gvec_5 *fn)
+{
+    int svl = streaming_vec_reg_size(s);
+    uint32_t desc = simd_desc(svl, svl, a->sub);
+    TCGv_ptr za, zn, zm, pn, pm;
+
+    if (!sme_smza_enabled_check(s)) {
+        return true;
+    }
+
+    /* Sum XZR+zad to find ZAd. */
+    za = get_tile_rowcol(s, esz, 31, a->zad, false);
+    zn = vec_full_reg_ptr(s, a->zn);
+    zm = vec_full_reg_ptr(s, a->zm);
+    pn = pred_full_reg_ptr(s, a->pn);
+    pm = pred_full_reg_ptr(s, a->pm);
+
+    fn(za, zn, zm, pn, pm, tcg_constant_i32(desc));
+
+    tcg_temp_free_ptr(za);
+    tcg_temp_free_ptr(zn);
+    tcg_temp_free_ptr(pn);
+    tcg_temp_free_ptr(pm);
+    return true;
+}
+
 static bool do_outprod_fpst(DisasContext *s, arg_op *a, MemOp esz,
                             gen_helper_gvec_5_ptr *fn)
 {
@@ -XXX,XX +XXX,XX @@ static bool do_outprod_fpst(DisasContext *s, arg_op *a, MemOp esz,

 TRANS_FEAT(FMOPA_s, aa64_sme, do_outprod_fpst, a, MO_32, gen_helper_sme_fmopa_s)
 TRANS_FEAT(FMOPA_d, aa64_sme_f64f64, do_outprod_fpst, a, MO_64, gen_helper_sme_fmopa_d)
+
+/* TODO: FEAT_EBF16 */
+TRANS_FEAT(BFMOPA, aa64_sme, do_outprod, a, MO_32, gen_helper_sme_bfmopa)
--
2.25.1

diff view generated by jsdifflib
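A standalone sketch of the `f16mop_adj_pair` trick in the patch above
(illustrative only, not QEMU code): each 32-bit word carries a pair of 16-bit
elements, bits 0 and 2 of the governing predicate nibble enable the low and
high halves, and the negation is applied before the conditional zeroing so a
disabled element can never come out negated:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t adj_pair(uint32_t pair, uint32_t pg, uint32_t neg)
    {
        pair ^= neg;                   /* unconditional negate first */
        if (!(pg & 1)) {
            pair &= 0xffff0000u;       /* low element inactive -> zero it */
        }
        if (!(pg & 4)) {
            pair &= 0x0000ffffu;       /* high element inactive -> zero it */
        }
        return pair;
    }

    int main(void)
    {
        /* Both elements active, both sign bits flipped. */
        printf("%08x\n", (unsigned)adj_pair(0x3f803f80u, 0x5, 0x80008000u));
        /* Only the low element active, no negation: prints 00003f80. */
        printf("%08x\n", (unsigned)adj_pair(0x3f803f80u, 0x1, 0));
        return 0;
    }

The first call prints bf80bf80, i.e. both halves negated, matching the
"negate before conditional zero" comment in the helper.
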
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sme.h    |  2 ++
 target/arm/sme.decode      |  1 +
 target/arm/sme_helper.c    | 74 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-sme.c |  1 +
 4 files changed, 78 insertions(+)

diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sme.h
+++ b/target/arm/helper-sme.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sme_addva_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sme_addha_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sme_addva_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_7(sme_fmopa_h, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_7(sme_fmopa_s, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_7(sme_fmopa_d, TCG_CALL_NO_RWG,
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme.decode
+++ b/target/arm/sme.decode
@@ -XXX,XX +XXX,XX @@ FMOPA_s 10000000 100 ..... ... ... ..... . 00 .. @op_32
 FMOPA_d         10000000 110 ..... ... ... ..... . 0 ...        @op_64

 BFMOPA          10000001 100 ..... ... ... ..... . 00 ..        @op_32
+FMOPA_h         10000001 101 ..... ... ... ..... . 00 ..        @op_32
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme_helper.c
+++ b/target/arm/sme_helper.c
@@ -XXX,XX +XXX,XX @@ static inline uint32_t f16mop_adj_pair(uint32_t pair, uint32_t pg, uint32_t neg)
     return pair;
 }

+static float32 f16_dotadd(float32 sum, uint32_t e1, uint32_t e2,
+                          float_status *s_std, float_status *s_odd)
+{
+    float64 e1r = float16_to_float64(e1 & 0xffff, true, s_std);
+    float64 e1c = float16_to_float64(e1 >> 16, true, s_std);
+    float64 e2r = float16_to_float64(e2 & 0xffff, true, s_std);
+    float64 e2c = float16_to_float64(e2 >> 16, true, s_std);
+    float64 t64;
+    float32 t32;
+
+    /*
+     * The ARM pseudocode function FPDot performs both multiplies
+     * and the add with a single rounding operation.  Emulate this
+     * by performing the first multiply in round-to-odd, then doing
+     * the second multiply as fused multiply-add, and rounding to
+     * float32 all in one step.
+     */
+    t64 = float64_mul(e1r, e2r, s_odd);
+    t64 = float64r32_muladd(e1c, e2c, t64, 0, s_std);
+
+    /* This conversion is exact, because we've already rounded. */
+    t32 = float64_to_float32(t64, s_std);
+
+    /* The final accumulation step is not fused. */
+    return float32_add(sum, t32, s_std);
+}
+
+void HELPER(sme_fmopa_h)(void *vza, void *vzn, void *vzm, void *vpn,
+                         void *vpm, void *vst, uint32_t desc)
+{
+    intptr_t row, col, oprsz = simd_maxsz(desc);
+    uint32_t neg = simd_data(desc) * 0x80008000u;
+    uint16_t *pn = vpn, *pm = vpm;
+    float_status fpst_odd, fpst_std;
+
+    /*
+     * Make a copy of float_status because this operation does not
+     * update the cumulative fp exception status.  It also produces
+     * default nans.  Make a second copy with round-to-odd -- see above.
+     */
+    fpst_std = *(float_status *)vst;
+    set_default_nan_mode(true, &fpst_std);
+    fpst_odd = fpst_std;
+    set_float_rounding_mode(float_round_to_odd, &fpst_odd);
+
+    for (row = 0; row < oprsz; ) {
+        uint16_t prow = pn[H2(row >> 4)];
+        do {
+            void *vza_row = vza + tile_vslice_offset(row);
+            uint32_t n = *(uint32_t *)(vzn + H1_4(row));
+
+            n = f16mop_adj_pair(n, prow, neg);
+
+            for (col = 0; col < oprsz; ) {
+                uint16_t pcol = pm[H2(col >> 4)];
+                do {
+                    if (prow & pcol & 0b0101) {
+                        uint32_t *a = vza_row + H1_4(col);
+                        uint32_t m = *(uint32_t *)(vzm + H1_4(col));
+
+                        m = f16mop_adj_pair(m, pcol, 0);
+                        *a = f16_dotadd(*a, n, m, &fpst_std, &fpst_odd);
+                    }
+                    col += 4;
+                    pcol >>= 4;
+                } while (col & 15);
+            }
+            row += 4;
+            prow >>= 4;
+        } while (row & 15);
+    }
+}
+
 void HELPER(sme_bfmopa)(void *vza, void *vzn, void *vzm, void *vpn,
                         void *vpm, uint32_t desc)
 {
diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sme.c
+++ b/target/arm/translate-sme.c
@@ -XXX,XX +XXX,XX @@ static bool do_outprod_fpst(DisasContext *s, arg_op *a, MemOp esz,
     return true;
 }

+TRANS_FEAT(FMOPA_h, aa64_sme, do_outprod_fpst, a, MO_32, gen_helper_sme_fmopa_h)
 TRANS_FEAT(FMOPA_s, aa64_sme, do_outprod_fpst, a, MO_32, gen_helper_sme_fmopa_s)
 TRANS_FEAT(FMOPA_d, aa64_sme_f64f64, do_outprod_fpst, a, MO_64, gen_helper_sme_fmopa_d)
--
2.25.1

diff view generated by jsdifflib
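The round-to-odd comment in `f16_dotadd` above is about avoiding double
rounding. A minimal standalone demonstration (not QEMU code) of how rounding
the same value in two steps can differ from rounding it once:

    #include <stdio.h>

    int main(void)
    {
        /* x = 1 + 2^-24 + 2^-52, exactly representable as a double.
         * It sits just above the halfway point between 1.0f and the
         * next float up. */
        double x = 1.0 + 0x1p-24 + 0x1p-52;

        /* Single rounding to float: the sliver breaks the tie upward. */
        float once = (float)x;

        /* Two-step rounding: model an intermediate result that has
         * already lost the low sliver, then round to float.  The value
         * is now an exact tie and round-to-even rounds it down. */
        double intermediate = x - 0x1p-52;
        float twice = (float)intermediate;

        printf("once  = %.9g\n", (double)once);   /* 1.00000012 */
        printf("twice = %.9g\n", (double)twice);  /* 1 */
        return 0;
    }

Round-to-odd in the first multiply keeps a sticky low bit alive in the
intermediate float64, so the final single rounding to float32 still sees the
tie-breaking information -- which is how the helper matches FPDot's
single-rounding behaviour.
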
From: Richard Henderson <richard.henderson@linaro.org>

This is SMOPA, SUMOPA, USMOPA_s, UMOPA, for both Int8 and Int16.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-28-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sme.h    | 16 ++++++++
 target/arm/sme.decode      | 10 +++++
 target/arm/sme_helper.c    | 82 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-sme.c | 10 +++++
 4 files changed, 118 insertions(+)

diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sme.h
+++ b/target/arm/helper-sme.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_7(sme_fmopa_d, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_6(sme_bfmopa, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_smopa_s, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_umopa_s, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_sumopa_s, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_usmopa_s, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_smopa_d, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_umopa_d, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_sumopa_d, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_usmopa_d, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme.decode
+++ b/target/arm/sme.decode
@@ -XXX,XX +XXX,XX @@ FMOPA_d 10000000 110 ..... ... ... ..... . 0 ... @op_64

 BFMOPA          10000001 100 ..... ... ... ..... . 00 ..        @op_32
 FMOPA_h         10000001 101 ..... ... ... ..... . 00 ..        @op_32
+
+SMOPA_s         1010000 0 10 0 ..... ... ... ..... . 00 ..      @op_32
+SUMOPA_s        1010000 0 10 1 ..... ... ... ..... . 00 ..      @op_32
+USMOPA_s        1010000 1 10 0 ..... ... ... ..... . 00 ..      @op_32
+UMOPA_s         1010000 1 10 1 ..... ... ... ..... . 00 ..      @op_32
+
+SMOPA_d         1010000 0 11 0 ..... ... ... ..... . 0 ...      @op_64
+SUMOPA_d        1010000 0 11 1 ..... ... ... ..... . 0 ...      @op_64
+USMOPA_d        1010000 1 11 0 ..... ... ... ..... . 0 ...      @op_64
+UMOPA_d         1010000 1 11 1 ..... ... ... ..... . 0 ...      @op_64
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sme_helper.c
+++ b/target/arm/sme_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sme_bfmopa)(void *vza, void *vzn, void *vzm, void *vpn,
         } while (row & 15);
     }
 }
+
+typedef uint64_t IMOPFn(uint64_t, uint64_t, uint64_t, uint8_t, bool);
+
+static inline void do_imopa(uint64_t *za, uint64_t *zn, uint64_t *zm,
+                            uint8_t *pn, uint8_t *pm,
+                            uint32_t desc, IMOPFn *fn)
+{
+    intptr_t row, col, oprsz = simd_oprsz(desc) / 8;
+    bool neg = simd_data(desc);
+
+    for (row = 0; row < oprsz; ++row) {
+        uint8_t pa = pn[H1(row)];
+        uint64_t *za_row = &za[tile_vslice_index(row)];
+        uint64_t n = zn[row];
+
+        for (col = 0; col < oprsz; ++col) {
+            uint8_t pb = pm[H1(col)];
+            uint64_t *a = &za_row[col];
+
+            *a = fn(n, zm[col], *a, pa & pb, neg);
+        }
+    }
+}
+
+#define DEF_IMOP_32(NAME, NTYPE, MTYPE) \
+static uint64_t NAME(uint64_t n, uint64_t m, uint64_t a, uint8_t p, bool neg) \
+{ \
+    uint32_t sum0 = 0, sum1 = 0; \
+    /* Apply P to N as a mask, making the inactive elements 0. */ \
+    n &= expand_pred_b(p); \
+    sum0 += (NTYPE)(n >> 0) * (MTYPE)(m >> 0); \
+    sum0 += (NTYPE)(n >> 8) * (MTYPE)(m >> 8); \
+    sum0 += (NTYPE)(n >> 16) * (MTYPE)(m >> 16); \
+    sum0 += (NTYPE)(n >> 24) * (MTYPE)(m >> 24); \
+    sum1 += (NTYPE)(n >> 32) * (MTYPE)(m >> 32); \
+    sum1 += (NTYPE)(n >> 40) * (MTYPE)(m >> 40); \
+    sum1 += (NTYPE)(n >> 48) * (MTYPE)(m >> 48); \
+    sum1 += (NTYPE)(n >> 56) * (MTYPE)(m >> 56); \
+    if (neg) { \
+        sum0 = (uint32_t)a - sum0, sum1 = (uint32_t)(a >> 32) - sum1; \
+    } else { \
+        sum0 = (uint32_t)a + sum0, sum1 = (uint32_t)(a >> 32) + sum1; \
+    } \
+    return ((uint64_t)sum1 << 32) | sum0; \
+}
+
+#define DEF_IMOP_64(NAME, NTYPE, MTYPE) \
+static uint64_t NAME(uint64_t n, uint64_t m, uint64_t a, uint8_t p, bool neg) \
+{ \
+    uint64_t sum = 0; \
+    /* Apply P to N as a mask, making the inactive elements 0. */ \
+    n &= expand_pred_h(p); \
+    sum += (NTYPE)(n >> 0) * (MTYPE)(m >> 0); \
+    sum += (NTYPE)(n >> 16) * (MTYPE)(m >> 16); \
+    sum += (NTYPE)(n >> 32) * (MTYPE)(m >> 32); \
+    sum += (NTYPE)(n >> 48) * (MTYPE)(m >> 48); \
+    return neg ? a - sum : a + sum; \
+}
+
+DEF_IMOP_32(smopa_s, int8_t, int8_t)
+DEF_IMOP_32(umopa_s, uint8_t, uint8_t)
+DEF_IMOP_32(sumopa_s, int8_t, uint8_t)
+DEF_IMOP_32(usmopa_s, uint8_t, int8_t)
+
+DEF_IMOP_64(smopa_d, int16_t, int16_t)
+DEF_IMOP_64(umopa_d, uint16_t, uint16_t)
+DEF_IMOP_64(sumopa_d, int16_t, uint16_t)
+DEF_IMOP_64(usmopa_d, uint16_t, int16_t)
+
+#define DEF_IMOPH(NAME) \
+    void HELPER(sme_##NAME)(void *vza, void *vzn, void *vzm, void *vpn, \
+                            void *vpm, uint32_t desc) \
+    { do_imopa(vza, vzn, vzm, vpn, vpm, desc, NAME); }
+
+DEF_IMOPH(smopa_s)
+DEF_IMOPH(umopa_s)
+DEF_IMOPH(sumopa_s)
+DEF_IMOPH(usmopa_s)
+DEF_IMOPH(smopa_d)
+DEF_IMOPH(umopa_d)
+DEF_IMOPH(sumopa_d)
+DEF_IMOPH(usmopa_d)
diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sme.c
+++ b/target/arm/translate-sme.c
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(FMOPA_d, aa64_sme_f64f64, do_outprod_fpst, a, MO_64, gen_helper_sme_f

 /* TODO: FEAT_EBF16 */
 TRANS_FEAT(BFMOPA, aa64_sme, do_outprod, a, MO_32, gen_helper_sme_bfmopa)
+
+TRANS_FEAT(SMOPA_s, aa64_sme, do_outprod, a, MO_32, gen_helper_sme_smopa_s)
+TRANS_FEAT(UMOPA_s, aa64_sme, do_outprod, a, MO_32, gen_helper_sme_umopa_s)
+TRANS_FEAT(SUMOPA_s, aa64_sme, do_outprod, a, MO_32, gen_helper_sme_sumopa_s)
+TRANS_FEAT(USMOPA_s, aa64_sme, do_outprod, a, MO_32, gen_helper_sme_usmopa_s)
+
+TRANS_FEAT(SMOPA_d, aa64_sme_i16i64, do_outprod, a, MO_64, gen_helper_sme_smopa_d)
+TRANS_FEAT(UMOPA_d, aa64_sme_i16i64, do_outprod, a, MO_64, gen_helper_sme_umopa_d)
+TRANS_FEAT(SUMOPA_d, aa64_sme_i16i64, do_outprod, a, MO_64, gen_helper_sme_sumopa_d)
+TRANS_FEAT(USMOPA_d, aa64_sme_i16i64, do_outprod, a, MO_64, gen_helper_sme_usmopa_d)
--
2.25.1

diff view generated by jsdifflib
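For reference, here is what one `DEF_IMOP_32` instance in the patch above
computes for a single 64-bit ZA lane, as a standalone sketch (not QEMU code;
the predicate masking via `expand_pred_b` is omitted so the packed-byte
arithmetic stands out):

    #include <stdint.h>
    #include <stdio.h>

    /* Bytes 0-3 of n and m feed the low 32-bit accumulator, bytes 4-7
     * the high one -- two independent int8 dot products per lane. */
    static uint64_t smopa_lane(uint64_t n, uint64_t m, uint64_t a)
    {
        uint32_t sum0 = 0, sum1 = 0;
        for (int i = 0; i < 4; i++) {
            sum0 += (int8_t)(n >> (8 * i)) * (int8_t)(m >> (8 * i));
            sum1 += (int8_t)(n >> (8 * i + 32)) * (int8_t)(m >> (8 * i + 32));
        }
        sum0 += (uint32_t)a;
        sum1 += (uint32_t)(a >> 32);
        return ((uint64_t)sum1 << 32) | sum0;
    }

    int main(void)
    {
        /* n = {1,2,3,4, 1,1,1,1}, m = {5,6,7,8, 2,2,2,2} packed as bytes. */
        uint64_t n = 0x0101010104030201ull;
        uint64_t m = 0x0202020208070605ull;
        uint64_t r = smopa_lane(n, m, 0);
        printf("low=%u high=%u\n", (unsigned)r, (unsigned)(r >> 32));
        return 0;
    }

This prints `low=70 high=8` (1*5+2*6+3*7+4*8 and 1*2 four times), matching
what the macro's unrolled sum0/sum1 chains produce.
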
From: Richard Henderson <richard.henderson@linaro.org>

This is an SVE instruction that operates using the SVE vector
length but is present only if SME is implemented.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve.decode      | 20 +++++++++++++
 target/arm/translate-sve.c | 57 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 77 insertions(+)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ BFMLALT_zzxw 01100100 11 1 ..... 0100.1 ..... ..... @rrxr_3a esz=2

 ### SVE2 floating-point bfloat16 dot-product (indexed)
 BFDOT_zzxz      01100100 01 1 ..... 010000 ..... .....         @rrxr_2 esz=2
+
+### SVE broadcast predicate element
+
+&psel           esz pd pn pm rv imm
+%psel_rv        16:2 !function=plus_12
+%psel_imm_b     22:2 19:2
+%psel_imm_h     22:2 20:1
+%psel_imm_s     22:2
+%psel_imm_d     23:1
+@psel           ........ .. . ... .. .. pn:4 . pm:4 . pd:4 \
+                &psel rv=%psel_rv
+
+PSEL            00100101 .. 1 ..1 .. 01 .... 0 .... 0 ....  \
+                @psel esz=0 imm=%psel_imm_b
+PSEL            00100101 .. 1 .10 .. 01 .... 0 .... 0 ....  \
+                @psel esz=1 imm=%psel_imm_h
+PSEL            00100101 .. 1 100 .. 01 .... 0 .... 0 ....  \
+                @psel esz=2 imm=%psel_imm_s
+PSEL            00100101 .1 1 000 .. 01 .... 0 .... 0 ....  \
+                @psel esz=3 imm=%psel_imm_d
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool do_BFMLAL_zzxw(DisasContext *s, arg_rrxr_esz *a, bool sel)

 TRANS_FEAT(BFMLALB_zzxw, aa64_sve_bf16, do_BFMLAL_zzxw, a, false)
 TRANS_FEAT(BFMLALT_zzxw, aa64_sve_bf16, do_BFMLAL_zzxw, a, true)
+
+static bool trans_PSEL(DisasContext *s, arg_psel *a)
+{
+    int vl = vec_full_reg_size(s);
+    int pl = pred_gvec_reg_size(s);
+    int elements = vl >> a->esz;
+    TCGv_i64 tmp, didx, dbit;
+    TCGv_ptr ptr;
+
+    if (!dc_isar_feature(aa64_sme, s)) {
+        return false;
+    }
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    tmp = tcg_temp_new_i64();
+    dbit = tcg_temp_new_i64();
+    didx = tcg_temp_new_i64();
+    ptr = tcg_temp_new_ptr();
+
+    /* Compute the predicate element. */
+    tcg_gen_addi_i64(tmp, cpu_reg(s, a->rv), a->imm);
+    if (is_power_of_2(elements)) {
+        tcg_gen_andi_i64(tmp, tmp, elements - 1);
+    } else {
+        tcg_gen_remu_i64(tmp, tmp, tcg_constant_i64(elements));
+    }
+
+    /* Extract the predicate byte and bit indices. */
+    tcg_gen_shli_i64(tmp, tmp, a->esz);
+    tcg_gen_andi_i64(dbit, tmp, 7);
+    tcg_gen_shri_i64(didx, tmp, 3);
+    if (HOST_BIG_ENDIAN) {
+        tcg_gen_xori_i64(didx, didx, 7);
+    }
+
+    /* Load the predicate word. */
+    tcg_gen_trunc_i64_ptr(ptr, didx);
+    tcg_gen_add_ptr(ptr, ptr, cpu_env);
+    tcg_gen_ld8u_i64(tmp, ptr, pred_full_reg_offset(s, a->pm));
+
+    /* Extract the predicate bit and replicate to MO_64. */
+    tcg_gen_shr_i64(tmp, tmp, dbit);
+    tcg_gen_andi_i64(tmp, tmp, 1);
+    tcg_gen_neg_i64(tmp, tmp);
+
+    /* Apply to either copy the source, or write zeros. */
+    tcg_gen_gvec_ands(MO_64, pred_full_reg_offset(s, a->pd),
+                      pred_full_reg_offset(s, a->pn), tmp, pl, pl);
+
+    tcg_temp_free_i64(tmp);
+    tcg_temp_free_i64(dbit);
+    tcg_temp_free_i64(didx);
+    tcg_temp_free_ptr(ptr);
+    return true;
+}
--
2.25.1

diff view generated by jsdifflib
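The byte/bit arithmetic in `trans_PSEL` above reduces to a few shifts; as a
standalone sketch with hypothetical values (not QEMU code):

    #include <stdio.h>

    int main(void)
    {
        int esz = 2;        /* 32-bit elements: one predicate bit per 4 bits */
        int element = 5;    /* hypothetical selected element index */

        int pos  = element << esz;   /* bit position within the predicate */
        int byte = pos >> 3;         /* didx in the code above */
        int bit  = pos & 7;          /* dbit in the code above */

        printf("element %d -> byte %d, bit %d\n", element, byte, bit);
        return 0;
    }

Element 5 lands at bit position 20, i.e. byte 2, bit 4; the translator then
loads that predicate byte, shifts the selected bit down, and negates it to
produce an all-ones or all-zeros mask for the final vector AND.
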
From: Richard Henderson <richard.henderson@linaro.org>

This is an SVE instruction that operates using the SVE vector
length but is present only if SME is implemented.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-30-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h    |  2 ++
 target/arm/sve.decode      |  1 +
 target/arm/sve_helper.c    | 16 ++++++++++++++++
 target/arm/translate-sve.c |  2 ++
 4 files changed, 21 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_revh_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)

 DEF_HELPER_FLAGS_4(sve_revw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(sme_revd_q, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(sve_rbit_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_rbit_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_rbit_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ REVB 00000101 .. 1001 00 100 ... ..... ..... @rd_pg_rn
 REVH            00000101 .. 1001 01 100 ... ..... .....        @rd_pg_rn
 REVW            00000101 .. 1001 10 100 ... ..... .....        @rd_pg_rn
 RBIT            00000101 .. 1001 11 100 ... ..... .....        @rd_pg_rn
+REVD            00000101 00 1011 10 100 ... ..... .....        @rd_pg_rn_e0

 # SVE vector splice (predicated, destructive)
 SPLICE          00000101 .. 101 100 100 ... ..... .....        @rdn_pg_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_ZPZ_D(sve_revh_d, uint64_t, hswap64)

 DO_ZPZ_D(sve_revw_d, uint64_t, wswap64)

+void HELPER(sme_revd_q)(void *vd, void *vn, void *vg, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+    uint64_t *d = vd, *n = vn;
+    uint8_t *pg = vg;
+
+    for (i = 0; i < opr_sz; i += 2) {
+        if (pg[H1(i)] & 1) {
+            uint64_t n0 = n[i + 0];
+            uint64_t n1 = n[i + 1];
+            d[i + 0] = n1;
+            d[i + 1] = n0;
+        }
+    }
+}
+
 DO_ZPZ(sve_rbit_b, uint8_t, H1, revbit8)
 DO_ZPZ(sve_rbit_h, uint16_t, H1_2, revbit16)
 DO_ZPZ(sve_rbit_s, uint32_t, H1_4, revbit32)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(REVH, aa64_sve, gen_gvec_ool_arg_zpz, revh_fns[a->esz], a, 0)
 TRANS_FEAT(REVW, aa64_sve, gen_gvec_ool_arg_zpz,
            a->esz == 3 ? gen_helper_sve_revw_d : NULL, a, 0)

+TRANS_FEAT(REVD, aa64_sme, gen_gvec_ool_arg_zpz, gen_helper_sme_revd_q, a, 0)
+
 TRANS_FEAT(SPLICE, aa64_sve, gen_gvec_ool_arg_zpzz,
            gen_helper_sve_splice, a, a->esz)
--
2.25.1

diff view generated by jsdifflib
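The REVD helper above is just a predicated swap of the two 64-bit halves
within each 128-bit element; a standalone equivalent (not QEMU code, sizes
and predicate layout illustrative):

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    /* Swap the 64-bit halves of each 128-bit element whose governing
     * predicate byte has bit 0 set; inactive elements are unchanged. */
    static void revd(uint64_t *d, const uint64_t *n, const uint8_t *pg,
                     int words)
    {
        for (int i = 0; i < words; i += 2) {
            if (pg[i] & 1) {
                uint64_t n0 = n[i], n1 = n[i + 1];
                d[i] = n1;
                d[i + 1] = n0;
            }
        }
    }

    int main(void)
    {
        uint64_t z[4] = { 1, 2, 3, 4 };
        uint8_t pg[4] = { 1, 0, 0, 0 };  /* only the first element active */

        revd(z, z, pg, 4);
        printf("%" PRIu64 " %" PRIu64 " %" PRIu64 " %" PRIu64 "\n",
               z[0], z[1], z[2], z[3]);  /* 2 1 3 4 */
        return 0;
    }

Saving both halves before writing makes the in-place case (`d == n`, as when
Zd and Zn are the same register) safe, just as in the helper.
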
From: Richard Henderson <richard.henderson@linaro.org>

This is an SVE instruction that operates using the SVE vector
length, but is present only if SME is implemented.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
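As an aside (not part of the patch): a minimal standalone sketch of
the clamp semantics the new helpers implement, where each result
element is the accumulator clamped to the [n, m] range, i.e.
MIN(MAX(a, n), m). Function and variable names here are illustrative
only, not QEMU's.

    #include <stdint.h>
    #include <stdio.h>

    /* Per-element signed clamp, matching the DO_CLAMP expansion
     * below: dd = MIN(MAX(aa, nn), mm). */
    static void sclamp_s32(int32_t *d, const int32_t *n,
                           const int32_t *m, const int32_t *a, int len)
    {
        for (int i = 0; i < len; i++) {
            int32_t lo = n[i], hi = m[i], v = a[i];
            d[i] = v < lo ? lo : (v > hi ? hi : v);
        }
    }

    int main(void)
    {
        int32_t n[2] = { -10, 0 }, m[2] = { 10, 5 };
        int32_t a[2] = { -50, 3 }, d[2];
        sclamp_s32(d, n, m, a, 2);
        printf("%d %d\n", d[0], d[1]);   /* prints: -10 3 */
        return 0;
    }
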
 target/arm/helper.h        |  18 ++++++++
 target/arm/sve.decode      |   5 ++
 target/arm/translate-sve.c | 102 +++++++++++++++++++++++++++++++++++++
 target/arm/vec_helper.c    |  24 ++++++++
 4 files changed, 149 insertions(+)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(gvec_bfmlal, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_6(gvec_bfmlal_idx, TCG_CALL_NO_RWG,
                    void, ptr, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(gvec_sclamp_b, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_sclamp_h, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_sclamp_s, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_sclamp_d, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_uclamp_b, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_uclamp_h, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_uclamp_s, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_uclamp_d, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, ptr, i32)
+
 #ifdef TARGET_AARCH64
 #include "helper-a64.h"
 #include "helper-sve.h"
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -XXX,XX +XXX,XX @@ PSEL 00100101 .. 1 100 .. 01 .... 0 .... 0 .... \
         @psel esz=2 imm=%psel_imm_s
 PSEL    00100101 .1 1 000 .. 01 .... 0 .... 0 ....  \
         @psel esz=3 imm=%psel_imm_d
+
+### SVE clamp
+
+SCLAMP          01000100 .. 0 ..... 110000 ..... .....          @rda_rn_rm
+UCLAMP          01000100 .. 0 ..... 110001 ..... .....          @rda_rn_rm
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_PSEL(DisasContext *s, arg_psel *a)
     tcg_temp_free_ptr(ptr);
     return true;
 }
+
+static void gen_sclamp_i32(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, TCGv_i32 a)
+{
+    tcg_gen_smax_i32(d, a, n);
+    tcg_gen_smin_i32(d, d, m);
+}
+
+static void gen_sclamp_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 a)
+{
+    tcg_gen_smax_i64(d, a, n);
+    tcg_gen_smin_i64(d, d, m);
+}
+
+static void gen_sclamp_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
+                           TCGv_vec m, TCGv_vec a)
+{
+    tcg_gen_smax_vec(vece, d, a, n);
+    tcg_gen_smin_vec(vece, d, d, m);
+}
+
+static void gen_sclamp(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
+                       uint32_t a, uint32_t oprsz, uint32_t maxsz)
+{
+    static const TCGOpcode vecop[] = {
+        INDEX_op_smin_vec, INDEX_op_smax_vec, 0
+    };
+    static const GVecGen4 ops[4] = {
+        { .fniv = gen_sclamp_vec,
+          .fno = gen_helper_gvec_sclamp_b,
+          .opt_opc = vecop,
+          .vece = MO_8 },
+        { .fniv = gen_sclamp_vec,
+          .fno = gen_helper_gvec_sclamp_h,
+          .opt_opc = vecop,
+          .vece = MO_16 },
+        { .fni4 = gen_sclamp_i32,
+          .fniv = gen_sclamp_vec,
+          .fno = gen_helper_gvec_sclamp_s,
+          .opt_opc = vecop,
+          .vece = MO_32 },
+        { .fni8 = gen_sclamp_i64,
+          .fniv = gen_sclamp_vec,
+          .fno = gen_helper_gvec_sclamp_d,
+          .opt_opc = vecop,
+          .vece = MO_64,
+          .prefer_i64 = TCG_TARGET_REG_BITS == 64 }
+    };
+    tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &ops[vece]);
+}
+
+TRANS_FEAT(SCLAMP, aa64_sme, gen_gvec_fn_arg_zzzz, gen_sclamp, a)
+
+static void gen_uclamp_i32(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, TCGv_i32 a)
+{
+    tcg_gen_umax_i32(d, a, n);
+    tcg_gen_umin_i32(d, d, m);
+}
+
+static void gen_uclamp_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 a)
+{
+    tcg_gen_umax_i64(d, a, n);
+    tcg_gen_umin_i64(d, d, m);
+}
+
+static void gen_uclamp_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
+                           TCGv_vec m, TCGv_vec a)
+{
+    tcg_gen_umax_vec(vece, d, a, n);
+    tcg_gen_umin_vec(vece, d, d, m);
+}
+
+static void gen_uclamp(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
+                       uint32_t a, uint32_t oprsz, uint32_t maxsz)
+{
+    static const TCGOpcode vecop[] = {
+        INDEX_op_umin_vec, INDEX_op_umax_vec, 0
+    };
+    static const GVecGen4 ops[4] = {
+        { .fniv = gen_uclamp_vec,
+          .fno = gen_helper_gvec_uclamp_b,
+          .opt_opc = vecop,
+          .vece = MO_8 },
+        { .fniv = gen_uclamp_vec,
+          .fno = gen_helper_gvec_uclamp_h,
+          .opt_opc = vecop,
+          .vece = MO_16 },
+        { .fni4 = gen_uclamp_i32,
+          .fniv = gen_uclamp_vec,
+          .fno = gen_helper_gvec_uclamp_s,
+          .opt_opc = vecop,
+          .vece = MO_32 },
+        { .fni8 = gen_uclamp_i64,
+          .fniv = gen_uclamp_vec,
+          .fno = gen_helper_gvec_uclamp_d,
+          .opt_opc = vecop,
+          .vece = MO_64,
+          .prefer_i64 = TCG_TARGET_REG_BITS == 64 }
+    };
+    tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &ops[vece]);
+}
+
+TRANS_FEAT(UCLAMP, aa64_sme, gen_gvec_fn_arg_zzzz, gen_uclamp, a)
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_bfmlal_idx)(void *vd, void *vn, void *vm,
     }
     clear_tail(d, opr_sz, simd_maxsz(desc));
 }
+
+#define DO_CLAMP(NAME, TYPE) \
+void HELPER(NAME)(void *d, void *n, void *m, void *a, uint32_t desc) \
+{                                                                    \
+    intptr_t i, opr_sz = simd_oprsz(desc);                           \
+    for (i = 0; i < opr_sz; i += sizeof(TYPE)) {                     \
+        TYPE aa = *(TYPE *)(a + i);                                  \
+        TYPE nn = *(TYPE *)(n + i);                                  \
+        TYPE mm = *(TYPE *)(m + i);                                  \
+        TYPE dd = MIN(MAX(aa, nn), mm);                              \
+        *(TYPE *)(d + i) = dd;                                       \
+    }                                                                \
+    clear_tail(d, opr_sz, simd_maxsz(desc));                         \
+}
+
+DO_CLAMP(gvec_sclamp_b, int8_t)
+DO_CLAMP(gvec_sclamp_h, int16_t)
+DO_CLAMP(gvec_sclamp_s, int32_t)
+DO_CLAMP(gvec_sclamp_d, int64_t)
+
+DO_CLAMP(gvec_uclamp_b, uint8_t)
+DO_CLAMP(gvec_uclamp_h, uint16_t)
+DO_CLAMP(gvec_uclamp_s, uint32_t)
+DO_CLAMP(gvec_uclamp_d, uint64_t)
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

We can handle both exception entry and exception return by
hooking into aarch64_sve_change_el.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-32-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
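As an aside (not part of the patch): a self-contained sketch of the
decision the hook makes. Whenever an exception boundary switches
between AArch64 and AArch32 while PSTATE.SM is set, the streaming
SVE state must be reset; the struct and reset action below are
stand-ins for illustration, not QEMU's actual types.

    #include <stdbool.h>
    #include <string.h>

    /* Illustrative stand-in for the vector state involved. */
    struct sve_state { unsigned char zregs[32][256]; bool sm; };

    /* Reset as ResetSVEState would: clear the vector registers. */
    static void reset_sve_state(struct sve_state *s)
    {
        memset(s->zregs, 0, sizeof(s->zregs));
    }

    /* Called on exception entry/return: if the transition crosses
     * AArch32<->AArch64 while streaming mode is active, reset. */
    static void change_el(struct sve_state *s, bool old_a64, bool new_a64)
    {
        if (old_a64 != new_a64 && s->sm) {
            reset_sve_state(s);
        }
    }

    int main(void)
    {
        struct sve_state s = { .sm = true };
        change_el(&s, false, true);  /* AArch32 -> AArch64: resets */
        return 0;
    }
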
 target/arm/helper.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_change_el(CPUARMState *env, int old_el,
         return;
     }
 
+    old_a64 = old_el ? arm_el_is_aa64(env, old_el) : el0_a64;
+    new_a64 = new_el ? arm_el_is_aa64(env, new_el) : el0_a64;
+
+    /*
+     * Both AArch64.TakeException and AArch64.ExceptionReturn
+     * invoke ResetSVEState when taking an exception from, or
+     * returning to, AArch32 state when PSTATE.SM is enabled.
+     */
+    if (old_a64 != new_a64 && FIELD_EX64(env->svcr, SVCR, SM)) {
+        arm_reset_sve_state(env);
+        return;
+    }
+
     /*
      * DDI0584A.d sec 3.2: "If SVE instructions are disabled or trapped
      * at ELx, or not available because the EL is in AArch32 state, then
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_change_el(CPUARMState *env, int old_el,
      * we already have the correct register contents when encountering the
      * vq0->vq0 transition between EL0->EL1.
      */
-    old_a64 = old_el ? arm_el_is_aa64(env, old_el) : el0_a64;
     old_len = (old_a64 && !sve_exception_el(env, old_el)
                ? sve_vqm1_for_el(env, old_el) : 0);
-    new_a64 = new_el ? arm_el_is_aa64(env, new_el) : el0_a64;
     new_len = (new_a64 && !sve_exception_el(env, new_el)
                ? sve_vqm1_for_el(env, new_el) : 0);
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Note that SME remains effectively disabled for user-only,
because we do not yet set CPACR_EL1.SMEN. This needs to
wait until the kernel ABI is implemented.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-33-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
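As an aside (not part of the patch): the cpu64.c hunk below builds up
ID register values field by field. A minimal sketch of what such a
field deposit does; the bit positions used here are made up for
illustration and are not the real ID_AA64SMFR0 layout.

    #include <stdint.h>
    #include <stdio.h>

    /* Deposit 'val' into bits [pos, pos+len) of 'reg', as the
     * FIELD_DP64() uses in the hunk do for named fields. */
    static uint64_t field_dp64(uint64_t reg, unsigned pos, unsigned len,
                               uint64_t val)
    {
        uint64_t mask = ((UINT64_C(1) << len) - 1) << pos;
        return (reg & ~mask) | ((val << pos) & mask);
    }

    int main(void)
    {
        uint64_t t = 0;
        t = field_dp64(t, 32, 1, 1);   /* hypothetical single-bit field */
        t = field_dp64(t, 52, 4, 0xf); /* hypothetical 4-bit field */
        printf("0x%016llx\n", (unsigned long long)t);
        return 0;
    }
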
 docs/system/arm/emulation.rst |  4 ++++
 target/arm/cpu64.c            | 11 +++++++++++
 2 files changed, 15 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_SHA512 (Advanced SIMD SHA512 instructions)
 - FEAT_SM3 (Advanced SIMD SM3 instructions)
 - FEAT_SM4 (Advanced SIMD SM4 instructions)
+- FEAT_SME (Scalable Matrix Extension)
+- FEAT_SME_FA64 (Full A64 instruction set in Streaming SVE mode)
+- FEAT_SME_F64F64 (Double-precision floating-point outer product instructions)
+- FEAT_SME_I16I64 (16-bit to 64-bit integer widening outer product instructions)
 - FEAT_SPECRES (Speculation restriction instructions)
 - FEAT_SSBS (Speculative Store Bypass Safe)
 - FEAT_TLBIOS (TLB invalidate instructions in Outer Shareable domain)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
      */
     t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3);       /* FEAT_MTE3 */
     t = FIELD_DP64(t, ID_AA64PFR1, RAS_FRAC, 0);  /* FEAT_RASv1p1 + FEAT_DoubleFault */
+    t = FIELD_DP64(t, ID_AA64PFR1, SME, 1);       /* FEAT_SME */
     t = FIELD_DP64(t, ID_AA64PFR1, CSV2_FRAC, 0); /* FEAT_CSV2_2 */
     cpu->isar.id_aa64pfr1 = t;
 
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5);    /* FEAT_PMUv3p4 */
     cpu->isar.id_aa64dfr0 = t;
 
+    t = cpu->isar.id_aa64smfr0;
+    t = FIELD_DP64(t, ID_AA64SMFR0, F32F32, 1);   /* FEAT_SME */
+    t = FIELD_DP64(t, ID_AA64SMFR0, B16F32, 1);   /* FEAT_SME */
+    t = FIELD_DP64(t, ID_AA64SMFR0, F16F32, 1);   /* FEAT_SME */
+    t = FIELD_DP64(t, ID_AA64SMFR0, I8I32, 0xf);  /* FEAT_SME */
+    t = FIELD_DP64(t, ID_AA64SMFR0, F64F64, 1);   /* FEAT_SME_F64F64 */
+    t = FIELD_DP64(t, ID_AA64SMFR0, I16I64, 0xf); /* FEAT_SME_I16I64 */
+    t = FIELD_DP64(t, ID_AA64SMFR0, FA64, 1);     /* FEAT_SME_FA64 */
+    cpu->isar.id_aa64smfr0 = t;
+
     /* Replicate the same data to the 32-bit id registers. */
     aa32_max_features(cpu);
 
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-34-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
---
14
include/exec/exec-all.h | 13 ++++++++++--
8
linux-user/aarch64/target_cpu.h | 5 ++++-
15
accel/tcg/cputlb.c | 44 +++++++++++++++++++++++++++++------------
9
1 file changed, 4 insertions(+), 1 deletion(-)
16
exec.c | 5 +++--
17
3 files changed, 45 insertions(+), 17 deletions(-)
18
10
19
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
11
diff --git a/linux-user/aarch64/target_cpu.h b/linux-user/aarch64/target_cpu.h
20
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
21
--- a/include/exec/exec-all.h
13
--- a/linux-user/aarch64/target_cpu.h
22
+++ b/include/exec/exec-all.h
14
+++ b/linux-user/aarch64/target_cpu.h
23
@@ -XXX,XX +XXX,XX @@ void tb_lock_reset(void);
15
@@ -XXX,XX +XXX,XX @@ static inline void cpu_clone_regs_parent(CPUARMState *env, unsigned flags)
24
16
25
#if !defined(CONFIG_USER_ONLY)
17
static inline void cpu_set_tls(CPUARMState *env, target_ulong newtls)
26
27
-struct MemoryRegion *iotlb_to_region(CPUState *cpu,
28
- hwaddr index, MemTxAttrs attrs);
29
+/**
30
+ * iotlb_to_section:
31
+ * @cpu: CPU performing the access
32
+ * @index: TCG CPU IOTLB entry
33
+ *
34
+ * Given a TCG CPU IOTLB entry, return the MemoryRegionSection that
35
+ * it refers to. @index will have been initially created and returned
36
+ * by memory_region_section_get_iotlb().
37
+ */
38
+struct MemoryRegionSection *iotlb_to_section(CPUState *cpu,
39
+ hwaddr index, MemTxAttrs attrs);
40
41
void tlb_fill(CPUState *cpu, target_ulong addr, int size,
42
MMUAccessType access_type, int mmu_idx, uintptr_t retaddr);
43
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/accel/tcg/cputlb.c
46
+++ b/accel/tcg/cputlb.c
47
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
48
target_ulong addr, uintptr_t retaddr, int size)
49
{
18
{
50
CPUState *cpu = ENV_GET_CPU(env);
19
- /* Note that AArch64 Linux keeps the TLS pointer in TPIDR; this is
51
- hwaddr physaddr = iotlbentry->addr;
20
+ /*
52
- MemoryRegion *mr = iotlb_to_region(cpu, physaddr, iotlbentry->attrs);
21
+ * Note that AArch64 Linux keeps the TLS pointer in TPIDR; this is
53
+ hwaddr mr_offset;
22
* different from AArch32 Linux, which uses TPIDRRO.
54
+ MemoryRegionSection *section;
23
*/
55
+ MemoryRegion *mr;
24
env->cp15.tpidr_el[0] = newtls;
56
uint64_t val;
25
+ /* TPIDR2_EL0 is cleared with CLONE_SETTLS. */
57
bool locked = false;
26
+ env->cp15.tpidr2_el0 = 0;
58
MemTxResult r;
59
60
- physaddr = (physaddr & TARGET_PAGE_MASK) + addr;
61
+ section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
62
+ mr = section->mr;
63
+ mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
64
cpu->mem_io_pc = retaddr;
65
if (mr != &io_mem_rom && mr != &io_mem_notdirty && !cpu->can_do_io) {
66
cpu_io_recompile(cpu, retaddr);
67
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
68
qemu_mutex_lock_iothread();
69
locked = true;
70
}
71
- r = memory_region_dispatch_read(mr, physaddr,
72
+ r = memory_region_dispatch_read(mr, mr_offset,
73
&val, size, iotlbentry->attrs);
74
if (r != MEMTX_OK) {
75
+ hwaddr physaddr = mr_offset +
76
+ section->offset_within_address_space -
77
+ section->offset_within_region;
78
+
79
cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_LOAD,
80
mmu_idx, iotlbentry->attrs, r, retaddr);
81
}
82
@@ -XXX,XX +XXX,XX @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
83
uintptr_t retaddr, int size)
84
{
85
CPUState *cpu = ENV_GET_CPU(env);
86
- hwaddr physaddr = iotlbentry->addr;
87
- MemoryRegion *mr = iotlb_to_region(cpu, physaddr, iotlbentry->attrs);
88
+ hwaddr mr_offset;
89
+ MemoryRegionSection *section;
90
+ MemoryRegion *mr;
91
bool locked = false;
92
MemTxResult r;
93
94
- physaddr = (physaddr & TARGET_PAGE_MASK) + addr;
95
+ section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
96
+ mr = section->mr;
97
+ mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
98
if (mr != &io_mem_rom && mr != &io_mem_notdirty && !cpu->can_do_io) {
99
cpu_io_recompile(cpu, retaddr);
100
}
101
@@ -XXX,XX +XXX,XX @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
102
qemu_mutex_lock_iothread();
103
locked = true;
104
}
105
- r = memory_region_dispatch_write(mr, physaddr,
106
+ r = memory_region_dispatch_write(mr, mr_offset,
107
val, size, iotlbentry->attrs);
108
if (r != MEMTX_OK) {
109
+ hwaddr physaddr = mr_offset +
110
+ section->offset_within_address_space -
111
+ section->offset_within_region;
112
+
113
cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_STORE,
114
mmu_idx, iotlbentry->attrs, r, retaddr);
115
}
116
@@ -XXX,XX +XXX,XX @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
117
*/
118
tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
119
{
120
- int mmu_idx, index, pd;
121
+ int mmu_idx, index;
122
void *p;
123
MemoryRegion *mr;
124
+ MemoryRegionSection *section;
125
CPUState *cpu = ENV_GET_CPU(env);
126
CPUIOTLBEntry *iotlbentry;
127
- hwaddr physaddr;
128
+ hwaddr physaddr, mr_offset;
129
130
index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
131
mmu_idx = cpu_mmu_index(env, true);
132
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
133
}
134
}
135
iotlbentry = &env->iotlb[mmu_idx][index];
136
- pd = iotlbentry->addr & ~TARGET_PAGE_MASK;
137
- mr = iotlb_to_region(cpu, pd, iotlbentry->attrs);
138
+ section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
139
+ mr = section->mr;
140
if (memory_region_is_unassigned(mr)) {
141
qemu_mutex_lock_iothread();
142
if (memory_region_request_mmio_ptr(mr, addr)) {
143
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
144
* and use the MemTXResult it produced). However it is the
145
* simplest place we have currently available for the check.
146
*/
147
- physaddr = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
148
+ mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
149
+ physaddr = mr_offset +
150
+ section->offset_within_address_space -
151
+ section->offset_within_region;
152
cpu_transaction_failed(cpu, physaddr, addr, 0, MMU_INST_FETCH, mmu_idx,
153
iotlbentry->attrs, MEMTX_DECODE_ERROR, 0);
154
155
diff --git a/exec.c b/exec.c
156
index XXXXXXX..XXXXXXX 100644
157
--- a/exec.c
158
+++ b/exec.c
159
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps readonly_mem_ops = {
160
},
161
};
162
163
-MemoryRegion *iotlb_to_region(CPUState *cpu, hwaddr index, MemTxAttrs attrs)
164
+MemoryRegionSection *iotlb_to_section(CPUState *cpu,
165
+ hwaddr index, MemTxAttrs attrs)
166
{
167
int asidx = cpu_asidx_from_attrs(cpu, attrs);
168
CPUAddressSpace *cpuas = &cpu->cpu_ases[asidx];
169
AddressSpaceDispatch *d = atomic_rcu_read(&cpuas->memory_dispatch);
170
MemoryRegionSection *sections = d->map.sections;
171
172
- return sections[index & ~TARGET_PAGE_MASK].mr;
173
+ return &sections[index & ~TARGET_PAGE_MASK];
174
}
27
}
175
28
176
static void io_mem_init(void)
29
static inline abi_ulong get_sp_from_cpustate(CPUARMState *state)
177
--
30
--
178
2.17.1
31
2.25.1
179
180
diff view generated by jsdifflib
1
The codebase has a bit of a mix of different multiline
1
From: Richard Henderson <richard.henderson@linaro.org>
2
comment styles. State a preference for the Linux kernel
3
style:
4
/*
5
* Star on the left for each line.
6
* Leading slash-star and trailing star-slash
7
* each go on a line of their own.
8
*/
9
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20220708151540.18136-35-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Eric Blake <eblake@redhat.com>
12
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
13
Reviewed-by: Markus Armbruster <armbru@redhat.com>
14
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
15
Reviewed-by: Thomas Huth <thuth@redhat.com>
16
Reviewed-by: John Snow <jsnow@redhat.com>
17
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
18
Message-id: 20180611141716.3813-1-peter.maydell@linaro.org
19
---
7
---
20
CODING_STYLE | 17 +++++++++++++++++
8
linux-user/aarch64/cpu_loop.c | 9 +++++++++
21
1 file changed, 17 insertions(+)
9
1 file changed, 9 insertions(+)
22
10
23
diff --git a/CODING_STYLE b/CODING_STYLE
11
diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
24
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
25
--- a/CODING_STYLE
13
--- a/linux-user/aarch64/cpu_loop.c
26
+++ b/CODING_STYLE
14
+++ b/linux-user/aarch64/cpu_loop.c
27
@@ -XXX,XX +XXX,XX @@ We use traditional C-style /* */ comments and avoid // comments.
15
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
28
Rationale: The // form is valid in C99, so this is purely a matter of
16
29
consistency of style. The checkpatch script will warn you about this.
17
switch (trapnr) {
30
18
case EXCP_SWI:
31
+Multiline comment blocks should have a row of stars on the left,
19
+ /*
32
+and the initial /* and terminating */ both on their own lines:
20
+ * On syscall, PSTATE.ZA is preserved, along with the ZA matrix.
33
+ /*
21
+ * PSTATE.SM is cleared, per SMSTOP, which does ResetSVEState.
34
+ * like
22
+ */
35
+ * this
23
+ if (FIELD_EX64(env->svcr, SVCR, SM)) {
36
+ */
24
+ env->svcr = FIELD_DP64(env->svcr, SVCR, SM, 0);
37
+This is the same format required by the Linux kernel coding style.
25
+ arm_rebuild_hflags(env);
38
+
26
+ arm_reset_sve_state(env);
39
+(Some of the existing comments in the codebase use the GNU Coding
27
+ }
40
+Standards form which does not have stars on the left, or other
28
ret = do_syscall(env,
41
+variations; avoid these when writing new comments, but don't worry
29
env->xregs[8],
42
+about converting to the preferred form unless you're editing that
30
env->xregs[0],
43
+comment anyway.)
44
+
45
+Rationale: Consistency, and ease of visually picking out a multiline
46
+comment from the surrounding code.
47
+
48
8. trace-events style
49
50
8.1 0x prefix
51
--
31
--
52
2.17.1
32
2.25.1
53
54
diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-35-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
18
hw/core/or-irq.c | 39 +++++++++++++++++++++++++++++++++++++--
11
1 file changed, 8 insertions(+), 1 deletion(-)
19
2 files changed, 41 insertions(+), 3 deletions(-)
20
12
21
diff --git a/include/hw/or-irq.h b/include/hw/or-irq.h
13
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
22
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
23
--- a/include/hw/or-irq.h
15
--- a/linux-user/aarch64/signal.c
24
+++ b/include/hw/or-irq.h
16
+++ b/linux-user/aarch64/signal.c
25
@@ -XXX,XX +XXX,XX @@
17
@@ -XXX,XX +XXX,XX @@ struct target_extra_context {
26
18
struct target_sve_context {
27
#define TYPE_OR_IRQ "or-irq"
19
struct target_aarch64_ctx head;
28
20
uint16_t vl;
29
-#define MAX_OR_LINES 16
21
- uint16_t reserved[3];
30
+/* This can safely be increased if necessary without breaking
22
+ uint16_t flags;
31
+ * migration compatibility (as long as it remains greater than 15).
23
+ uint16_t reserved[2];
32
+ */
24
/* The actual SVE data immediately follows. It is laid out
33
+#define MAX_OR_LINES 32
25
* according to TARGET_SVE_SIG_{Z,P}REG_OFFSET, based off of
34
26
* the original struct pointer.
35
typedef struct OrIRQState qemu_or_irq;
27
@@ -XXX,XX +XXX,XX @@ struct target_sve_context {
36
28
#define TARGET_SVE_SIG_CONTEXT_SIZE(VQ) \
37
diff --git a/hw/core/or-irq.c b/hw/core/or-irq.c
29
(TARGET_SVE_SIG_PREG_OFFSET(VQ, 17))
38
index XXXXXXX..XXXXXXX 100644
30
39
--- a/hw/core/or-irq.c
31
+#define TARGET_SVE_SIG_FLAG_SM 1
40
+++ b/hw/core/or-irq.c
41
@@ -XXX,XX +XXX,XX @@ static void or_irq_init(Object *obj)
42
qdev_init_gpio_out(DEVICE(obj), &s->out_irq, 1);
43
}
44
45
+/* The original version of this device had a fixed 16 entries in its
46
+ * VMState array; devices with more inputs than this need to
47
+ * migrate the extra lines via a subsection.
48
+ * The subsection migrates as much of the levels[] array as is needed
49
+ * (including repeating the first 16 elements), to avoid the awkwardness
50
+ * of splitting it in two to meet the requirements of VMSTATE_VARRAY_UINT16.
51
+ */
52
+#define OLD_MAX_OR_LINES 16
53
+#if MAX_OR_LINES < OLD_MAX_OR_LINES
54
+#error MAX_OR_LINES must be at least 16 for migration compatibility
55
+#endif
56
+
32
+
57
+static bool vmstate_extras_needed(void *opaque)
33
struct target_rt_sigframe {
58
+{
34
struct target_siginfo info;
59
+ qemu_or_irq *s = OR_IRQ(opaque);
35
struct target_ucontext uc;
60
+
36
@@ -XXX,XX +XXX,XX @@ static void target_setup_sve_record(struct target_sve_context *sve,
61
+ return s->num_lines >= OLD_MAX_OR_LINES;
37
{
62
+}
38
int i, j;
63
+
39
64
+static const VMStateDescription vmstate_or_irq_extras = {
40
+ memset(sve, 0, sizeof(*sve));
65
+ .name = "or-irq-extras",
41
__put_user(TARGET_SVE_MAGIC, &sve->head.magic);
66
+ .version_id = 1,
42
__put_user(size, &sve->head.size);
67
+ .minimum_version_id = 1,
43
__put_user(vq * TARGET_SVE_VQ_BYTES, &sve->vl);
68
+ .needed = vmstate_extras_needed,
44
+ if (FIELD_EX64(env->svcr, SVCR, SM)) {
69
+ .fields = (VMStateField[]) {
45
+ __put_user(TARGET_SVE_SIG_FLAG_SM, &sve->flags);
70
+ VMSTATE_VARRAY_UINT16_UNSAFE(levels, qemu_or_irq, num_lines, 0,
46
+ }
71
+ vmstate_info_bool, bool),
47
72
+ VMSTATE_END_OF_LIST(),
48
/* Note that SVE regs are stored as a byte stream, with each byte element
73
+ },
49
* at a subsequent address. This corresponds to a little-endian store
74
+};
75
+
76
static const VMStateDescription vmstate_or_irq = {
77
.name = TYPE_OR_IRQ,
78
.version_id = 1,
79
.minimum_version_id = 1,
80
.fields = (VMStateField[]) {
81
- VMSTATE_BOOL_ARRAY(levels, qemu_or_irq, MAX_OR_LINES),
82
+ VMSTATE_BOOL_SUB_ARRAY(levels, qemu_or_irq, 0, OLD_MAX_OR_LINES),
83
VMSTATE_END_OF_LIST(),
84
- }
85
+ },
86
+ .subsections = (const VMStateDescription*[]) {
87
+ &vmstate_or_irq_extras,
88
+ NULL
89
+ },
90
};
91
92
static Property or_irq_properties[] = {
93
--
50
--
94
2.17.1
51
2.25.1
95
96
diff view generated by jsdifflib
1
Remove the now-unused armv7m_init() function. This was a legacy from
1
From: Richard Henderson <richard.henderson@linaro.org>
2
before we properly QOMified ARMv7M, and it has some flaws:
3
2
4
* it combines work that needs to be done by an SoC object (creating
3
Fold the return value setting into the goto, so each
5
and initializing the TYPE_ARMV7M object) with work that needs to
4
point of failure need not do both.
6
be done by the board model (setting the system up to load the ELF
7
file specified with -kernel)
8
* TYPE_ARMV7M creation failure is fatal, but an SoC object wants to
9
arrange to propagate the failure outward
10
* it uses allocate-and-create via qdev_create() whereas the current
11
preferred style for SoC objects is to do creation in-place
12
5
13
Board and SoC models can instead do the two jobs this function
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
was doing themselves, in the right places and with whatever their
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
15
preferred style/error handling is.
8
Message-id: 20220708151540.18136-37-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
linux-user/aarch64/signal.c | 26 +++++++++++---------------
12
1 file changed, 11 insertions(+), 15 deletions(-)
16
13
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
18
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
19
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
20
Message-id: 20180601144328.23817-3-peter.maydell@linaro.org
21
---
22
include/hw/arm/arm.h | 8 ++------
23
hw/arm/armv7m.c | 21 ---------------------
24
2 files changed, 2 insertions(+), 27 deletions(-)
25
26
diff --git a/include/hw/arm/arm.h b/include/hw/arm/arm.h
27
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
28
--- a/include/hw/arm/arm.h
16
--- a/linux-user/aarch64/signal.c
29
+++ b/include/hw/arm/arm.h
17
+++ b/linux-user/aarch64/signal.c
30
@@ -XXX,XX +XXX,XX @@ typedef enum {
18
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
31
ARM_ENDIANNESS_BE32,
19
struct target_sve_context *sve = NULL;
32
} arm_endianness;
20
uint64_t extra_datap = 0;
33
21
bool used_extra = false;
34
-/* armv7m.c */
22
- bool err = false;
35
-DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
23
int vq = 0, sve_size = 0;
36
- const char *kernel_filename, const char *cpu_type);
24
37
/**
25
target_restore_general_frame(env, sf);
38
* armv7m_load_kernel:
26
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
39
* @cpu: CPU
27
switch (magic) {
40
@@ -XXX,XX +XXX,XX @@ DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
28
case 0:
41
* @mem_size: mem_size: maximum image size to load
29
if (size != 0) {
42
*
30
- err = true;
43
* Load the guest image for an ARMv7M system. This must be called by
31
- goto exit;
44
- * any ARMv7M board, either directly or via armv7m_init(). (This is
32
+ goto err;
45
- * necessary to ensure that the CPU resets correctly on system reset,
33
}
46
- * as well as for kernel loading.)
34
if (used_extra) {
47
+ * any ARMv7M board. (This is necessary to ensure that the CPU resets
35
ctx = NULL;
48
+ * correctly on system reset, as well as for kernel loading.)
36
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
49
*/
37
50
void armv7m_load_kernel(ARMCPU *cpu, const char *kernel_filename, int mem_size);
38
case TARGET_FPSIMD_MAGIC:
51
39
if (fpsimd || size != sizeof(struct target_fpsimd_context)) {
52
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
40
- err = true;
53
index XXXXXXX..XXXXXXX 100644
41
- goto exit;
54
--- a/hw/arm/armv7m.c
42
+ goto err;
55
+++ b/hw/arm/armv7m.c
43
}
56
@@ -XXX,XX +XXX,XX @@ static void armv7m_reset(void *opaque)
44
fpsimd = (struct target_fpsimd_context *)ctx;
57
cpu_reset(CPU(cpu));
45
break;
46
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
47
break;
48
}
49
}
50
- err = true;
51
- goto exit;
52
+ goto err;
53
54
case TARGET_EXTRA_MAGIC:
55
if (extra || size != sizeof(struct target_extra_context)) {
56
- err = true;
57
- goto exit;
58
+ goto err;
59
}
60
__get_user(extra_datap,
61
&((struct target_extra_context *)ctx)->datap);
62
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
63
/* Unknown record -- we certainly didn't generate it.
64
* Did we in fact get out of sync?
65
*/
66
- err = true;
67
- goto exit;
68
+ goto err;
69
}
70
ctx = (void *)ctx + size;
71
}
72
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
73
if (fpsimd) {
74
target_restore_fpsimd_record(env, fpsimd);
75
} else {
76
- err = true;
77
+ goto err;
78
}
79
80
/* SVE data, if present, overwrites FPSIMD data. */
81
if (sve) {
82
target_restore_sve_record(env, sve, vq);
83
}
84
-
85
- exit:
86
unlock_user(extra, extra_datap, 0);
87
- return err;
88
+ return 0;
89
+
90
+ err:
91
+ unlock_user(extra, extra_datap, 0);
92
+ return 1;
58
}
93
}
59
94
60
-/* Init CPU and memory for a v7-M based board.
95
static abi_ulong get_sigframe(struct target_sigaction *ka,
61
- mem_size is in bytes.
62
- Returns the ARMv7M device. */
63
-
64
-DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
65
- const char *kernel_filename, const char *cpu_type)
66
-{
67
- DeviceState *armv7m;
68
-
69
- armv7m = qdev_create(NULL, TYPE_ARMV7M);
70
- qdev_prop_set_uint32(armv7m, "num-irq", num_irq);
71
- qdev_prop_set_string(armv7m, "cpu-type", cpu_type);
72
- object_property_set_link(OBJECT(armv7m), OBJECT(get_system_memory()),
73
- "memory", &error_abort);
74
- /* This will exit with an error if the user passed us a bad cpu_type */
75
- qdev_init_nofail(armv7m);
76
-
77
- armv7m_load_kernel(ARM_CPU(first_cpu), kernel_filename, mem_size);
78
- return armv7m;
79
-}
80
-
81
void armv7m_load_kernel(ARMCPU *cpu, const char *kernel_filename, int mem_size)
82
{
83
int image_size;
84
--
96
--
85
2.17.1
97
2.25.1
86
87
diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

Fold the return value setting into the goto, so each
point of failure need not do both.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-37-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
14
13
diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
15
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/arm/stellaris.c
17
--- a/linux-user/aarch64/signal.c
16
+++ b/hw/arm/stellaris.c
18
+++ b/linux-user/aarch64/signal.c
17
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
18
#include "qemu/log.h"
20
break;
19
#include "exec/address-spaces.h"
21
20
#include "sysemu/sysemu.h"
22
case TARGET_SVE_MAGIC:
21
+#include "hw/arm/armv7m.h"
23
+ if (sve || size < sizeof(struct target_sve_context)) {
22
#include "hw/char/pl011.h"
24
+ goto err;
23
#include "hw/misc/unimp.h"
25
+ }
24
#include "cpu.h"
26
if (cpu_isar_feature(aa64_sve, env_archcpu(env))) {
25
@@ -XXX,XX +XXX,XX @@ static void stellaris_init(MachineState *ms, stellaris_board_info *board)
27
vq = sve_vq(env);
26
&error_fatal);
28
sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
27
memory_region_add_subregion(system_memory, 0x20000000, sram);
29
- if (!sve && size == sve_size) {
28
30
+ if (size == sve_size) {
29
- nvic = armv7m_init(system_memory, flash_size, NUM_IRQ_LINES,
31
sve = (struct target_sve_context *)ctx;
30
- ms->kernel_filename, ms->cpu_type);
32
break;
31
+ nvic = qdev_create(NULL, TYPE_ARMV7M);
33
}
32
+ qdev_prop_set_uint32(nvic, "num-irq", NUM_IRQ_LINES);
33
+ qdev_prop_set_string(nvic, "cpu-type", ms->cpu_type);
34
+ object_property_set_link(OBJECT(nvic), OBJECT(get_system_memory()),
35
+ "memory", &error_abort);
36
+ /* This will exit with an error if the user passed us a bad cpu_type */
37
+ qdev_init_nofail(nvic);
38
39
qdev_connect_gpio_out_named(nvic, "SYSRESETREQ", 0,
40
qemu_allocate_irq(&do_sys_reset, NULL, 0));
41
@@ -XXX,XX +XXX,XX @@ static void stellaris_init(MachineState *ms, stellaris_board_info *board)
42
create_unimplemented_device("analogue-comparator", 0x4003c000, 0x1000);
43
create_unimplemented_device("hibernation", 0x400fc000, 0x1000);
44
create_unimplemented_device("flash-control", 0x400fd000, 0x1000);
45
+
46
+ armv7m_load_kernel(ARM_CPU(first_cpu), ms->kernel_filename, flash_size);
47
}
48
49
/* FIXME: Figure out how to generate these from stellaris_boards. */
50
--
34
--
51
2.17.1
35
2.25.1
52
53
diff view generated by jsdifflib
1
Convert the pckbd device away from using the old_mmio field
1
From: Richard Henderson <richard.henderson@linaro.org>
2
of MemoryRegionOps. This change only affects the memory-mapped
3
variant of the i8042, which is used by the Unicore32 'puv3'
4
board and the MIPS Jazz boards 'magnum' and 'pica61'.
5
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20220708151540.18136-39-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 20180601141223.26630-6-peter.maydell@linaro.org
9
---
7
---
10
hw/input/pckbd.c | 14 ++++++++------
8
linux-user/aarch64/signal.c | 3 +++
11
1 file changed, 8 insertions(+), 6 deletions(-)
9
1 file changed, 3 insertions(+)
12
10
13
diff --git a/hw/input/pckbd.c b/hw/input/pckbd.c
11
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
14
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/input/pckbd.c
13
--- a/linux-user/aarch64/signal.c
16
+++ b/hw/input/pckbd.c
14
+++ b/linux-user/aarch64/signal.c
17
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_kbd = {
15
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
18
};
16
__get_user(extra_size,
19
17
&((struct target_extra_context *)ctx)->size);
20
/* Memory mapped interface */
18
extra = lock_user(VERIFY_READ, extra_datap, extra_size, 0);
21
-static uint32_t kbd_mm_readb (void *opaque, hwaddr addr)
19
+ if (!extra) {
22
+static uint64_t kbd_mm_readfn(void *opaque, hwaddr addr, unsigned size)
20
+ return 1;
23
{
21
+ }
24
KBDState *s = opaque;
22
break;
25
23
26
@@ -XXX,XX +XXX,XX @@ static uint32_t kbd_mm_readb (void *opaque, hwaddr addr)
24
default:
27
return kbd_read_data(s, 0, 1) & 0xff;
28
}
29
30
-static void kbd_mm_writeb (void *opaque, hwaddr addr, uint32_t value)
31
+static void kbd_mm_writefn(void *opaque, hwaddr addr,
32
+ uint64_t value, unsigned size)
33
{
34
KBDState *s = opaque;
35
36
@@ -XXX,XX +XXX,XX @@ static void kbd_mm_writeb (void *opaque, hwaddr addr, uint32_t value)
37
kbd_write_data(s, 0, value & 0xff, 1);
38
}
39
40
+
41
static const MemoryRegionOps i8042_mmio_ops = {
42
+ .read = kbd_mm_readfn,
43
+ .write = kbd_mm_writefn,
44
+ .valid.min_access_size = 1,
45
+ .valid.max_access_size = 4,
46
.endianness = DEVICE_NATIVE_ENDIAN,
47
- .old_mmio = {
48
- .read = { kbd_mm_readb, kbd_mm_readb, kbd_mm_readb },
49
- .write = { kbd_mm_writeb, kbd_mm_writeb, kbd_mm_writeb },
50
- },
51
};
52
53
void i8042_mm_init(qemu_irq kbd_irq, qemu_irq mouse_irq,
54
--
25
--
55
2.17.1
26
2.25.1
56
57
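
The pckbd change above follows the same conversion pattern as the other
old_mmio removals in this series; in general form (MyDevState and the
mydev_* names are placeholders, not real QEMU symbols):

    static uint64_t mydev_read(void *opaque, hwaddr addr, unsigned size)
    {
        MyDevState *s = opaque;
        /* one callback handles 1-, 2- and 4-byte accesses via 'size' */
        return mydev_do_read(s, addr, size);
    }

    static void mydev_write(void *opaque, hwaddr addr,
                            uint64_t value, unsigned size)
    {
        MyDevState *s = opaque;
        mydev_do_write(s, addr, value, size);
    }

    static const MemoryRegionOps mydev_ops = {
        .read = mydev_read,
        .write = mydev_write,
        .valid.min_access_size = 1,
        .valid.max_access_size = 4,
        .endianness = DEVICE_NATIVE_ENDIAN,
    };

The .valid fields make explicit the access widths that the three
per-width old_mmio callback slots used to carry implicitly.
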
1
Add support for multiple IOMMU indexes to the IOMMU notifier APIs.
1
From: Richard Henderson <richard.henderson@linaro.org>
2
When initializing a notifier with iommu_notifier_init(), the caller
3
must pass the IOMMU index that it is interested in. When a change
4
happens, the IOMMU implementation must pass
5
memory_region_notify_iommu() the IOMMU index that has changed and
6
that notifiers must be called for.
7
2
8
IOMMUs which support only a single index don't need to change.
3
Move the checks out of the parsing loop and into the
9
Callers which only really support working with IOMMUs with a single
4
restore function. This more closely mirrors the code
10
index can use the result of passing MEMTXATTRS_UNSPECIFIED to
5
structure in the kernel, and is slightly clearer.
11
memory_region_iommu_attrs_to_index().
12
6
7
Reject rather than silently skip incorrect VL and SVE record sizes,
8
bringing our checks into line with those the kernel does.
9
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20220708151540.18136-40-richard.henderson@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
16
Message-id: 20180604152941.20374-3-peter.maydell@linaro.org
17
---
14
---
18
include/exec/memory.h | 7 ++++++-
15
linux-user/aarch64/signal.c | 51 +++++++++++++++++++++++++------------
19
hw/i386/intel_iommu.c | 6 +++---
16
1 file changed, 35 insertions(+), 16 deletions(-)
20
hw/ppc/spapr_iommu.c | 2 +-
21
hw/s390x/s390-pci-inst.c | 4 ++--
22
hw/vfio/common.c | 6 +++++-
23
hw/virtio/vhost.c | 7 ++++++-
24
memory.c | 8 +++++++-
25
7 files changed, 30 insertions(+), 10 deletions(-)
26
17
27
diff --git a/include/exec/memory.h b/include/exec/memory.h
18
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
28
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
29
--- a/include/exec/memory.h
20
--- a/linux-user/aarch64/signal.c
30
+++ b/include/exec/memory.h
21
+++ b/linux-user/aarch64/signal.c
31
@@ -XXX,XX +XXX,XX @@ struct IOMMUNotifier {
22
@@ -XXX,XX +XXX,XX @@ static void target_restore_fpsimd_record(CPUARMState *env,
32
/* Notify for address space range start <= addr <= end */
23
}
33
hwaddr start;
24
}
34
hwaddr end;
25
35
+ int iommu_idx;
26
-static void target_restore_sve_record(CPUARMState *env,
36
QLIST_ENTRY(IOMMUNotifier) node;
27
- struct target_sve_context *sve, int vq)
37
};
28
+static bool target_restore_sve_record(CPUARMState *env,
38
typedef struct IOMMUNotifier IOMMUNotifier;
29
+ struct target_sve_context *sve,
39
30
+ int size)
40
static inline void iommu_notifier_init(IOMMUNotifier *n, IOMMUNotify fn,
41
IOMMUNotifierFlag flags,
42
- hwaddr start, hwaddr end)
43
+ hwaddr start, hwaddr end,
44
+ int iommu_idx)
45
{
31
{
46
n->notify = fn;
32
- int i, j;
47
n->notifier_flags = flags;
33
+ int i, j, vl, vq;
48
n->start = start;
34
49
n->end = end;
35
- /* Note that SVE regs are stored as a byte stream, with each byte element
50
+ n->iommu_idx = iommu_idx;
36
+ if (!cpu_isar_feature(aa64_sve, env_archcpu(env))) {
51
}
37
+ return false;
52
38
+ }
53
/*
39
+
54
@@ -XXX,XX +XXX,XX @@ uint64_t memory_region_iommu_get_min_page_size(IOMMUMemoryRegion *iommu_mr);
40
+ __get_user(vl, &sve->vl);
55
* should be notified with an UNMAP followed by a MAP.
41
+ vq = sve_vq(env);
56
*
42
+
57
* @iommu_mr: the memory region that was changed
43
+ /* Reject mismatched VL. */
58
+ * @iommu_idx: the IOMMU index for the translation table which has changed
44
+ if (vl != vq * TARGET_SVE_VQ_BYTES) {
59
* @entry: the new entry in the IOMMU translation table. The entry
45
+ return false;
60
* replaces all old entries for the same virtual I/O address range.
46
+ }
61
* Deleted entries have .@perm == 0.
47
+
62
*/
48
+ /* Accept empty record -- used to clear PSTATE.SM. */
63
void memory_region_notify_iommu(IOMMUMemoryRegion *iommu_mr,
49
+ if (size <= sizeof(*sve)) {
64
+ int iommu_idx,
50
+ return true;
65
IOMMUTLBEntry entry);
51
+ }
66
52
+
67
/**
53
+ /* Reject non-empty but incomplete record. */
68
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
54
+ if (size < TARGET_SVE_SIG_CONTEXT_SIZE(vq)) {
69
index XXXXXXX..XXXXXXX 100644
55
+ return false;
70
--- a/hw/i386/intel_iommu.c
56
+ }
71
+++ b/hw/i386/intel_iommu.c
57
+
72
@@ -XXX,XX +XXX,XX @@ static int vtd_dev_to_context_entry(IntelIOMMUState *s, uint8_t bus_num,
58
+ /*
73
static int vtd_sync_shadow_page_hook(IOMMUTLBEntry *entry,
59
+ * Note that SVE regs are stored as a byte stream, with each byte element
74
void *private)
60
* at a subsequent address. This corresponds to a little-endian load
75
{
61
* of our 64-bit hunks.
76
- memory_region_notify_iommu((IOMMUMemoryRegion *)private, *entry);
62
*/
77
+ memory_region_notify_iommu((IOMMUMemoryRegion *)private, 0, *entry);
63
@@ -XXX,XX +XXX,XX @@ static void target_restore_sve_record(CPUARMState *env,
78
return 0;
79
}
80
81
@@ -XXX,XX +XXX,XX @@ static void vtd_iotlb_page_invalidate_notify(IntelIOMMUState *s,
82
.addr_mask = size - 1,
83
.perm = IOMMU_NONE,
84
};
85
- memory_region_notify_iommu(&vtd_as->iommu, entry);
86
+ memory_region_notify_iommu(&vtd_as->iommu, 0, entry);
87
}
64
}
88
}
65
}
89
}
66
}
90
@@ -XXX,XX +XXX,XX @@ static bool vtd_process_device_iotlb_desc(IntelIOMMUState *s,
67
+ return true;
91
entry.iova = addr;
92
entry.perm = IOMMU_NONE;
93
entry.translated_addr = 0;
94
- memory_region_notify_iommu(&vtd_dev_as->iommu, entry);
95
+ memory_region_notify_iommu(&vtd_dev_as->iommu, 0, entry);
96
97
done:
98
return true;
99
diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
100
index XXXXXXX..XXXXXXX 100644
101
--- a/hw/ppc/spapr_iommu.c
102
+++ b/hw/ppc/spapr_iommu.c
103
@@ -XXX,XX +XXX,XX @@ static target_ulong put_tce_emu(sPAPRTCETable *tcet, target_ulong ioba,
104
entry.translated_addr = tce & page_mask;
105
entry.addr_mask = ~page_mask;
106
entry.perm = spapr_tce_iommu_access_flags(tce);
107
- memory_region_notify_iommu(&tcet->iommu, entry);
108
+ memory_region_notify_iommu(&tcet->iommu, 0, entry);
109
110
return H_SUCCESS;
111
}
68
}
112
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
69
113
index XXXXXXX..XXXXXXX 100644
70
static int target_restore_sigframe(CPUARMState *env,
114
--- a/hw/s390x/s390-pci-inst.c
71
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
115
+++ b/hw/s390x/s390-pci-inst.c
72
struct target_sve_context *sve = NULL;
116
@@ -XXX,XX +XXX,XX @@ static void s390_pci_update_iotlb(S390PCIIOMMU *iommu, S390IOTLBEntry *entry)
73
uint64_t extra_datap = 0;
74
bool used_extra = false;
75
- int vq = 0, sve_size = 0;
76
+ int sve_size = 0;
77
78
target_restore_general_frame(env, sf);
79
80
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
81
if (sve || size < sizeof(struct target_sve_context)) {
82
goto err;
117
}
83
}
118
84
- if (cpu_isar_feature(aa64_sve, env_archcpu(env))) {
119
notify.perm = IOMMU_NONE;
85
- vq = sve_vq(env);
120
- memory_region_notify_iommu(&iommu->iommu_mr, notify);
86
- sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
121
+ memory_region_notify_iommu(&iommu->iommu_mr, 0, notify);
87
- if (size == sve_size) {
122
notify.perm = entry->perm;
88
- sve = (struct target_sve_context *)ctx;
123
}
89
- break;
124
90
- }
125
@@ -XXX,XX +XXX,XX @@ static void s390_pci_update_iotlb(S390PCIIOMMU *iommu, S390IOTLBEntry *entry)
91
- }
126
g_hash_table_replace(iommu->iotlb, &cache->iova, cache);
92
- goto err;
93
+ sve = (struct target_sve_context *)ctx;
94
+ sve_size = size;
95
+ break;
96
97
case TARGET_EXTRA_MAGIC:
98
if (extra || size != sizeof(struct target_extra_context)) {
99
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
127
}
100
}
128
101
129
- memory_region_notify_iommu(&iommu->iommu_mr, notify);
102
/* SVE data, if present, overwrites FPSIMD data. */
130
+ memory_region_notify_iommu(&iommu->iommu_mr, 0, notify);
103
- if (sve) {
131
}
104
- target_restore_sve_record(env, sve, vq);
132
105
+ if (sve && !target_restore_sve_record(env, sve, sve_size)) {
133
int rpcit_service_call(S390CPU *cpu, uint8_t r1, uint8_t r2, uintptr_t ra)
106
+ goto err;
134
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
135
index XXXXXXX..XXXXXXX 100644
136
--- a/hw/vfio/common.c
137
+++ b/hw/vfio/common.c
138
@@ -XXX,XX +XXX,XX @@ static void vfio_listener_region_add(MemoryListener *listener,
139
if (memory_region_is_iommu(section->mr)) {
140
VFIOGuestIOMMU *giommu;
141
IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
142
+ int iommu_idx;
143
144
trace_vfio_listener_region_add_iommu(iova, end);
145
/*
146
@@ -XXX,XX +XXX,XX @@ static void vfio_listener_region_add(MemoryListener *listener,
147
llend = int128_add(int128_make64(section->offset_within_region),
148
section->size);
149
llend = int128_sub(llend, int128_one());
150
+ iommu_idx = memory_region_iommu_attrs_to_index(iommu_mr,
151
+ MEMTXATTRS_UNSPECIFIED);
152
iommu_notifier_init(&giommu->n, vfio_iommu_map_notify,
153
IOMMU_NOTIFIER_ALL,
154
section->offset_within_region,
155
- int128_get64(llend));
156
+ int128_get64(llend),
157
+ iommu_idx);
158
QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);
159
160
memory_region_register_iommu_notifier(section->mr, &giommu->n);
161
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
162
index XXXXXXX..XXXXXXX 100644
163
--- a/hw/virtio/vhost.c
164
+++ b/hw/virtio/vhost.c
165
@@ -XXX,XX +XXX,XX @@ static void vhost_iommu_region_add(MemoryListener *listener,
166
iommu_listener);
167
struct vhost_iommu *iommu;
168
Int128 end;
169
+ int iommu_idx;
170
+ IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
171
172
if (!memory_region_is_iommu(section->mr)) {
173
return;
174
@@ -XXX,XX +XXX,XX @@ static void vhost_iommu_region_add(MemoryListener *listener,
175
end = int128_add(int128_make64(section->offset_within_region),
176
section->size);
177
end = int128_sub(end, int128_one());
178
+ iommu_idx = memory_region_iommu_attrs_to_index(iommu_mr,
179
+ MEMTXATTRS_UNSPECIFIED);
180
iommu_notifier_init(&iommu->n, vhost_iommu_unmap_notify,
181
IOMMU_NOTIFIER_UNMAP,
182
section->offset_within_region,
183
- int128_get64(end));
184
+ int128_get64(end),
185
+ iommu_idx);
186
iommu->mr = section->mr;
187
iommu->iommu_offset = section->offset_within_address_space -
188
section->offset_within_region;
189
diff --git a/memory.c b/memory.c
190
index XXXXXXX..XXXXXXX 100644
191
--- a/memory.c
192
+++ b/memory.c
193
@@ -XXX,XX +XXX,XX @@ void memory_region_register_iommu_notifier(MemoryRegion *mr,
194
iommu_mr = IOMMU_MEMORY_REGION(mr);
195
assert(n->notifier_flags != IOMMU_NOTIFIER_NONE);
196
assert(n->start <= n->end);
197
+ assert(n->iommu_idx >= 0 &&
198
+ n->iommu_idx < memory_region_iommu_num_indexes(iommu_mr));
199
+
200
QLIST_INSERT_HEAD(&iommu_mr->iommu_notify, n, node);
201
memory_region_update_iommu_notify_flags(iommu_mr);
202
}
203
@@ -XXX,XX +XXX,XX @@ void memory_region_notify_one(IOMMUNotifier *notifier,
204
}
205
206
void memory_region_notify_iommu(IOMMUMemoryRegion *iommu_mr,
207
+ int iommu_idx,
208
IOMMUTLBEntry entry)
209
{
210
IOMMUNotifier *iommu_notifier;
211
@@ -XXX,XX +XXX,XX @@ void memory_region_notify_iommu(IOMMUMemoryRegion *iommu_mr,
212
assert(memory_region_is_iommu(MEMORY_REGION(iommu_mr)));
213
214
IOMMU_NOTIFIER_FOREACH(iommu_notifier, iommu_mr) {
215
- memory_region_notify_one(iommu_notifier, &entry);
216
+ if (iommu_notifier->iommu_idx == iommu_idx) {
217
+ memory_region_notify_one(iommu_notifier, &entry);
218
+ }
219
}
107
}
220
}
108
unlock_user(extra, extra_datap, 0);
221
109
return 0;
222
--
110
--
223
2.17.1
111
2.25.1
224
225
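
For the IOMMU index change above, the full registration sequence for a
caller that only supports a single index (as vfio and vhost do) looks
roughly like this; the notifier variable and callback name are
illustrative:

    IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
    int iommu_idx = memory_region_iommu_attrs_to_index(iommu_mr,
                                                       MEMTXATTRS_UNSPECIFIED);

    iommu_notifier_init(&n, my_map_notify, IOMMU_NOTIFIER_ALL,
                        section->offset_within_region,
                        int128_get64(llend), iommu_idx);
    memory_region_register_iommu_notifier(section->mr, &n);

On the other side, an IOMMU implementation reports which of its
translation tables changed, and only notifiers registered for that
index are called:

    memory_region_notify_iommu(iommu_mr, iommu_idx, entry);
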
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
Set the SM bit in the SVE record on signal delivery, create the ZA record.
4
Restore SM and ZA state according to the records present on return.
2
5
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180613015641.5667-4-richard.henderson@linaro.org
8
Message-id: 20220708151540.18136-41-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/helper-sve.h | 6 +
11
linux-user/aarch64/signal.c | 167 +++++++++++++++++++++++++++++++++---
9
target/arm/sve_helper.c | 290 +++++++++++++++++++++++++++++++++++++
12
1 file changed, 154 insertions(+), 13 deletions(-)
10
target/arm/translate-sve.c | 120 +++++++++++++++
11
target/arm/sve.decode | 18 +++
12
4 files changed, 434 insertions(+)
13
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
14
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
16
--- a/linux-user/aarch64/signal.c
17
+++ b/target/arm/helper-sve.h
17
+++ b/linux-user/aarch64/signal.c
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve_uunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
18
@@ -XXX,XX +XXX,XX @@ struct target_sve_context {
19
DEF_HELPER_FLAGS_3(sve_uunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
19
20
DEF_HELPER_FLAGS_3(sve_uunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
20
#define TARGET_SVE_SIG_FLAG_SM 1
21
21
22
+DEF_HELPER_FLAGS_4(sve_zip_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
+#define TARGET_ZA_MAGIC 0x54366345
23
+DEF_HELPER_FLAGS_4(sve_uzp_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
+
24
+DEF_HELPER_FLAGS_4(sve_trn_p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+struct target_za_context {
25
+DEF_HELPER_FLAGS_3(sve_rev_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
25
+ struct target_aarch64_ctx head;
26
+DEF_HELPER_FLAGS_3(sve_punpk_p, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
26
+ uint16_t vl;
27
+
27
+ uint16_t reserved[3];
28
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
28
+ /* The actual ZA data immediately follows. */
29
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
30
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
31
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/sve_helper.c
34
+++ b/target/arm/sve_helper.c
35
@@ -XXX,XX +XXX,XX @@ DO_UNPK(sve_uunpk_s, uint32_t, uint16_t, H4, H2)
36
DO_UNPK(sve_uunpk_d, uint64_t, uint32_t, , H4)
37
38
#undef DO_UNPK
39
+
40
+/* Mask of bits included in the even numbered predicates of width esz.
41
+ * We also use this for expand_bits/compress_bits, and so extend the
42
+ * same pattern out to 16-bit units.
43
+ */
44
+static const uint64_t even_bit_esz_masks[5] = {
45
+ 0x5555555555555555ull,
46
+ 0x3333333333333333ull,
47
+ 0x0f0f0f0f0f0f0f0full,
48
+ 0x00ff00ff00ff00ffull,
49
+ 0x0000ffff0000ffffull,
50
+};
29
+};
51
+
30
+
52
+/* Zero-extend units of 2**N bits to units of 2**(N+1) bits.
31
+#define TARGET_ZA_SIG_REGS_OFFSET \
53
+ * For N==0, this corresponds to the operation that in qemu/bitops.h
32
+ QEMU_ALIGN_UP(sizeof(struct target_za_context), TARGET_SVE_VQ_BYTES)
54
+ * we call half_shuffle64; this algorithm is from Hacker's Delight,
33
+#define TARGET_ZA_SIG_ZAV_OFFSET(VQ, N) \
55
+ * section 7-2 Shuffling Bits.
34
+ (TARGET_ZA_SIG_REGS_OFFSET + (VQ) * TARGET_SVE_VQ_BYTES * (N))
56
+ */
35
+#define TARGET_ZA_SIG_CONTEXT_SIZE(VQ) \
57
+static uint64_t expand_bits(uint64_t x, int n)
36
+ TARGET_ZA_SIG_ZAV_OFFSET(VQ, VQ * TARGET_SVE_VQ_BYTES)
37
+
38
struct target_rt_sigframe {
39
struct target_siginfo info;
40
struct target_ucontext uc;
41
@@ -XXX,XX +XXX,XX @@ static void target_setup_end_record(struct target_aarch64_ctx *end)
42
}
43
44
static void target_setup_sve_record(struct target_sve_context *sve,
45
- CPUARMState *env, int vq, int size)
46
+ CPUARMState *env, int size)
47
{
48
- int i, j;
49
+ int i, j, vq = sve_vq(env);
50
51
memset(sve, 0, sizeof(*sve));
52
__put_user(TARGET_SVE_MAGIC, &sve->head.magic);
53
@@ -XXX,XX +XXX,XX @@ static void target_setup_sve_record(struct target_sve_context *sve,
54
}
55
}
56
57
+static void target_setup_za_record(struct target_za_context *za,
58
+ CPUARMState *env, int size)
58
+{
59
+{
59
+ int i;
60
+ int vq = sme_vq(env);
60
+
61
+ int vl = vq * TARGET_SVE_VQ_BYTES;
61
+ x &= 0xffffffffu;
62
+ int i, j;
62
+ for (i = 4; i >= n; i--) {
63
+
63
+ int sh = 1 << i;
64
+ memset(za, 0, sizeof(*za));
64
+ x = ((x << sh) | x) & even_bit_esz_masks[i];
65
+ __put_user(TARGET_ZA_MAGIC, &za->head.magic);
65
+ }
66
+ __put_user(size, &za->head.size);
66
+ return x;
67
+ __put_user(vl, &za->vl);
68
+
69
+ if (size == TARGET_ZA_SIG_CONTEXT_SIZE(0)) {
70
+ return;
71
+ }
72
+ assert(size == TARGET_ZA_SIG_CONTEXT_SIZE(vq));
73
+
74
+ /*
75
+ * Note that ZA vectors are stored as a byte stream,
76
+ * with each byte element at a subsequent address.
77
+ */
78
+ for (i = 0; i < vl; ++i) {
79
+ uint64_t *z = (void *)za + TARGET_ZA_SIG_ZAV_OFFSET(vq, i);
80
+ for (j = 0; j < vq * 2; ++j) {
81
+ __put_user_e(env->zarray[i].d[j], z + j, le);
82
+ }
83
+ }
67
+}
84
+}
68
+
85
+
69
+/* Compress units of 2**(N+1) bits to units of 2**N bits.
86
static void target_restore_general_frame(CPUARMState *env,
70
+ * For N==0, this corresponds to the operation that in qemu/bitops.h
87
struct target_rt_sigframe *sf)
71
+ * we call half_unshuffle64; this algorithm is from Hacker's Delight,
88
{
72
+ * section 7-2 Shuffling Bits, where it is called an inverse half shuffle.
89
@@ -XXX,XX +XXX,XX @@ static void target_restore_fpsimd_record(CPUARMState *env,
73
+ */
90
74
+static uint64_t compress_bits(uint64_t x, int n)
91
static bool target_restore_sve_record(CPUARMState *env,
75
+{
92
struct target_sve_context *sve,
76
+ int i;
93
- int size)
77
+
94
+ int size, int *svcr)
78
+ for (i = n; i <= 4; i++) {
95
{
79
+ int sh = 1 << i;
96
- int i, j, vl, vq;
80
+ x &= even_bit_esz_masks[i];
97
+ int i, j, vl, vq, flags;
81
+ x = (x >> sh) | x;
98
+ bool sm;
82
+ }
99
83
+ return x & 0xffffffffu;
100
- if (!cpu_isar_feature(aa64_sve, env_archcpu(env))) {
84
+}
101
+ __get_user(vl, &sve->vl);
85
+
102
+ __get_user(flags, &sve->flags);
86
+void HELPER(sve_zip_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
103
+
87
+{
104
+ sm = flags & TARGET_SVE_SIG_FLAG_SM;
88
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
105
+
89
+ int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
106
+ /* The cpu must support Streaming or Non-streaming SVE. */
90
+ intptr_t high = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
107
+ if (sm
91
+ uint64_t *d = vd;
108
+ ? !cpu_isar_feature(aa64_sme, env_archcpu(env))
92
+ intptr_t i;
109
+ : !cpu_isar_feature(aa64_sve, env_archcpu(env))) {
93
+
110
return false;
94
+ if (oprsz <= 8) {
111
}
95
+ uint64_t nn = *(uint64_t *)vn;
112
96
+ uint64_t mm = *(uint64_t *)vm;
113
- __get_user(vl, &sve->vl);
97
+ int half = 4 * oprsz;
114
- vq = sve_vq(env);
98
+
115
+ /*
99
+ nn = extract64(nn, high * half, half);
116
+ * Note that we cannot use sve_vq() because that depends on the
100
+ mm = extract64(mm, high * half, half);
117
+ * current setting of PSTATE.SM, not the state to be restored.
101
+ nn = expand_bits(nn, esz);
118
+ */
102
+ mm = expand_bits(mm, esz);
119
+ vq = sve_vqm1_for_el_sm(env, 0, sm) + 1;
103
+ d[0] = nn + (mm << (1 << esz));
120
104
+ } else {
121
/* Reject mismatched VL. */
105
+ ARMPredicateReg tmp_n, tmp_m;
122
if (vl != vq * TARGET_SVE_VQ_BYTES) {
106
+
123
@@ -XXX,XX +XXX,XX @@ static bool target_restore_sve_record(CPUARMState *env,
107
+ /* We produce output faster than we consume input.
124
return false;
108
+ Therefore we must be mindful of possible overlap. */
125
}
109
+ if ((vn - vd) < (uintptr_t)oprsz) {
126
110
+ vn = memcpy(&tmp_n, vn, oprsz);
127
+ *svcr = FIELD_DP64(*svcr, SVCR, SM, sm);
111
+ }
128
+
112
+ if ((vm - vd) < (uintptr_t)oprsz) {
129
/*
113
+ vm = memcpy(&tmp_m, vm, oprsz);
130
* Note that SVE regs are stored as a byte stream, with each byte element
114
+ }
131
* at a subsequent address. This corresponds to a little-endian load
115
+ if (high) {
132
@@ -XXX,XX +XXX,XX @@ static bool target_restore_sve_record(CPUARMState *env,
116
+ high = oprsz >> 1;
117
+ }
118
+
119
+ if ((high & 3) == 0) {
120
+ uint32_t *n = vn, *m = vm;
121
+ high >>= 2;
122
+
123
+ for (i = 0; i < DIV_ROUND_UP(oprsz, 8); i++) {
124
+ uint64_t nn = n[H4(high + i)];
125
+ uint64_t mm = m[H4(high + i)];
126
+
127
+ nn = expand_bits(nn, esz);
128
+ mm = expand_bits(mm, esz);
129
+ d[i] = nn + (mm << (1 << esz));
130
+ }
131
+ } else {
132
+ uint8_t *n = vn, *m = vm;
133
+ uint16_t *d16 = vd;
134
+
135
+ for (i = 0; i < oprsz / 2; i++) {
136
+ uint16_t nn = n[H1(high + i)];
137
+ uint16_t mm = m[H1(high + i)];
138
+
139
+ nn = expand_bits(nn, esz);
140
+ mm = expand_bits(mm, esz);
141
+ d16[H2(i)] = nn + (mm << (1 << esz));
142
+ }
143
+ }
144
+ }
145
+}
146
+
147
+void HELPER(sve_uzp_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
148
+{
149
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
150
+ int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
151
+ int odd = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1) << esz;
152
+ uint64_t *d = vd, *n = vn, *m = vm;
153
+ uint64_t l, h;
154
+ intptr_t i;
155
+
156
+ if (oprsz <= 8) {
157
+ l = compress_bits(n[0] >> odd, esz);
158
+ h = compress_bits(m[0] >> odd, esz);
159
+ d[0] = extract64(l + (h << (4 * oprsz)), 0, 8 * oprsz);
160
+ } else {
161
+ ARMPredicateReg tmp_m;
162
+ intptr_t oprsz_16 = oprsz / 16;
163
+
164
+ if ((vm - vd) < (uintptr_t)oprsz) {
165
+ m = memcpy(&tmp_m, vm, oprsz);
166
+ }
167
+
168
+ for (i = 0; i < oprsz_16; i++) {
169
+ l = n[2 * i + 0];
170
+ h = n[2 * i + 1];
171
+ l = compress_bits(l >> odd, esz);
172
+ h = compress_bits(h >> odd, esz);
173
+ d[i] = l + (h << 32);
174
+ }
175
+
176
+ /* For VL which is not a power of 2, the results from M do not
177
+ align nicely with the uint64_t for D. Put the aligned results
178
+ from M into TMP_M and then copy it into place afterward. */
179
+ if (oprsz & 15) {
180
+ d[i] = compress_bits(n[2 * i] >> odd, esz);
181
+
182
+ for (i = 0; i < oprsz_16; i++) {
183
+ l = m[2 * i + 0];
184
+ h = m[2 * i + 1];
185
+ l = compress_bits(l >> odd, esz);
186
+ h = compress_bits(h >> odd, esz);
187
+ tmp_m.p[i] = l + (h << 32);
188
+ }
189
+ tmp_m.p[i] = compress_bits(m[2 * i] >> odd, esz);
190
+
191
+ swap_memmove(vd + oprsz / 2, &tmp_m, oprsz / 2);
192
+ } else {
193
+ for (i = 0; i < oprsz_16; i++) {
194
+ l = m[2 * i + 0];
195
+ h = m[2 * i + 1];
196
+ l = compress_bits(l >> odd, esz);
197
+ h = compress_bits(h >> odd, esz);
198
+ d[oprsz_16 + i] = l + (h << 32);
199
+ }
200
+ }
201
+ }
202
+}
203
+
204
+void HELPER(sve_trn_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
205
+{
206
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
207
+ uintptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
208
+ bool odd = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
209
+ uint64_t *d = vd, *n = vn, *m = vm;
210
+ uint64_t mask;
211
+ int shr, shl;
212
+ intptr_t i;
213
+
214
+ shl = 1 << esz;
215
+ shr = 0;
216
+ mask = even_bit_esz_masks[esz];
217
+ if (odd) {
218
+ mask <<= shl;
219
+ shr = shl;
220
+ shl = 0;
221
+ }
222
+
223
+ for (i = 0; i < DIV_ROUND_UP(oprsz, 8); i++) {
224
+ uint64_t nn = (n[i] & mask) >> shr;
225
+ uint64_t mm = (m[i] & mask) << shl;
226
+ d[i] = nn + mm;
227
+ }
228
+}
229
+
230
+/* Reverse units of 2**N bits. */
231
+static uint64_t reverse_bits_64(uint64_t x, int n)
232
+{
233
+ int i, sh;
234
+
235
+ x = bswap64(x);
236
+ for (i = 2, sh = 4; i >= n; i--, sh >>= 1) {
237
+ uint64_t mask = even_bit_esz_masks[i];
238
+ x = ((x & mask) << sh) | ((x >> sh) & mask);
239
+ }
240
+ return x;
241
+}
242
+
243
+static uint8_t reverse_bits_8(uint8_t x, int n)
244
+{
245
+ static const uint8_t mask[3] = { 0x55, 0x33, 0x0f };
246
+ int i, sh;
247
+
248
+ for (i = 2, sh = 4; i >= n; i--, sh >>= 1) {
249
+ x = ((x & mask[i]) << sh) | ((x >> sh) & mask[i]);
250
+ }
251
+ return x;
252
+}
253
+
254
+void HELPER(sve_rev_p)(void *vd, void *vn, uint32_t pred_desc)
255
+{
256
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
257
+ int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
258
+ intptr_t i, oprsz_2 = oprsz / 2;
259
+
260
+ if (oprsz <= 8) {
261
+ uint64_t l = *(uint64_t *)vn;
262
+ l = reverse_bits_64(l << (64 - 8 * oprsz), esz);
263
+ *(uint64_t *)vd = l;
264
+ } else if ((oprsz & 15) == 0) {
265
+ for (i = 0; i < oprsz_2; i += 8) {
266
+ intptr_t ih = oprsz - 8 - i;
267
+ uint64_t l = reverse_bits_64(*(uint64_t *)(vn + i), esz);
268
+ uint64_t h = reverse_bits_64(*(uint64_t *)(vn + ih), esz);
269
+ *(uint64_t *)(vd + i) = h;
270
+ *(uint64_t *)(vd + ih) = l;
271
+ }
272
+ } else {
273
+ for (i = 0; i < oprsz_2; i += 1) {
274
+ intptr_t il = H1(i);
275
+ intptr_t ih = H1(oprsz - 1 - i);
276
+ uint8_t l = reverse_bits_8(*(uint8_t *)(vn + il), esz);
277
+ uint8_t h = reverse_bits_8(*(uint8_t *)(vn + ih), esz);
278
+ *(uint8_t *)(vd + il) = h;
279
+ *(uint8_t *)(vd + ih) = l;
280
+ }
281
+ }
282
+}
283
+
284
+void HELPER(sve_punpk_p)(void *vd, void *vn, uint32_t pred_desc)
285
+{
286
+ intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
287
+ intptr_t high = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
288
+ uint64_t *d = vd;
289
+ intptr_t i;
290
+
291
+ if (oprsz <= 8) {
292
+ uint64_t nn = *(uint64_t *)vn;
293
+ int half = 4 * oprsz;
294
+
295
+ nn = extract64(nn, high * half, half);
296
+ nn = expand_bits(nn, 0);
297
+ d[0] = nn;
298
+ } else {
299
+ ARMPredicateReg tmp_n;
300
+
301
+ /* We produce output faster than we consume input.
302
+ Therefore we must be mindful of possible overlap. */
303
+ if ((vn - vd) < (uintptr_t)oprsz) {
304
+ vn = memcpy(&tmp_n, vn, oprsz);
305
+ }
306
+ if (high) {
307
+ high = oprsz >> 1;
308
+ }
309
+
310
+ if ((high & 3) == 0) {
311
+ uint32_t *n = vn;
312
+ high >>= 2;
313
+
314
+ for (i = 0; i < DIV_ROUND_UP(oprsz, 8); i++) {
315
+ uint64_t nn = n[H4(high + i)];
316
+ d[i] = expand_bits(nn, 0);
317
+ }
318
+ } else {
319
+ uint16_t *d16 = vd;
320
+ uint8_t *n = vn;
321
+
322
+ for (i = 0; i < oprsz / 2; i++) {
323
+ uint16_t nn = n[H1(high + i)];
324
+ d16[H2(i)] = expand_bits(nn, 0);
325
+ }
326
+ }
327
+ }
328
+}
329
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
330
index XXXXXXX..XXXXXXX 100644
331
--- a/target/arm/translate-sve.c
332
+++ b/target/arm/translate-sve.c
333
@@ -XXX,XX +XXX,XX @@ static bool trans_UNPK(DisasContext *s, arg_UNPK *a, uint32_t insn)
334
return true;
133
return true;
335
}
134
}
336
135
337
+/*
136
+static bool target_restore_za_record(CPUARMState *env,
338
+ *** SVE Permute - Predicates Group
137
+ struct target_za_context *za,
339
+ */
138
+ int size, int *svcr)
340
+
341
+static bool do_perm_pred3(DisasContext *s, arg_rrr_esz *a, bool high_odd,
342
+ gen_helper_gvec_3 *fn)
343
+{
139
+{
344
+ if (!sve_access_check(s)) {
140
+ int i, j, vl, vq;
141
+
142
+ if (!cpu_isar_feature(aa64_sme, env_archcpu(env))) {
143
+ return false;
144
+ }
145
+
146
+ __get_user(vl, &za->vl);
147
+ vq = sme_vq(env);
148
+
149
+ /* Reject mismatched VL. */
150
+ if (vl != vq * TARGET_SVE_VQ_BYTES) {
151
+ return false;
152
+ }
153
+
154
+ /* Accept empty record -- used to clear PSTATE.ZA. */
155
+ if (size <= TARGET_ZA_SIG_CONTEXT_SIZE(0)) {
345
+ return true;
156
+ return true;
346
+ }
157
+ }
347
+
158
+
348
+ unsigned vsz = pred_full_reg_size(s);
159
+ /* Reject non-empty but incomplete record. */
349
+
160
+ if (size < TARGET_ZA_SIG_CONTEXT_SIZE(vq)) {
350
+ /* Predicate sizes may be smaller and cannot use simd_desc.
161
+ return false;
351
+ We cannot round up, as we do elsewhere, because we need
162
+ }
352
+ the exact size for ZIP2 and REV. We retain the style for
163
+
353
+ the other helpers for consistency. */
164
+ *svcr = FIELD_DP64(*svcr, SVCR, ZA, 1);
354
+ TCGv_ptr t_d = tcg_temp_new_ptr();
165
+
355
+ TCGv_ptr t_n = tcg_temp_new_ptr();
166
+ for (i = 0; i < vl; ++i) {
356
+ TCGv_ptr t_m = tcg_temp_new_ptr();
167
+ uint64_t *z = (void *)za + TARGET_ZA_SIG_ZAV_OFFSET(vq, i);
357
+ TCGv_i32 t_desc;
168
+ for (j = 0; j < vq * 2; ++j) {
358
+ int desc;
169
+ __get_user_e(env->zarray[i].d[j], z + j, le);
359
+
170
+ }
360
+ desc = vsz - 2;
171
+ }
361
+ desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
362
+ desc = deposit32(desc, SIMD_DATA_SHIFT + 2, 2, high_odd);
363
+
364
+ tcg_gen_addi_ptr(t_d, cpu_env, pred_full_reg_offset(s, a->rd));
365
+ tcg_gen_addi_ptr(t_n, cpu_env, pred_full_reg_offset(s, a->rn));
366
+ tcg_gen_addi_ptr(t_m, cpu_env, pred_full_reg_offset(s, a->rm));
367
+ t_desc = tcg_const_i32(desc);
368
+
369
+ fn(t_d, t_n, t_m, t_desc);
370
+
371
+ tcg_temp_free_ptr(t_d);
372
+ tcg_temp_free_ptr(t_n);
373
+ tcg_temp_free_ptr(t_m);
374
+ tcg_temp_free_i32(t_desc);
375
+ return true;
172
+ return true;
376
+}
173
+}
377
+
174
+
378
+static bool do_perm_pred2(DisasContext *s, arg_rr_esz *a, bool high_odd,
175
static int target_restore_sigframe(CPUARMState *env,
379
+ gen_helper_gvec_2 *fn)
176
struct target_rt_sigframe *sf)
380
+{
177
{
381
+ if (!sve_access_check(s)) {
178
struct target_aarch64_ctx *ctx, *extra = NULL;
382
+ return true;
179
struct target_fpsimd_context *fpsimd = NULL;
383
+ }
180
struct target_sve_context *sve = NULL;
384
+
181
+ struct target_za_context *za = NULL;
385
+ unsigned vsz = pred_full_reg_size(s);
182
uint64_t extra_datap = 0;
386
+ TCGv_ptr t_d = tcg_temp_new_ptr();
183
bool used_extra = false;
387
+ TCGv_ptr t_n = tcg_temp_new_ptr();
184
int sve_size = 0;
388
+ TCGv_i32 t_desc;
185
+ int za_size = 0;
389
+ int desc;
186
+ int svcr = 0;
390
+
187
391
+ tcg_gen_addi_ptr(t_d, cpu_env, pred_full_reg_offset(s, a->rd));
188
target_restore_general_frame(env, sf);
392
+ tcg_gen_addi_ptr(t_n, cpu_env, pred_full_reg_offset(s, a->rn));
189
393
+
190
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
394
+ /* Predicate sizes may be smaller and cannot use simd_desc.
191
sve_size = size;
395
+ We cannot round up, as we do elsewhere, because we need
192
break;
396
+ the exact size for ZIP2 and REV. We retain the style for
193
397
+ the other helpers for consistency. */
194
+ case TARGET_ZA_MAGIC:
398
+
195
+ if (za || size < sizeof(struct target_za_context)) {
399
+ desc = vsz - 2;
196
+ goto err;
400
+ desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
197
+ }
401
+ desc = deposit32(desc, SIMD_DATA_SHIFT + 2, 2, high_odd);
198
+ za = (struct target_za_context *)ctx;
402
+ t_desc = tcg_const_i32(desc);
199
+ za_size = size;
403
+
200
+ break;
404
+ fn(t_d, t_n, t_desc);
201
+
405
+
202
case TARGET_EXTRA_MAGIC:
406
+ tcg_temp_free_i32(t_desc);
203
if (extra || size != sizeof(struct target_extra_context)) {
407
+ tcg_temp_free_ptr(t_d);
204
goto err;
408
+ tcg_temp_free_ptr(t_n);
205
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
409
+ return true;
206
}
410
+}
207
411
+
208
/* SVE data, if present, overwrites FPSIMD data. */
412
+static bool trans_ZIP1_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
209
- if (sve && !target_restore_sve_record(env, sve, sve_size)) {
413
+{
210
+ if (sve && !target_restore_sve_record(env, sve, sve_size, &svcr)) {
414
+ return do_perm_pred3(s, a, 0, gen_helper_sve_zip_p);
211
goto err;
415
+}
212
}
416
+
213
+ if (za && !target_restore_za_record(env, za, za_size, &svcr)) {
417
+static bool trans_ZIP2_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
214
+ goto err;
418
+{
215
+ }
419
+ return do_perm_pred3(s, a, 1, gen_helper_sve_zip_p);
216
+ if (env->svcr != svcr) {
420
+}
217
+ env->svcr = svcr;
421
+
218
+ arm_rebuild_hflags(env);
422
+static bool trans_UZP1_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
219
+ }
423
+{
220
unlock_user(extra, extra_datap, 0);
424
+ return do_perm_pred3(s, a, 0, gen_helper_sve_uzp_p);
221
return 0;
425
+}
222
426
+
223
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
427
+static bool trans_UZP2_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
224
.total_size = offsetof(struct target_rt_sigframe,
428
+{
225
uc.tuc_mcontext.__reserved),
429
+ return do_perm_pred3(s, a, 1, gen_helper_sve_uzp_p);
226
};
430
+}
227
- int fpsimd_ofs, fr_ofs, sve_ofs = 0, vq = 0, sve_size = 0;
431
+
228
+ int fpsimd_ofs, fr_ofs, sve_ofs = 0, za_ofs = 0;
432
+static bool trans_TRN1_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
229
+ int sve_size = 0, za_size = 0;
433
+{
230
struct target_rt_sigframe *frame;
434
+ return do_perm_pred3(s, a, 0, gen_helper_sve_trn_p);
231
struct target_rt_frame_record *fr;
435
+}
232
abi_ulong frame_addr, return_addr;
436
+
233
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
437
+static bool trans_TRN2_p(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
234
&layout);
438
+{
235
439
+ return do_perm_pred3(s, a, 1, gen_helper_sve_trn_p);
236
/* SVE state needs saving only if it exists. */
440
+}
237
- if (cpu_isar_feature(aa64_sve, env_archcpu(env))) {
441
+
238
- vq = sve_vq(env);
442
+static bool trans_REV_p(DisasContext *s, arg_rr_esz *a, uint32_t insn)
239
- sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
443
+{
240
+ if (cpu_isar_feature(aa64_sve, env_archcpu(env)) ||
444
+ return do_perm_pred2(s, a, 0, gen_helper_sve_rev_p);
241
+ cpu_isar_feature(aa64_sme, env_archcpu(env))) {
445
+}
242
+ sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(sve_vq(env)), 16);
446
+
243
sve_ofs = alloc_sigframe_space(sve_size, &layout);
447
+static bool trans_PUNPKLO(DisasContext *s, arg_PUNPKLO *a, uint32_t insn)
244
}
448
+{
245
+ if (cpu_isar_feature(aa64_sme, env_archcpu(env))) {
449
+ return do_perm_pred2(s, a, 0, gen_helper_sve_punpk_p);
246
+ /* ZA state needs saving only if it is enabled. */
450
+}
247
+ if (FIELD_EX64(env->svcr, SVCR, ZA)) {
451
+
248
+ za_size = TARGET_ZA_SIG_CONTEXT_SIZE(sme_vq(env));
452
+static bool trans_PUNPKHI(DisasContext *s, arg_PUNPKHI *a, uint32_t insn)
249
+ } else {
453
+{
250
+ za_size = TARGET_ZA_SIG_CONTEXT_SIZE(0);
454
+ return do_perm_pred2(s, a, 1, gen_helper_sve_punpk_p);
251
+ }
455
+}
252
+ za_ofs = alloc_sigframe_space(za_size, &layout);
456
+
253
+ }
457
/*
254
458
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
255
if (layout.extra_ofs) {
459
*/
256
/* Reserve space for the extra end marker. The standard end marker
460
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
257
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
461
index XXXXXXX..XXXXXXX 100644
258
target_setup_end_record((void *)frame + layout.extra_end_ofs);
462
--- a/target/arm/sve.decode
259
}
463
+++ b/target/arm/sve.decode
260
if (sve_ofs) {
464
@@ -XXX,XX +XXX,XX @@
261
- target_setup_sve_record((void *)frame + sve_ofs, env, vq, sve_size);
465
262
+ target_setup_sve_record((void *)frame + sve_ofs, env, sve_size);
466
# Three operand, vector element size
263
+ }
467
@rd_rn_rm ........ esz:2 . rm:5 ... ... rn:5 rd:5 &rrr_esz
264
+ if (za_ofs) {
468
+@pd_pn_pm ........ esz:2 .. rm:4 ....... rn:4 . rd:4 &rrr_esz
265
+ target_setup_za_record((void *)frame + za_ofs, env, za_size);
469
@rdn_rm ........ esz:2 ...... ...... rm:5 rd:5 \
266
}
470
&rrr_esz rn=%reg_movprfx
267
471
268
/* Set up the stack frame for unwinding. */
472
@@ -XXX,XX +XXX,XX @@ TBL 00000101 .. 1 ..... 001100 ..... ..... @rd_rn_rm
269
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
473
# SVE unpack vector elements
270
env->btype = 2;
474
UNPK 00000101 esz:2 1100 u:1 h:1 001110 rn:5 rd:5
271
}
475
272
476
+### SVE Permute - Predicates Group
273
+ /*
477
+
274
+ * Invoke the signal handler with both SM and ZA disabled.
478
+# SVE permute predicate elements
275
+ * When clearing SM, ResetSVEState, per SMSTOP.
479
+ZIP1_p 00000101 .. 10 .... 010 000 0 .... 0 .... @pd_pn_pm
276
+ */
480
+ZIP2_p 00000101 .. 10 .... 010 001 0 .... 0 .... @pd_pn_pm
277
+ if (FIELD_EX64(env->svcr, SVCR, SM)) {
481
+UZP1_p 00000101 .. 10 .... 010 010 0 .... 0 .... @pd_pn_pm
278
+ arm_reset_sve_state(env);
482
+UZP2_p 00000101 .. 10 .... 010 011 0 .... 0 .... @pd_pn_pm
279
+ }
483
+TRN1_p 00000101 .. 10 .... 010 100 0 .... 0 .... @pd_pn_pm
280
+ if (env->svcr) {
484
+TRN2_p 00000101 .. 10 .... 010 101 0 .... 0 .... @pd_pn_pm
281
+ env->svcr = 0;
485
+
282
+ arm_rebuild_hflags(env);
486
+# SVE reverse predicate elements
283
+ }
487
+REV_p 00000101 .. 11 0100 010 000 0 .... 0 .... @pd_pn
284
+
488
+
285
if (info) {
489
+# SVE unpack predicate elements
286
tswap_siginfo(&frame->info, info);
490
+PUNPKLO 00000101 00 11 0000 010 000 0 .... 0 .... @pd_pn_e0
287
env->xregs[1] = frame_addr + offsetof(struct target_rt_sigframe, info);
491
+PUNPKHI 00000101 00 11 0001 010 000 0 .... 0 .... @pd_pn_e0
492
+
493
### SVE Predicate Logical Operations Group
494
495
# SVE predicate logical operations
496
--
288
--
497
2.17.1
289
2.25.1
498
499
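
A worked size example for the ZA record introduced above, assuming
SVL = 32 bytes (VQ = 2): the 16-byte header is padded to
TARGET_SVE_VQ_BYTES, and the payload is one SVL-byte row vector for
each of the SVL rows of ZA, so

    TARGET_ZA_SIG_REGS_OFFSET      = 16
    TARGET_ZA_SIG_ZAV_OFFSET(2, N) = 16 + 32 * N
    TARGET_ZA_SIG_CONTEXT_SIZE(2)  = 16 + 32 * 32 = 1040 bytes

while TARGET_ZA_SIG_CONTEXT_SIZE(0) is just the bare header, which is
how an empty record (PSTATE.ZA clear) is encoded.
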
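
The predicate permutes above are built on the two bit-shuffling
helpers; a worked example for esz = 0 (both helpers are file-local to
sve_helper.c, so this is illustrative rather than separately
compilable):

    assert(expand_bits(0xf, 0) == 0x55);    /* 0b1111 -> 0b01010101 */
    assert(compress_bits(0x55, 0) == 0xf);  /* and back again */

sve_zip_p then interleaves two expanded operands: for esz = 0 the
short-vector case computes d[0] = nn + (mm << 1), giving the usual
even/odd zip of the two predicates.
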
1
Convert the pflash_cfi02 device away from using the old_mmio field
1
From: Richard Henderson <richard.henderson@linaro.org>
2
of MemoryRegionOps.
3
2
3
Add "sve" to the sve prctl functions, to distinguish
4
them from the coming "sme" prctls with similar names.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20220708151540.18136-42-richard.henderson@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Acked-by: Max Reitz <mreitz@redhat.com>
7
Message-id: 20180601141223.26630-4-peter.maydell@linaro.org
8
---
10
---
9
hw/block/pflash_cfi02.c | 97 ++++++++---------------------------------
11
linux-user/aarch64/target_prctl.h | 8 ++++----
10
1 file changed, 18 insertions(+), 79 deletions(-)
12
linux-user/syscall.c | 12 ++++++------
13
2 files changed, 10 insertions(+), 10 deletions(-)
11
14
12
diff --git a/hw/block/pflash_cfi02.c b/hw/block/pflash_cfi02.c
15
diff --git a/linux-user/aarch64/target_prctl.h b/linux-user/aarch64/target_prctl.h
13
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
14
--- a/hw/block/pflash_cfi02.c
17
--- a/linux-user/aarch64/target_prctl.h
15
+++ b/hw/block/pflash_cfi02.c
18
+++ b/linux-user/aarch64/target_prctl.h
16
@@ -XXX,XX +XXX,XX @@ static void pflash_write (pflash_t *pfl, hwaddr offset,
19
@@ -XXX,XX +XXX,XX @@
17
pfl->cmd = 0;
20
#ifndef AARCH64_TARGET_PRCTL_H
21
#define AARCH64_TARGET_PRCTL_H
22
23
-static abi_long do_prctl_get_vl(CPUArchState *env)
24
+static abi_long do_prctl_sve_get_vl(CPUArchState *env)
25
{
26
ARMCPU *cpu = env_archcpu(env);
27
if (cpu_isar_feature(aa64_sve, cpu)) {
28
@@ -XXX,XX +XXX,XX @@ static abi_long do_prctl_get_vl(CPUArchState *env)
29
}
30
return -TARGET_EINVAL;
18
}
31
}
19
32
-#define do_prctl_get_vl do_prctl_get_vl
20
-
33
+#define do_prctl_sve_get_vl do_prctl_sve_get_vl
21
-static uint32_t pflash_readb_be(void *opaque, hwaddr addr)
34
22
+static uint64_t pflash_be_readfn(void *opaque, hwaddr addr, unsigned size)
35
-static abi_long do_prctl_set_vl(CPUArchState *env, abi_long arg2)
36
+static abi_long do_prctl_sve_set_vl(CPUArchState *env, abi_long arg2)
23
{
37
{
24
- return pflash_read(opaque, addr, 1, 1);
38
/*
25
+ return pflash_read(opaque, addr, size, 1);
39
* We cannot support either PR_SVE_SET_VL_ONEXEC or PR_SVE_VL_INHERIT.
40
@@ -XXX,XX +XXX,XX @@ static abi_long do_prctl_set_vl(CPUArchState *env, abi_long arg2)
41
}
42
return -TARGET_EINVAL;
26
}
43
}
27
44
-#define do_prctl_set_vl do_prctl_set_vl
28
-static uint32_t pflash_readb_le(void *opaque, hwaddr addr)
45
+#define do_prctl_sve_set_vl do_prctl_sve_set_vl
29
+static void pflash_be_writefn(void *opaque, hwaddr addr,
46
30
+ uint64_t value, unsigned size)
47
static abi_long do_prctl_reset_keys(CPUArchState *env, abi_long arg2)
31
{
48
{
32
- return pflash_read(opaque, addr, 1, 0);
49
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
33
+ pflash_write(opaque, addr, value, size, 1);
50
index XXXXXXX..XXXXXXX 100644
34
}
51
--- a/linux-user/syscall.c
35
52
+++ b/linux-user/syscall.c
36
-static uint32_t pflash_readw_be(void *opaque, hwaddr addr)
53
@@ -XXX,XX +XXX,XX @@ static abi_long do_prctl_inval1(CPUArchState *env, abi_long arg2)
37
+static uint64_t pflash_le_readfn(void *opaque, hwaddr addr, unsigned size)
54
#ifndef do_prctl_set_fp_mode
38
{
55
#define do_prctl_set_fp_mode do_prctl_inval1
39
- pflash_t *pfl = opaque;
56
#endif
40
-
57
-#ifndef do_prctl_get_vl
41
- return pflash_read(pfl, addr, 2, 1);
58
-#define do_prctl_get_vl do_prctl_inval0
42
+ return pflash_read(opaque, addr, size, 0);
59
+#ifndef do_prctl_sve_get_vl
43
}
60
+#define do_prctl_sve_get_vl do_prctl_inval0
44
61
#endif
45
-static uint32_t pflash_readw_le(void *opaque, hwaddr addr)
62
-#ifndef do_prctl_set_vl
46
+static void pflash_le_writefn(void *opaque, hwaddr addr,
63
-#define do_prctl_set_vl do_prctl_inval1
47
+ uint64_t value, unsigned size)
64
+#ifndef do_prctl_sve_set_vl
48
{
65
+#define do_prctl_sve_set_vl do_prctl_inval1
49
- pflash_t *pfl = opaque;
66
#endif
50
-
67
#ifndef do_prctl_reset_keys
51
- return pflash_read(pfl, addr, 2, 0);
68
#define do_prctl_reset_keys do_prctl_inval1
52
-}
69
@@ -XXX,XX +XXX,XX @@ static abi_long do_prctl(CPUArchState *env, abi_long option, abi_long arg2,
53
-
70
case PR_SET_FP_MODE:
54
-static uint32_t pflash_readl_be(void *opaque, hwaddr addr)
71
return do_prctl_set_fp_mode(env, arg2);
55
-{
72
case PR_SVE_GET_VL:
56
- pflash_t *pfl = opaque;
73
- return do_prctl_get_vl(env);
57
-
74
+ return do_prctl_sve_get_vl(env);
58
- return pflash_read(pfl, addr, 4, 1);
75
case PR_SVE_SET_VL:
59
-}
76
- return do_prctl_set_vl(env, arg2);
60
-
77
+ return do_prctl_sve_set_vl(env, arg2);
61
-static uint32_t pflash_readl_le(void *opaque, hwaddr addr)
78
case PR_PAC_RESET_KEYS:
62
-{
79
if (arg3 || arg4 || arg5) {
63
- pflash_t *pfl = opaque;
80
return -TARGET_EINVAL;
64
-
65
- return pflash_read(pfl, addr, 4, 0);
66
-}
67
-
68
-static void pflash_writeb_be(void *opaque, hwaddr addr,
69
- uint32_t value)
70
-{
71
- pflash_write(opaque, addr, value, 1, 1);
72
-}
73
-
74
-static void pflash_writeb_le(void *opaque, hwaddr addr,
75
- uint32_t value)
76
-{
77
- pflash_write(opaque, addr, value, 1, 0);
78
-}
79
-
80
-static void pflash_writew_be(void *opaque, hwaddr addr,
81
- uint32_t value)
82
-{
83
- pflash_t *pfl = opaque;
84
-
85
- pflash_write(pfl, addr, value, 2, 1);
86
-}
87
-
88
-static void pflash_writew_le(void *opaque, hwaddr addr,
89
- uint32_t value)
90
-{
91
- pflash_t *pfl = opaque;
92
-
93
- pflash_write(pfl, addr, value, 2, 0);
94
-}
95
-
96
-static void pflash_writel_be(void *opaque, hwaddr addr,
97
- uint32_t value)
98
-{
99
- pflash_t *pfl = opaque;
100
-
101
- pflash_write(pfl, addr, value, 4, 1);
102
-}
103
-
104
-static void pflash_writel_le(void *opaque, hwaddr addr,
105
- uint32_t value)
106
-{
107
- pflash_t *pfl = opaque;
108
-
109
- pflash_write(pfl, addr, value, 4, 0);
110
+ pflash_write(opaque, addr, value, size, 0);
111
}
112
113
static const MemoryRegionOps pflash_cfi02_ops_be = {
114
- .old_mmio = {
115
- .read = { pflash_readb_be, pflash_readw_be, pflash_readl_be, },
116
- .write = { pflash_writeb_be, pflash_writew_be, pflash_writel_be, },
117
- },
118
+ .read = pflash_be_readfn,
119
+ .write = pflash_be_writefn,
120
+ .valid.min_access_size = 1,
121
+ .valid.max_access_size = 4,
122
.endianness = DEVICE_NATIVE_ENDIAN,
123
};
124
125
static const MemoryRegionOps pflash_cfi02_ops_le = {
126
- .old_mmio = {
127
- .read = { pflash_readb_le, pflash_readw_le, pflash_readl_le, },
128
- .write = { pflash_writeb_le, pflash_writew_le, pflash_writel_le, },
129
- },
130
+ .read = pflash_le_readfn,
131
+ .write = pflash_le_writefn,
132
+ .valid.min_access_size = 1,
133
+ .valid.max_access_size = 4,
134
.endianness = DEVICE_NATIVE_ENDIAN,
135
};
136
137
--
81
--
138
2.17.1
82
2.25.1
139
140
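
From userspace the renamed prctls behave as before; a minimal sketch
using the standard <linux/prctl.h> constants:

    #include <sys/prctl.h>

    int ret = prctl(PR_SVE_GET_VL);
    if (ret >= 0) {
        int vl = ret & PR_SVE_VL_LEN_MASK;  /* vector length in bytes */
        prctl(PR_SVE_SET_VL, vl);           /* setting the same VL is a no-op */
    }
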
1
From: Shannon Zhao <zhaoshenglong@huawei.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
While the for_each_dist_irq_reg loop starts from GIC_INTERNAL, it forgot to
3
These prctls set the Streaming SVE vector length, which may
4
offset the data array and index. This will overwrite the GICR registers'
4
be completely different from the Normal SVE vector length.
5
values and leave the last GIC_INTERNAL irqs' registers not updated.
6
5
7
Fixes: 367b9f527becdd20ddf116e17a3c0c2bbc486920
8
Cc: qemu-stable@nongnu.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Eric Auger <eric.auger@redhat.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
8
Message-id: 20220708151540.18136-43-richard.henderson@linaro.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
10
---
14
hw/intc/arm_gicv3_kvm.c | 18 ++++++++++++++++--
11
linux-user/aarch64/target_prctl.h | 54 +++++++++++++++++++++++++++++++
15
1 file changed, 16 insertions(+), 2 deletions(-)
12
linux-user/syscall.c | 16 +++++++++
13
2 files changed, 70 insertions(+)
16
14
17
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
15
diff --git a/linux-user/aarch64/target_prctl.h b/linux-user/aarch64/target_prctl.h
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/intc/arm_gicv3_kvm.c
17
--- a/linux-user/aarch64/target_prctl.h
20
+++ b/hw/intc/arm_gicv3_kvm.c
18
+++ b/linux-user/aarch64/target_prctl.h
21
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_get_priority(GICv3State *s, uint32_t offset, uint8_t *bmp)
19
@@ -XXX,XX +XXX,XX @@ static abi_long do_prctl_sve_get_vl(CPUArchState *env)
22
uint32_t reg, *field;
20
{
23
int irq;
21
ARMCPU *cpu = env_archcpu(env);
24
22
if (cpu_isar_feature(aa64_sve, cpu)) {
25
- field = (uint32_t *)bmp;
23
+ /* PSTATE.SM is always unset on syscall entry. */
26
+ /* For the KVM GICv3, affinity routing is always enabled, and the first 8
24
return sve_vq(env) * 16;
27
+ * GICD_IPRIORITYR<n> registers are always RAZ/WI. The corresponding
25
}
28
+ * functionality is replaced by GICR_IPRIORITYR<n>. It doesn't need to
26
return -TARGET_EINVAL;
29
+ * sync them. So it needs to skip the field of GIC_INTERNAL irqs in bmp and
27
@@ -XXX,XX +XXX,XX @@ static abi_long do_prctl_sve_set_vl(CPUArchState *env, abi_long arg2)
30
+ * offset.
28
&& arg2 >= 0 && arg2 <= 512 * 16 && !(arg2 & 15)) {
29
uint32_t vq, old_vq;
30
31
+ /* PSTATE.SM is always unset on syscall entry. */
32
old_vq = sve_vq(env);
33
34
/*
35
@@ -XXX,XX +XXX,XX @@ static abi_long do_prctl_sve_set_vl(CPUArchState *env, abi_long arg2)
36
}
37
#define do_prctl_sve_set_vl do_prctl_sve_set_vl
38
39
+static abi_long do_prctl_sme_get_vl(CPUArchState *env)
40
+{
41
+ ARMCPU *cpu = env_archcpu(env);
42
+ if (cpu_isar_feature(aa64_sme, cpu)) {
43
+ return sme_vq(env) * 16;
44
+ }
45
+ return -TARGET_EINVAL;
46
+}
47
+#define do_prctl_sme_get_vl do_prctl_sme_get_vl
48
+
49
+static abi_long do_prctl_sme_set_vl(CPUArchState *env, abi_long arg2)
50
+{
51
+ /*
52
+ * We cannot support either PR_SME_SET_VL_ONEXEC or PR_SME_VL_INHERIT.
53
+ * Note the kernel definition of sve_vl_valid allows for VQ=512,
54
+ * i.e. VL=8192, even though the architectural maximum is VQ=16.
31
+ */
55
+ */
32
+ field = (uint32_t *)(bmp + GIC_INTERNAL);
56
+ if (cpu_isar_feature(aa64_sme, env_archcpu(env))
33
+ offset += (GIC_INTERNAL * 8) / 8;
57
+ && arg2 >= 0 && arg2 <= 512 * 16 && !(arg2 & 15)) {
34
for_each_dist_irq_reg(irq, s->num_irq, 8) {
58
+ int vq, old_vq;
35
kvm_gicd_access(s, offset, &reg, false);
59
+
36
*field = reg;
60
+ old_vq = sme_vq(env);
37
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_put_priority(GICv3State *s, uint32_t offset, uint8_t *bmp)
61
+
38
uint32_t reg, *field;
62
+ /*
39
int irq;
63
+ * Bound the value of vq, so that we know that it fits into
40
64
+ * the 4-bit field in SMCR_EL1. Because PSTATE.SM is cleared
41
- field = (uint32_t *)bmp;
65
+ * on syscall entry, we are not modifying the current SVE
42
+ /* For the KVM GICv3, affinity routing is always enabled, and the first 8
66
+ * vector length.
43
+ * GICD_IPRIORITYR<n> registers are always RAZ/WI. The corresponding
67
+ */
44
+ * functionality is replaced by GICR_IPRIORITYR<n>. It doesn't need to
68
+ vq = MAX(arg2 / 16, 1);
45
+ * sync them. So it needs to skip the field of GIC_INTERNAL irqs in bmp and
69
+ vq = MIN(vq, 16);
46
+ * offset.
70
+ env->vfp.smcr_el[1] =
47
+ */
71
+ FIELD_DP64(env->vfp.smcr_el[1], SMCR, LEN, vq - 1);
48
+ field = (uint32_t *)(bmp + GIC_INTERNAL);
72
+
49
+ offset += (GIC_INTERNAL * 8) / 8;
73
+ /* Delay rebuilding hflags until we know if ZA must change. */
50
for_each_dist_irq_reg(irq, s->num_irq, 8) {
74
+ vq = sve_vqm1_for_el_sm(env, 0, true) + 1;
51
reg = *field;
75
+
52
kvm_gicd_access(s, offset, &reg, true);
76
+ if (vq != old_vq) {
77
+ /*
78
+ * PSTATE.ZA state is cleared on any change to SVL.
79
+ * We need not call arm_rebuild_hflags because PSTATE.SM was
80
+ * cleared on syscall entry, so this hasn't changed VL.
81
+ */
82
+ env->svcr = FIELD_DP64(env->svcr, SVCR, ZA, 0);
83
+ arm_rebuild_hflags(env);
84
+ }
85
+ return vq * 16;
86
+ }
87
+ return -TARGET_EINVAL;
88
+}
89
+#define do_prctl_sme_set_vl do_prctl_sme_set_vl
90
+
91
static abi_long do_prctl_reset_keys(CPUArchState *env, abi_long arg2)
92
{
93
ARMCPU *cpu = env_archcpu(env);
94
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
95
index XXXXXXX..XXXXXXX 100644
96
--- a/linux-user/syscall.c
97
+++ b/linux-user/syscall.c
98
@@ -XXX,XX +XXX,XX @@ abi_long do_arch_prctl(CPUX86State *env, int code, abi_ulong addr)
99
#ifndef PR_SET_SYSCALL_USER_DISPATCH
100
# define PR_SET_SYSCALL_USER_DISPATCH 59
101
#endif
102
+#ifndef PR_SME_SET_VL
103
+# define PR_SME_SET_VL 63
104
+# define PR_SME_GET_VL 64
105
+# define PR_SME_VL_LEN_MASK 0xffff
106
+# define PR_SME_VL_INHERIT (1 << 17)
107
+#endif
108
109
#include "target_prctl.h"
110
111
@@ -XXX,XX +XXX,XX @@ static abi_long do_prctl_inval1(CPUArchState *env, abi_long arg2)
112
#ifndef do_prctl_set_unalign
113
#define do_prctl_set_unalign do_prctl_inval1
114
#endif
115
+#ifndef do_prctl_sme_get_vl
116
+#define do_prctl_sme_get_vl do_prctl_inval0
117
+#endif
118
+#ifndef do_prctl_sme_set_vl
119
+#define do_prctl_sme_set_vl do_prctl_inval1
120
+#endif
121
122
static abi_long do_prctl(CPUArchState *env, abi_long option, abi_long arg2,
123
abi_long arg3, abi_long arg4, abi_long arg5)
124
@@ -XXX,XX +XXX,XX @@ static abi_long do_prctl(CPUArchState *env, abi_long option, abi_long arg2,
125
return do_prctl_sve_get_vl(env);
126
case PR_SVE_SET_VL:
127
return do_prctl_sve_set_vl(env, arg2);
128
+ case PR_SME_GET_VL:
129
+ return do_prctl_sme_get_vl(env);
130
+ case PR_SME_SET_VL:
131
+ return do_prctl_sme_set_vl(env, arg2);
132
case PR_PAC_RESET_KEYS:
133
if (arg3 || arg4 || arg5) {
134
return -TARGET_EINVAL;
53
--
135
--
54
2.17.1
136
2.25.1
55
56
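
The arithmetic behind the GICv3 fix above: priorities are one byte per
interrupt, so skipping the GIC_INTERNAL (32) private interrupts means
advancing both the bitmap pointer and the register offset by 32 bytes
before the sync loop starts:

    field   = (uint32_t *)(bmp + GIC_INTERNAL);   /* bmp + 32 bytes */
    offset += (GIC_INTERNAL * 8) / 8;             /* offset + 32    */
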
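
A userspace sketch of the new SME prctls, using the fallback values the
patch defines for the case where <linux/prctl.h> predates SME (the
prctl fails with EINVAL when SME is absent):

    #include <sys/prctl.h>

    #ifndef PR_SME_SET_VL
    # define PR_SME_SET_VL 63
    # define PR_SME_GET_VL 64
    # define PR_SME_VL_LEN_MASK 0xffff
    #endif

    int svl = prctl(PR_SME_GET_VL);
    if (svl >= 0) {
        /* request a streaming vector length of 32 bytes (VQ = 2) */
        prctl(PR_SME_SET_VL, 32);
    }
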
1
The Cortex-M CPU and its NVIC are two intimately intertwined parts of
1
From: Richard Henderson <richard.henderson@linaro.org>
2
the same hardware; it is not possible to use one without the other.
3
Unfortunately a lot of our board models don't do any sanity checking
4
on the CPU type the user asks for, so a command line like
5
qemu-system-arm -M versatilepb -cpu cortex-m3
6
will create an M3 without an NVIC, and dump core immediately.
7
In the other direction, trying a non-M-profile CPU in an M-profile
8
board won't blow up, but doesn't do anything useful either:
9
qemu-system-arm -M lm3s6965evb -cpu arm926
10
2
11
Add some checking in the NVIC and CPU realize functions that the
3
There's no reason to set CPACR_EL1.ZEN if SVE is disabled.
12
user isn't trying to use an NVIC without an M-profile CPU or
13
an M-profile CPU without an NVIC, so we can produce a helpful
14
error message rather than a core dump.
15
4
16
Fixes: https://bugs.launchpad.net/qemu/+bug/1766896
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20220708151540.18136-44-richard.henderson@linaro.org
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
19
Message-id: 20180601160355.15393-1-peter.maydell@linaro.org
20
---
9
---
21
hw/arm/armv7m.c | 7 ++++++-
10
target/arm/cpu.c | 7 +++----
22
hw/intc/armv7m_nvic.c | 6 +++++-
11
1 file changed, 3 insertions(+), 4 deletions(-)
23
target/arm/cpu.c | 18 ++++++++++++++++++
24
3 files changed, 29 insertions(+), 2 deletions(-)
25
12
26
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/hw/arm/armv7m.c
29
+++ b/hw/arm/armv7m.c
30
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
31
return;
32
}
33
}
34
+
35
+ /* Tell the CPU where the NVIC is; it will fail realize if it doesn't
36
+ * have one.
37
+ */
38
+ s->cpu->env.nvic = &s->nvic;
39
+
40
object_property_set_bool(OBJECT(s->cpu), true, "realized", &err);
41
if (err != NULL) {
42
error_propagate(errp, err);
43
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
44
sbd = SYS_BUS_DEVICE(&s->nvic);
45
sysbus_connect_irq(sbd, 0,
46
qdev_get_gpio_in(DEVICE(s->cpu), ARM_CPU_IRQ));
47
- s->cpu->env.nvic = &s->nvic;
48
49
memory_region_add_subregion(&s->container, 0xe000e000,
50
sysbus_mmio_get_region(sbd, 0));
51
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/hw/intc/armv7m_nvic.c
54
+++ b/hw/intc/armv7m_nvic.c
55
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
56
int regionlen;
57
58
s->cpu = ARM_CPU(qemu_get_cpu(0));
59
- assert(s->cpu);
60
+
61
+ if (!s->cpu || !arm_feature(&s->cpu->env, ARM_FEATURE_M)) {
62
+ error_setg(errp, "The NVIC can only be used with a Cortex-M CPU");
63
+ return;
64
+ }
65
66
if (s->num_irq > NVIC_MAX_IRQ) {
67
error_setg(errp, "num-irq %d exceeds NVIC maximum", s->num_irq);
68
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
13
From: Richard Henderson <richard.henderson@linaro.org>

There's no reason to set CPACR_EL1.ZEN if SVE is disabled.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-44-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
         /* and to the FP/Neon instructions */
         env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
                                          CPACR_EL1, FPEN, 3);
-        /* and to the SVE instructions */
-        env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
-                                         CPACR_EL1, ZEN, 3);
-        /* with reasonable vector length */
+        /* and to the SVE instructions, with default vector length */
         if (cpu_isar_feature(aa64_sve, cpu)) {
+            env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
+                                             CPACR_EL1, ZEN, 3);
             env->vfp.zcr_el[1] = cpu->sve_default_vq - 1;
         }
         /*
--
2.25.1
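
For context: ZEN is the two-bit SVE enable field in CPACR_EL1, and the
value 3 means SVE instructions are not trapped at EL0 or EL1; it is
therefore only meaningful to set it when the CPU actually implements SVE,
which is what moving the FIELD_DP64() write inside the aa64_sve test does.
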
From: Julia Suvorova <jusual@mail.ru>

ARMv6-M supports 6 Thumb2 instructions. This patch checks for these
instructions and allows their execution.
Like Thumb2 cores, ARMv6-M always interprets the BL instruction as 32-bit.

This patch is required for future Cortex-M0 support.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180612204632.28780-1-jusual@mail.ru
[PMM: move armv6m_insn[] and armv6m_mask[] closer to
 point of use, and mark 'const'. Check for M-and-not-v7
 rather than M-and-6.]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 43 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 38 insertions(+), 5 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool thumb_insn_is_16bit(DisasContext *s, uint32_t insn)
      * end up actually treating this as two 16-bit insns, though,
      * if it's half of a bl/blx pair that might span a page boundary.
      */
-    if (arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
+    if (arm_dc_feature(s, ARM_FEATURE_THUMB2) ||
+        arm_dc_feature(s, ARM_FEATURE_M)) {
         /* Thumb2 cores (including all M profile ones) always treat
          * 32-bit insns as 32-bit.
          */
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
     int conds;
     int logic_cc;

-    /* The only 32 bit insn that's allowed for Thumb1 is the combined
-     * BL/BLX prefix and suffix.
+    /*
+     * ARMv6-M supports a limited subset of Thumb2 instructions.
+     * Other Thumb1 architectures allow only 32-bit
+     * combined BL/BLX prefix and suffix.
      */
-    if ((insn & 0xf800e800) != 0xf000e800) {
+    if (arm_dc_feature(s, ARM_FEATURE_M) &&
+        !arm_dc_feature(s, ARM_FEATURE_V7)) {
+        int i;
+        bool found = false;
+        const uint32_t armv6m_insn[] = {0xf3808000 /* msr */,
+                                        0xf3b08040 /* dsb */,
+                                        0xf3b08050 /* dmb */,
+                                        0xf3b08060 /* isb */,
+                                        0xf3e08000 /* mrs */,
+                                        0xf000d000 /* bl */};
+        const uint32_t armv6m_mask[] = {0xffe0d000,
+                                        0xfff0d0f0,
+                                        0xfff0d0f0,
+                                        0xfff0d0f0,
+                                        0xffe0d000,
+                                        0xf800d000};
+
+        for (i = 0; i < ARRAY_SIZE(armv6m_insn); i++) {
+            if ((insn & armv6m_mask[i]) == armv6m_insn[i]) {
+                found = true;
+                break;
+            }
+        }
+        if (!found) {
+            goto illegal_op;
+        }
+    } else if ((insn & 0xf800e800) != 0xf000e800) {
         ARCH(6T2);
     }

@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
         }
         break;
     case 3: /* Special control operations. */
-        ARCH(7);
+        if (!arm_dc_feature(s, ARM_FEATURE_V7) &&
+            !(arm_dc_feature(s, ARM_FEATURE_V6) &&
+              arm_dc_feature(s, ARM_FEATURE_M))) {
+            goto illegal_op;
+        }
         op = (insn >> 4) & 0xf;
         switch (op) {
         case 2: /* clrex */
--
2.17.1
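
As a worked example of the mask-and-match scheme above: the T1 encoding of
DSB SY is 0xf3bf8f4f, and ANDing it with its mask yields the dsb pattern,
so the instruction is accepted on v6-M (a quick sanity check, not part of
the patch):

    /* 0xf3bf8f4f & 0xfff0d0f0 == 0xf3b08040, the dsb entry above */
    assert((0xf3bf8f4f & 0xfff0d0f0) == 0xf3b08040);
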
From: Richard Henderson <richard.henderson@linaro.org>

Enable SME, TPIDR2_EL0, and FA64 if supported by the cpu.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-45-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
                                              CPACR_EL1, ZEN, 3);
             env->vfp.zcr_el[1] = cpu->sve_default_vq - 1;
         }
+        /* and for SME instructions, with default vector length, and TPIDR2 */
+        if (cpu_isar_feature(aa64_sme, cpu)) {
+            env->cp15.sctlr_el[1] |= SCTLR_EnTP2;
+            env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
+                                             CPACR_EL1, SMEN, 3);
+            env->vfp.smcr_el[1] = cpu->sme_default_vq - 1;
+            if (cpu_isar_feature(aa64_sme_fa64, cpu)) {
+                env->vfp.smcr_el[1] = FIELD_DP64(env->vfp.smcr_el[1],
+                                                 SMCR, FA64, 1);
+            }
+        }
         /*
          * Enable 48-bit address space (TODO: take reserved_va into account).
          * Enable TBI0 but not TBI1.
--
2.25.1
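
A note on the last hunk: SMCR_EL1.FA64 allows the full A64 instruction set
to remain usable while the PE is in Streaming SVE mode, which is why it is
only set when the aa64_sme_fa64 feature is actually present.
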
From: Joel Stanley <joel@jms.id.au>

The ASPEED SoCs contain a single register that returns random data when
read. This models that register so that guests can use it.

The random number data register has a corresponding control register,
however it returns data regardless of the state of the enabled bit, so
the model follows this behaviour.

When the qcrypto call fails we exit as the guest uses the random number
device to feed its entropy pool, which is used for cryptographic
purposes.

Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Joel Stanley <joel@jms.id.au>
Message-id: 20180613114836.9265-1-joel@jms.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/aspeed_scu.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/hw/misc/aspeed_scu.c b/hw/misc/aspeed_scu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/aspeed_scu.c
+++ b/hw/misc/aspeed_scu.c
@@ -XXX,XX +XXX,XX @@
 #include "qapi/visitor.h"
 #include "qemu/bitops.h"
 #include "qemu/log.h"
+#include "crypto/random.h"
 #include "trace.h"

 #define TO_REG(offset) ((offset) >> 2)
@@ -XXX,XX +XXX,XX @@ static const uint32_t ast2500_a1_resets[ASPEED_SCU_NR_REGS] = {
     [BMC_DEV_ID] = 0x00002402U
 };

+static uint32_t aspeed_scu_get_random(void)
+{
+    Error *err = NULL;
+    uint32_t num;
+
+    if (qcrypto_random_bytes((uint8_t *)&num, sizeof(num), &err)) {
+        error_report_err(err);
+        exit(1);
+    }
+
+    return num;
+}
+
 static uint64_t aspeed_scu_read(void *opaque, hwaddr offset, unsigned size)
 {
     AspeedSCUState *s = ASPEED_SCU(opaque);
@@ -XXX,XX +XXX,XX @@ static uint64_t aspeed_scu_read(void *opaque, hwaddr offset, unsigned size)
     }

     switch (reg) {
+    case RNG_DATA:
+        /* On hardware, RNG_DATA works regardless of
+         * the state of the enable bit in RNG_CTRL
+         */
+        s->regs[RNG_DATA] = aspeed_scu_get_random();
+        break;
     case WAKEUP_EN:
         qemu_log_mask(LOG_GUEST_ERROR,
                       "%s: Read of write-only offset 0x%" HWADDR_PRIx "\n",
--
2.17.1
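
The net effect is that each guest read of RNG_DATA returns a fresh 32-bit
value from the host's cryptographic RNG via qcrypto_random_bytes(), whether
or not the guest has set the enable bit in RNG_CTRL, matching the hardware
behaviour described above.
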
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-46-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-user/elfload.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ enum {
     ARM_HWCAP2_A64_RNG = 1 << 16,
     ARM_HWCAP2_A64_BTI = 1 << 17,
     ARM_HWCAP2_A64_MTE = 1 << 18,
+    ARM_HWCAP2_A64_ECV = 1 << 19,
+    ARM_HWCAP2_A64_AFP = 1 << 20,
+    ARM_HWCAP2_A64_RPRES = 1 << 21,
+    ARM_HWCAP2_A64_MTE3 = 1 << 22,
+    ARM_HWCAP2_A64_SME = 1 << 23,
+    ARM_HWCAP2_A64_SME_I16I64 = 1 << 24,
+    ARM_HWCAP2_A64_SME_F64F64 = 1 << 25,
+    ARM_HWCAP2_A64_SME_I8I32 = 1 << 26,
+    ARM_HWCAP2_A64_SME_F16F32 = 1 << 27,
+    ARM_HWCAP2_A64_SME_B16F32 = 1 << 28,
+    ARM_HWCAP2_A64_SME_F32F32 = 1 << 29,
+    ARM_HWCAP2_A64_SME_FA64 = 1 << 30,
 };

 #define ELF_HWCAP get_elf_hwcap()
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap2(void)
     GET_FEATURE_ID(aa64_rndr, ARM_HWCAP2_A64_RNG);
     GET_FEATURE_ID(aa64_bti, ARM_HWCAP2_A64_BTI);
     GET_FEATURE_ID(aa64_mte, ARM_HWCAP2_A64_MTE);
+    GET_FEATURE_ID(aa64_sme, (ARM_HWCAP2_A64_SME |
+                              ARM_HWCAP2_A64_SME_F32F32 |
+                              ARM_HWCAP2_A64_SME_B16F32 |
+                              ARM_HWCAP2_A64_SME_F16F32 |
+                              ARM_HWCAP2_A64_SME_I8I32));
+    GET_FEATURE_ID(aa64_sme_f64f64, ARM_HWCAP2_A64_SME_F64F64);
+    GET_FEATURE_ID(aa64_sme_i16i64, ARM_HWCAP2_A64_SME_I16I64);
+    GET_FEATURE_ID(aa64_sme_fa64, ARM_HWCAP2_A64_SME_FA64);

     return hwcaps;
 }
--
2.25.1
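
For reference, guest code would normally consume these bits through the
auxiliary vector; a minimal sketch (the bit position matches the
ARM_HWCAP2_A64_SME definition above):

    #include <stdio.h>
    #include <sys/auxv.h>

    int main(void)
    {
        /* AT_HWCAP2 carries the ARM_HWCAP2_A64_* bits set up by elfload.c */
        unsigned long hwcap2 = getauxval(AT_HWCAP2);
        printf("SME %ssupported\n", (hwcap2 & (1ul << 23)) ? "" : "not ");
        return 0;
    }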