Two small bugfixes, plus most of RTH's refactoring of cpregs
handling.

thanks
-- PMM

The following changes since commit 1fba9dc71a170b3a05b9d3272dd8ecfe7f26e215:

  Merge tag 'pull-request-2022-05-04' of https://gitlab.com/thuth/qemu into staging (2022-05-04 08:07:02 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220505

for you to fetch changes up to 99a50d1a67c602126fc2b3a4812d3000eba9bf34:

  target/arm: read access to performance counters from EL0 (2022-05-05 09:36:22 +0100)

----------------------------------------------------------------
target-arm queue:
 * Enable read access to performance counters from EL0
 * Enable SCTLR_EL1.BT0 for aarch64-linux-user
 * Refactoring of cpreg handling

----------------------------------------------------------------
Alex Zuepke (1):
      target/arm: read access to performance counters from EL0

Richard Henderson (22):
      target/arm: Enable SCTLR_EL1.BT0 for aarch64-linux-user
      target/arm: Split out cpregs.h
      target/arm: Reorg CPAccessResult and access_check_cp_reg
      target/arm: Replace sentinels with ARRAY_SIZE in cpregs.h
      target/arm: Make some more cpreg data static const
      target/arm: Reorg ARMCPRegInfo type field bits
      target/arm: Avoid bare abort() or assert(0)
      target/arm: Change cpreg access permissions to enum
      target/arm: Name CPState type
      target/arm: Name CPSecureState type
      target/arm: Drop always-true test in define_arm_vh_e2h_redirects_aliases
      target/arm: Store cpregs key in the hash table directly
      target/arm: Merge allocation of the cpreg and its name
      target/arm: Hoist computation of key in add_cpreg_to_hashtable
      target/arm: Consolidate cpreg updates in add_cpreg_to_hashtable
      target/arm: Use bool for is64 and ns in add_cpreg_to_hashtable
      target/arm: Hoist isbanked computation in add_cpreg_to_hashtable
      target/arm: Perform override check early in add_cpreg_to_hashtable
      target/arm: Reformat comments in add_cpreg_to_hashtable
      target/arm: Remove HOST_BIG_ENDIAN ifdef in add_cpreg_to_hashtable
      target/arm: Add isar predicates for FEAT_Debugv8p2
      target/arm: Add isar_feature_{aa64,any}_ras

 target/arm/cpregs.h               | 453 ++++++++++++++++++++++++++++++++++++++
 target/arm/cpu.h                  | 393 +++------------------------------
 hw/arm/pxa2xx.c                   |   2 +-
 hw/arm/pxa2xx_pic.c               |   2 +-
 hw/intc/arm_gicv3_cpuif.c         |   6 +-
 hw/intc/arm_gicv3_kvm.c           |   3 +-
 target/arm/cpu.c                  |  25 +--
 target/arm/cpu64.c                |   2 +-
 target/arm/cpu_tcg.c              |   5 +-
 target/arm/gdbstub.c              |   5 +-
 target/arm/helper.c               | 358 +++++++++++++-----------------
 target/arm/hvf/hvf.c              |   2 +-
 target/arm/kvm-stub.c             |   4 +-
 target/arm/kvm.c                  |   4 +-
 target/arm/machine.c              |   4 +-
 target/arm/op_helper.c            |  57 ++---
 target/arm/translate-a64.c        |  14 +-
 target/arm/translate-neon.c       |   2 +-
 target/arm/translate.c            |  13 +-
 tests/tcg/aarch64/bti-3.c         |  42 ++++
 tests/tcg/aarch64/Makefile.target |   6 +-
 21 files changed, 738 insertions(+), 664 deletions(-)
 create mode 100644 target/arm/cpregs.h
 create mode 100644 tests/tcg/aarch64/bti-3.c
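For anyone following the pull from the list archive, the tag above can be
fetched and reviewed with the usual commands (illustrative git usage, not
part of the original message; only the URL, tag and hash above are from it):

    git fetch https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220505
    git log --stat 1fba9dc71a170b3a05b9d3272dd8ecfe7f26e215..FETCH_HEAD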
Deleted patch

From: Markus Armbruster <armbru@redhat.com>

Device models aren't supposed to go on fishing expeditions for
backends.  They should expose suitable properties for the user to set.
For onboard devices, board code sets them.

Device ssi-sd picks up its block backend in its init() method with
drive_get_next() instead.  This mistake is already marked FIXME since
commit af9e40a.

Unset user_creatable to remove the mistake from our external
interface.  Since the SSI bus doesn't support hotplug, only -device
can be affected.  Only certain ARM machines have ssi-sd and provide an
SSI bus for it; this patch breaks -device ssi-sd for these machines.
No actual use of -device ssi-sd is known.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-id: 20181009060835.4608-1-armbru@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/sd/ssi-sd.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hw/sd/ssi-sd.c b/hw/sd/ssi-sd.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sd/ssi-sd.c
+++ b/hw/sd/ssi-sd.c
@@ -XXX,XX +XXX,XX @@ static void ssi_sd_class_init(ObjectClass *klass, void *data)
     k->cs_polarity = SSI_CS_LOW;
     dc->vmsd = &vmstate_ssi_sd;
     dc->reset = ssi_sd_reset;
+    /* Reason: init() method uses drive_get_next() */
+    dc->user_creatable = false;
 }
 
 static const TypeInfo ssi_sd_info = {
--
2.19.1
Deleted patch

From: Dongjiu Geng <gengdongjiu@huawei.com>

This patch extends the qemu-kvm state sync logic with support for
KVM_GET/SET_VCPU_EVENTS, giving access to the previously missing SError
exception.  It also supports migration of the exception state.

The SError exception state comprises the SError pending state and the
ESR value; kvm_put/get_vcpu_events() is called when the system registers
are set or fetched.  On migration, if the source machine has an SError
pending, QEMU migrates it regardless of whether the target machine
supports specifying a guest ESR value, because a target without that
support can still inject an SError with a zero ESR value.

Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1538067351-23931-3-git-send-email-gengdongjiu@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h     |  7 ++++++
 target/arm/kvm_arm.h | 24 ++++++++++++++++++
 target/arm/kvm.c     | 60 ++++++++++++++++++++++++++++++++++++++++++++
 target/arm/kvm32.c   | 13 ++++++++++
 target/arm/kvm64.c   | 13 ++++++++++
 target/arm/machine.c | 22 ++++++++++++++++
 6 files changed, 139 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
          */
     } exception;
 
+    /* Information associated with an SError */
+    struct {
+        uint8_t pending;
+        uint8_t has_esr;
+        uint64_t esr;
+    } serror;
+
     /* Thumb-2 EE state.  */
     uint32_t teecr;
     uint32_t teehbr;
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm_arm.h
+++ b/target/arm/kvm_arm.h
@@ -XXX,XX +XXX,XX @@ bool write_kvmstate_to_list(ARMCPU *cpu);
  */
 void kvm_arm_reset_vcpu(ARMCPU *cpu);
 
+/**
+ * kvm_arm_init_serror_injection:
+ * @cs: CPUState
+ *
+ * Check whether KVM can set guest SError syndrome.
+ */
+void kvm_arm_init_serror_injection(CPUState *cs);
+
+/**
+ * kvm_get_vcpu_events:
+ * @cpu: ARMCPU
+ *
+ * Get VCPU related state from kvm.
+ */
+int kvm_get_vcpu_events(ARMCPU *cpu);
+
+/**
+ * kvm_put_vcpu_events:
+ * @cpu: ARMCPU
+ *
+ * Put VCPU related state to kvm.
+ */
+int kvm_put_vcpu_events(ARMCPU *cpu);
+
 #ifdef CONFIG_KVM
 /**
  * kvm_arm_create_scratch_host_vcpu:
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ const KVMCapabilityInfo kvm_arch_required_capabilities[] = {
 };
 
 static bool cap_has_mp_state;
+static bool cap_has_inject_serror_esr;
 
 static ARMHostCPUFeatures arm_host_cpu_features;
 
@@ -XXX,XX +XXX,XX @@ int kvm_arm_vcpu_init(CPUState *cs)
     return kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
 }
 
+void kvm_arm_init_serror_injection(CPUState *cs)
+{
+    cap_has_inject_serror_esr = kvm_check_extension(cs->kvm_state,
+                                    KVM_CAP_ARM_INJECT_SERROR_ESR);
+}
+
 bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
                                       int *fdarray,
                                       struct kvm_vcpu_init *init)
@@ -XXX,XX +XXX,XX @@ int kvm_arm_sync_mpstate_to_qemu(ARMCPU *cpu)
     return 0;
 }
 
+int kvm_put_vcpu_events(ARMCPU *cpu)
+{
+    CPUARMState *env = &cpu->env;
+    struct kvm_vcpu_events events;
+    int ret;
+
+    if (!kvm_has_vcpu_events()) {
+        return 0;
+    }
+
+    memset(&events, 0, sizeof(events));
+    events.exception.serror_pending = env->serror.pending;
+
+    /* Inject SError to guest with specified syndrome if host kernel
+     * supports it, otherwise inject SError without syndrome.
+     */
+    if (cap_has_inject_serror_esr) {
+        events.exception.serror_has_esr = env->serror.has_esr;
+        events.exception.serror_esr = env->serror.esr;
+    }
+
+    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_SET_VCPU_EVENTS, &events);
+    if (ret) {
+        error_report("failed to put vcpu events");
+    }
+
+    return ret;
+}
+
+int kvm_get_vcpu_events(ARMCPU *cpu)
+{
+    CPUARMState *env = &cpu->env;
+    struct kvm_vcpu_events events;
+    int ret;
+
+    if (!kvm_has_vcpu_events()) {
+        return 0;
+    }
+
+    memset(&events, 0, sizeof(events));
+    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_VCPU_EVENTS, &events);
+    if (ret) {
+        error_report("failed to get vcpu events");
+        return ret;
+    }
+
+    env->serror.pending = events.exception.serror_pending;
+    env->serror.has_esr = events.exception.serror_has_esr;
+    env->serror.esr = events.exception.serror_esr;
+
+    return 0;
+}
+
 void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
 {
 }
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm32.c
+++ b/target/arm/kvm32.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
     }
     cpu->mp_affinity = mpidr & ARM32_AFFINITY_MASK;
 
+    /* Check whether userspace can specify guest syndrome value */
+    kvm_arm_init_serror_injection(cs);
+
     return kvm_arm_init_cpreg_list(cpu);
 }
 
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
         return ret;
     }
 
+    ret = kvm_put_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     /* Note that we do not call write_cpustate_to_list()
      * here, so we are only writing the tuple list back to
      * KVM. This is safe because nothing can change the
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
     }
     vfp_set_fpscr(env, fpscr);
 
+    ret = kvm_get_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_kvmstate_to_list(cpu)) {
         return EINVAL;
     }
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
 
     kvm_arm_init_debug(cs);
 
+    /* Check whether user space can specify guest syndrome value */
+    kvm_arm_init_serror_injection(cs);
+
     return kvm_arm_init_cpreg_list(cpu);
 }
 
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
         return ret;
     }
 
+    ret = kvm_put_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_list_to_kvmstate(cpu, level)) {
         return EINVAL;
     }
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
     }
     vfp_set_fpcr(env, fpr);
 
+    ret = kvm_get_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_kvmstate_to_list(cpu)) {
         return EINVAL;
     }
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_sve = {
 };
 #endif /* AARCH64 */
 
+static bool serror_needed(void *opaque)
+{
+    ARMCPU *cpu = opaque;
+    CPUARMState *env = &cpu->env;
+
+    return env->serror.pending != 0;
+}
+
+static const VMStateDescription vmstate_serror = {
+    .name = "cpu/serror",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = serror_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT8(env.serror.pending, ARMCPU),
+        VMSTATE_UINT8(env.serror.has_esr, ARMCPU),
+        VMSTATE_UINT64(env.serror.esr, ARMCPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 static bool m_needed(void *opaque)
 {
     ARMCPU *cpu = opaque;
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
 #ifdef TARGET_AARCH64
         &vmstate_sve,
 #endif
+        &vmstate_serror,
         NULL
     }
 };
--
2.19.1
From: Richard Henderson <richard.henderson@linaro.org>

This controls whether the PACI{A,B}SP instructions trap with BTYPE=3
(indirect branch from register other than x16/x17). The linux kernel
sets this in bti_enable().

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/998
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220427042312.294300-1-richard.henderson@linaro.org
[PMM: remove stray change to makefile comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c                  |  2 ++
 tests/tcg/aarch64/bti-3.c         | 42 +++++++++++++++++++++++++++++++
 tests/tcg/aarch64/Makefile.target |  6 ++---
 3 files changed, 47 insertions(+), 3 deletions(-)
 create mode 100644 tests/tcg/aarch64/bti-3.c

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
         /* Enable all PAC keys. */
         env->cp15.sctlr_el[1] |= (SCTLR_EnIA | SCTLR_EnIB |
                                   SCTLR_EnDA | SCTLR_EnDB);
+        /* Trap on btype=3 for PACIxSP. */
+        env->cp15.sctlr_el[1] |= SCTLR_BT0;
         /* and to the FP/Neon instructions */
         env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 20, 2, 3);
         /* and to the SVE instructions */
diff --git a/tests/tcg/aarch64/bti-3.c b/tests/tcg/aarch64/bti-3.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/bti-3.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * BTI vs PACIASP
+ */
+
+#include "bti-crt.inc.c"
+
+static void skip2_sigill(int sig, siginfo_t *info, ucontext_t *uc)
+{
+    uc->uc_mcontext.pc += 8;
+    uc->uc_mcontext.pstate = 1;
+}
+
+#define BTYPE_1() \
+    asm("mov %0,#1; adr x16, 1f; br x16; 1: hint #25; mov %0,#0" \
+        : "=r"(skipped) : : "x16", "x30")
+
+#define BTYPE_2() \
+    asm("mov %0,#1; adr x16, 1f; blr x16; 1: hint #25; mov %0,#0" \
+        : "=r"(skipped) : : "x16", "x30")
+
+#define BTYPE_3() \
+    asm("mov %0,#1; adr x15, 1f; br x15; 1: hint #25; mov %0,#0" \
+        : "=r"(skipped) : : "x15", "x30")
+
+#define TEST(WHICH, EXPECT) \
+    do { WHICH(); fail += skipped ^ EXPECT; } while (0)
+
+int main()
+{
+    int fail = 0;
+    int skipped;
+
+    /* Signal-like with SA_SIGINFO. */
+    signal_info(SIGILL, skip2_sigill);
+
+    /* With SCTLR_EL1.BT0 set, PACIASP is not compatible with type=3. */
+    TEST(BTYPE_1, 0);
+    TEST(BTYPE_2, 0);
+    TEST(BTYPE_3, 1);
+
+    return fail;
+}
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
index XXXXXXX..XXXXXXX 100644
--- a/tests/tcg/aarch64/Makefile.target
+++ b/tests/tcg/aarch64/Makefile.target
@@ -XXX,XX +XXX,XX @@ endif
 # BTI Tests
 # bti-1 tests the elf notes, so we require special compiler support.
 ifneq ($(CROSS_CC_HAS_ARMV8_BTI),)
-AARCH64_TESTS += bti-1
-bti-1: CFLAGS += -mbranch-protection=standard
-bti-1: LDFLAGS += -nostdlib
+AARCH64_TESTS += bti-1 bti-3
+bti-1 bti-3: CFLAGS += -mbranch-protection=standard
+bti-1 bti-3: LDFLAGS += -nostdlib
 endif
 # bti-2 tests PROT_BTI, so no special compiler support required.
 AARCH64_TESTS += bti-2
--
2.25.1
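One clarifying aside on bti-3.c above: the BTYPE=3 trap is a property of
PACIASP as the landing pad, not of the branch itself. A hypothetical fourth
test case (a sketch only, reusing the bti-crt.inc.c harness and the
`skipped` convention from bti-3.c; `hint #36` encodes BTI `j`) would land
the same BR-from-x15 on a BTI `j` instruction, which accepts BTYPE=3, and
so would be expected not to trap even with SCTLR_EL1.BT0 set:

    /* Hypothetical companion to BTYPE_3 in bti-3.c: the same BR from x15
     * (BTYPE=3), but landing on BTI `j` (hint #36) rather than PACIASP.
     * BTI `j` accepts BTYPE=3, so no SIGILL is expected here, i.e. the
     * analogue of TEST(BTYPE_3_BTI_J, 0) should pass.
     */
    #define BTYPE_3_BTI_J() \
        asm("mov %0,#1; adr x15, 1f; br x15; 1: hint #36; mov %0,#0" \
            : "=r"(skipped) : : "x15", "x30")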
From: Richard Henderson <richard.henderson@linaro.org>

Move ARMCPRegInfo and all related declarations to a new
internal header, out of the public cpu.h.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h        | 413 +++++++++++++++++++++++++++++++++++++
 target/arm/cpu.h           | 368 ---------------------------------
 hw/arm/pxa2xx.c            |   1 +
 hw/arm/pxa2xx_pic.c        |   1 +
 hw/intc/arm_gicv3_cpuif.c  |   1 +
 hw/intc/arm_gicv3_kvm.c    |   2 +
 target/arm/cpu.c           |   1 +
 target/arm/cpu64.c         |   1 +
 target/arm/cpu_tcg.c       |   1 +
 target/arm/gdbstub.c       |   3 +-
 target/arm/helper.c        |   1 +
 target/arm/op_helper.c     |   1 +
 target/arm/translate-a64.c |   4 +-
 target/arm/translate.c     |   3 +-
 14 files changed, 427 insertions(+), 374 deletions(-)
 create mode 100644 target/arm/cpregs.h

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * QEMU ARM CP Register access and descriptions
+ *
+ * Copyright (c) 2022 Linaro Ltd
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see
+ * <http://www.gnu.org/licenses/gpl-2.0.html>
+ */
+
+#ifndef TARGET_ARM_CPREGS_H
+#define TARGET_ARM_CPREGS_H
+
+/*
+ * ARMCPRegInfo type field bits. If the SPECIAL bit is set this is a
+ * special-behaviour cp reg and bits [11..8] indicate what behaviour
+ * it has. Otherwise it is a simple cp reg, where CONST indicates that
+ * TCG can assume the value to be constant (ie load at translate time)
+ * and 64BIT indicates a 64 bit wide coprocessor register. SUPPRESS_TB_END
+ * indicates that the TB should not be ended after a write to this register
+ * (the default is that the TB ends after cp writes). OVERRIDE permits
+ * a register definition to override a previous definition for the
+ * same (cp, is64, crn, crm, opc1, opc2) tuple: either the new or the
+ * old must have the OVERRIDE bit set.
+ * ALIAS indicates that this register is an alias view of some underlying
+ * state which is also visible via another register, and that the other
+ * register is handling migration and reset; registers marked ALIAS will not be
+ * migrated but may have their state set by syncing of register state from KVM.
+ * NO_RAW indicates that this register has no underlying state and does not
+ * support raw access for state saving/loading; it will not be used for either
+ * migration or KVM state synchronization. (Typically this is for "registers"
+ * which are actually used as instructions for cache maintenance and so on.)
+ * IO indicates that this register does I/O and therefore its accesses
+ * need to be marked with gen_io_start() and also end the TB. In particular,
+ * registers which implement clocks or timers require this.
+ * RAISES_EXC is for when the read or write hook might raise an exception;
+ * the generated code will synchronize the CPU state before calling the hook
+ * so that it is safe for the hook to call raise_exception().
+ * NEWEL is for writes to registers that might change the exception
+ * level - typically on older ARM chips. For those cases we need to
+ * re-read the new el when recomputing the translation flags.
+ */
+#define ARM_CP_SPECIAL 0x0001
+#define ARM_CP_CONST 0x0002
+#define ARM_CP_64BIT 0x0004
+#define ARM_CP_SUPPRESS_TB_END 0x0008
+#define ARM_CP_OVERRIDE 0x0010
+#define ARM_CP_ALIAS 0x0020
+#define ARM_CP_IO 0x0040
+#define ARM_CP_NO_RAW 0x0080
+#define ARM_CP_NOP (ARM_CP_SPECIAL | 0x0100)
+#define ARM_CP_WFI (ARM_CP_SPECIAL | 0x0200)
+#define ARM_CP_NZCV (ARM_CP_SPECIAL | 0x0300)
+#define ARM_CP_CURRENTEL (ARM_CP_SPECIAL | 0x0400)
+#define ARM_CP_DC_ZVA (ARM_CP_SPECIAL | 0x0500)
+#define ARM_CP_DC_GVA (ARM_CP_SPECIAL | 0x0600)
+#define ARM_CP_DC_GZVA (ARM_CP_SPECIAL | 0x0700)
+#define ARM_LAST_SPECIAL ARM_CP_DC_GZVA
+#define ARM_CP_FPU 0x1000
+#define ARM_CP_SVE 0x2000
+#define ARM_CP_NO_GDB 0x4000
+#define ARM_CP_RAISES_EXC 0x8000
+#define ARM_CP_NEWEL 0x10000
+/* Used only as a terminator for ARMCPRegInfo lists */
+#define ARM_CP_SENTINEL 0xfffff
+/* Mask of only the flag bits in a type field */
+#define ARM_CP_FLAG_MASK 0x1f0ff
+
+/*
+ * Valid values for ARMCPRegInfo state field, indicating which of
+ * the AArch32 and AArch64 execution states this register is visible in.
+ * If the reginfo doesn't explicitly specify then it is AArch32 only.
+ * If the reginfo is declared to be visible in both states then a second
+ * reginfo is synthesised for the AArch32 view of the AArch64 register,
+ * such that the AArch32 view is the lower 32 bits of the AArch64 one.
+ * Note that we rely on the values of these enums as we iterate through
+ * the various states in some places.
+ */
+enum {
+    ARM_CP_STATE_AA32 = 0,
+    ARM_CP_STATE_AA64 = 1,
+    ARM_CP_STATE_BOTH = 2,
+};
+
+/*
+ * ARM CP register secure state flags. These flags identify security state
+ * attributes for a given CP register entry.
+ * The existence of both or neither secure and non-secure flags indicates that
+ * the register has both a secure and non-secure hash entry. A single one of
+ * these flags causes the register to only be hashed for the specified
+ * security state.
+ * Although definitions may have any combination of the S/NS bits, each
+ * registered entry will only have one to identify whether the entry is secure
+ * or non-secure.
+ */
+enum {
+    ARM_CP_SECSTATE_S = (1 << 0), /* bit[0]: Secure state register */
+    ARM_CP_SECSTATE_NS = (1 << 1), /* bit[1]: Non-secure state register */
+};
+
+/*
+ * Return true if cptype is a valid type field. This is used to try to
+ * catch errors where the sentinel has been accidentally left off the end
+ * of a list of registers.
+ */
+static inline bool cptype_valid(int cptype)
+{
+    return ((cptype & ~ARM_CP_FLAG_MASK) == 0)
+        || ((cptype & ARM_CP_SPECIAL) &&
+            ((cptype & ~ARM_CP_FLAG_MASK) <= ARM_LAST_SPECIAL));
+}
+
+/*
+ * Access rights:
+ * We define bits for Read and Write access for what rev C of the v7-AR ARM ARM
+ * defines as PL0 (user), PL1 (fiq/irq/svc/abt/und/sys, ie privileged), and
+ * PL2 (hyp). The other level which has Read and Write bits is Secure PL1
+ * (ie any of the privileged modes in Secure state, or Monitor mode).
+ * If a register is accessible in one privilege level it's always accessible
+ * in higher privilege levels too. Since "Secure PL1" also follows this rule
+ * (ie anything visible in PL2 is visible in S-PL1, some things are only
+ * visible in S-PL1) but "Secure PL1" is a bit of a mouthful, we bend the
+ * terminology a little and call this PL3.
+ * In AArch64 things are somewhat simpler as the PLx bits line up exactly
+ * with the ELx exception levels.
+ *
+ * If access permissions for a register are more complex than can be
+ * described with these bits, then use a laxer set of restrictions, and
+ * do the more restrictive/complex check inside a helper function.
+ */
+#define PL3_R 0x80
+#define PL3_W 0x40
+#define PL2_R (0x20 | PL3_R)
+#define PL2_W (0x10 | PL3_W)
+#define PL1_R (0x08 | PL2_R)
+#define PL1_W (0x04 | PL2_W)
+#define PL0_R (0x02 | PL1_R)
+#define PL0_W (0x01 | PL1_W)
+
+/*
+ * For user-mode some registers are accessible to EL0 via a kernel
+ * trap-and-emulate ABI. In this case we define the read permissions
+ * as actually being PL0_R. However some bits of any given register
+ * may still be masked.
+ */
+#ifdef CONFIG_USER_ONLY
+#define PL0U_R PL0_R
+#else
+#define PL0U_R PL1_R
+#endif
+
+#define PL3_RW (PL3_R | PL3_W)
+#define PL2_RW (PL2_R | PL2_W)
+#define PL1_RW (PL1_R | PL1_W)
+#define PL0_RW (PL0_R | PL0_W)
+
+typedef enum CPAccessResult {
+    /* Access is permitted */
+    CP_ACCESS_OK = 0,
+    /*
+     * Access fails due to a configurable trap or enable which would
+     * result in a categorized exception syndrome giving information about
+     * the failing instruction (ie syndrome category 0x3, 0x4, 0x5, 0x6,
+     * 0xc or 0x18). The exception is taken to the usual target EL (EL1 or
+     * PL1 if in EL0, otherwise to the current EL).
+     */
+    CP_ACCESS_TRAP = 1,
+    /*
+     * Access fails and results in an exception syndrome 0x0 ("uncategorized").
+     * Note that this is not a catch-all case -- the set of cases which may
+     * result in this failure is specifically defined by the architecture.
+     */
+    CP_ACCESS_TRAP_UNCATEGORIZED = 2,
+    /* As CP_ACCESS_TRAP, but for traps directly to EL2 or EL3 */
+    CP_ACCESS_TRAP_EL2 = 3,
+    CP_ACCESS_TRAP_EL3 = 4,
+    /* As CP_ACCESS_UNCATEGORIZED, but for traps directly to EL2 or EL3 */
+    CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = 5,
+    CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = 6,
+} CPAccessResult;
+
+typedef struct ARMCPRegInfo ARMCPRegInfo;
+
+/*
+ * Access functions for coprocessor registers. These cannot fail and
+ * may not raise exceptions.
+ */
+typedef uint64_t CPReadFn(CPUARMState *env, const ARMCPRegInfo *opaque);
+typedef void CPWriteFn(CPUARMState *env, const ARMCPRegInfo *opaque,
+                       uint64_t value);
+/* Access permission check functions for coprocessor registers. */
+typedef CPAccessResult CPAccessFn(CPUARMState *env,
+                                  const ARMCPRegInfo *opaque,
+                                  bool isread);
+/* Hook function for register reset */
+typedef void CPResetFn(CPUARMState *env, const ARMCPRegInfo *opaque);
+
+#define CP_ANY 0xff
+
+/* Definition of an ARM coprocessor register */
+struct ARMCPRegInfo {
+    /* Name of register (useful mainly for debugging, need not be unique) */
+    const char *name;
+    /*
+     * Location of register: coprocessor number and (crn,crm,opc1,opc2)
+     * tuple. Any of crm, opc1 and opc2 may be CP_ANY to indicate a
+     * 'wildcard' field -- any value of that field in the MRC/MCR insn
+     * will be decoded to this register. The register read and write
+     * callbacks will be passed an ARMCPRegInfo with the crn/crm/opc1/opc2
+     * used by the program, so it is possible to register a wildcard and
+     * then behave differently on read/write if necessary.
+     * For 64 bit registers, only crm and opc1 are relevant; crn and opc2
+     * must both be zero.
+     * For AArch64-visible registers, opc0 is also used.
+     * Since there are no "coprocessors" in AArch64, cp is purely used as a
+     * way to distinguish (for KVM's benefit) guest-visible system registers
+     * from demuxed ones provided to preserve the "no side effects on
+     * KVM register read/write from QEMU" semantics. cp==0x13 is guest
+     * visible (to match KVM's encoding); cp==0 will be converted to
+     * cp==0x13 when the ARMCPRegInfo is registered, for convenience.
+     */
+    uint8_t cp;
+    uint8_t crn;
+    uint8_t crm;
+    uint8_t opc0;
+    uint8_t opc1;
+    uint8_t opc2;
+    /* Execution state in which this register is visible: ARM_CP_STATE_* */
+    int state;
+    /* Register type: ARM_CP_* bits/values */
+    int type;
+    /* Access rights: PL*_[RW] */
+    int access;
+    /* Security state: ARM_CP_SECSTATE_* bits/values */
+    int secure;
+    /*
+     * The opaque pointer passed to define_arm_cp_regs_with_opaque() when
+     * this register was defined: can be used to hand data through to the
+     * register read/write functions, since they are passed the ARMCPRegInfo*.
+     */
+    void *opaque;
+    /*
+     * Value of this register, if it is ARM_CP_CONST. Otherwise, if
+     * fieldoffset is non-zero, the reset value of the register.
+     */
+    uint64_t resetvalue;
+    /*
+     * Offset of the field in CPUARMState for this register.
+     * This is not needed if either:
+     *  1. type is ARM_CP_CONST or one of the ARM_CP_SPECIALs
+     *  2. both readfn and writefn are specified
+     */
+    ptrdiff_t fieldoffset; /* offsetof(CPUARMState, field) */
+
+    /*
+     * Offsets of the secure and non-secure fields in CPUARMState for the
+     * register if it is banked. These fields are only used during the static
+     * registration of a register. During hashing the bank associated
+     * with a given security state is copied to fieldoffset which is used from
+     * there on out.
+     *
+     * It is expected that register definitions use either fieldoffset or
+     * bank_fieldoffsets in the definition but not both. It is also expected
+     * that both bank offsets are set when defining a banked register. This
+     * use indicates that a register is banked.
+     */
+    ptrdiff_t bank_fieldoffsets[2];
+
+    /*
+     * Function for making any access checks for this register in addition to
+     * those specified by the 'access' permissions bits. If NULL, no extra
+     * checks required. The access check is performed at runtime, not at
+     * translate time.
+     */
+    CPAccessFn *accessfn;
+    /*
+     * Function for handling reads of this register. If NULL, then reads
+     * will be done by loading from the offset into CPUARMState specified
+     * by fieldoffset.
+     */
+    CPReadFn *readfn;
+    /*
+     * Function for handling writes of this register. If NULL, then writes
+     * will be done by writing to the offset into CPUARMState specified
+     * by fieldoffset.
+     */
+    CPWriteFn *writefn;
+    /*
+     * Function for doing a "raw" read; used when we need to copy
+     * coprocessor state to the kernel for KVM or out for
+     * migration. This only needs to be provided if there is also a
+     * readfn and it has side effects (for instance clear-on-read bits).
+     */
+    CPReadFn *raw_readfn;
+    /*
+     * Function for doing a "raw" write; used when we need to copy KVM
+     * kernel coprocessor state into userspace, or for inbound
+     * migration. This only needs to be provided if there is also a
+     * writefn and it masks out "unwritable" bits or has write-one-to-clear
+     * or similar behaviour.
+     */
+    CPWriteFn *raw_writefn;
+    /*
+     * Function for resetting the register. If NULL, then reset will be done
+     * by writing resetvalue to the field specified in fieldoffset. If
+     * fieldoffset is 0 then no reset will be done.
+     */
+    CPResetFn *resetfn;
+
+    /*
+     * "Original" writefn and readfn.
+     * For ARMv8.1-VHE register aliases, we overwrite the read/write
+     * accessor functions of various EL1/EL0 to perform the runtime
+     * check for which sysreg should actually be modified, and then
+     * forwards the operation. Before overwriting the accessors,
+     * the original function is copied here, so that accesses that
+     * really do go to the EL1/EL0 version proceed normally.
+     * (The corresponding EL2 register is linked via opaque.)
+     */
+    CPReadFn *orig_readfn;
+    CPWriteFn *orig_writefn;
+};
+
+/*
+ * Macros which are lvalues for the field in CPUARMState for the
+ * ARMCPRegInfo *ri.
+ */
+#define CPREG_FIELD32(env, ri) \
+    (*(uint32_t *)((char *)(env) + (ri)->fieldoffset))
+#define CPREG_FIELD64(env, ri) \
+    (*(uint64_t *)((char *)(env) + (ri)->fieldoffset))
+
+#define REGINFO_SENTINEL { .type = ARM_CP_SENTINEL }
+
+void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
+                                    const ARMCPRegInfo *regs, void *opaque);
+void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
+                                       const ARMCPRegInfo *regs, void *opaque);
+static inline void define_arm_cp_regs(ARMCPU *cpu, const ARMCPRegInfo *regs)
+{
+    define_arm_cp_regs_with_opaque(cpu, regs, 0);
+}
+static inline void define_one_arm_cp_reg(ARMCPU *cpu, const ARMCPRegInfo *regs)
+{
+    define_one_arm_cp_reg_with_opaque(cpu, regs, 0);
+}
+const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp);
+
+/*
+ * Definition of an ARM co-processor register as viewed from
+ * userspace. This is used for presenting sanitised versions of
+ * registers to userspace when emulating the Linux AArch64 CPU
+ * ID/feature ABI (advertised as HWCAP_CPUID).
+ */
+typedef struct ARMCPRegUserSpaceInfo {
+    /* Name of register */
+    const char *name;
+
+    /* Is the name actually a glob pattern */
+    bool is_glob;
+
+    /* Only some bits are exported to user space */
+    uint64_t exported_bits;
+
+    /* Fixed bits are applied after the mask */
+    uint64_t fixed_bits;
+} ARMCPRegUserSpaceInfo;
+
+#define REGUSERINFO_SENTINEL { .name = NULL }
+
+void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods);
+
+/* CPWriteFn that can be used to implement writes-ignored behaviour */
+void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri,
+                         uint64_t value);
+/* CPReadFn that can be used for read-as-zero behaviour */
+uint64_t arm_cp_read_zero(CPUARMState *env, const ARMCPRegInfo *ri);
+
+/*
+ * CPResetFn that does nothing, for use if no reset is required even
+ * if fieldoffset is non zero.
+ */
+void arm_cp_reset_ignore(CPUARMState *env, const ARMCPRegInfo *opaque);
+
+/*
+ * Return true if this reginfo struct's field in the cpu state struct
+ * is 64 bits wide.
+ */
+static inline bool cpreg_field_is_64bit(const ARMCPRegInfo *ri)
+{
+    return (ri->state == ARM_CP_STATE_AA64) || (ri->type & ARM_CP_64BIT);
+}
+
+static inline bool cp_access_ok(int current_el,
+                                const ARMCPRegInfo *ri, int isread)
+{
+    return (ri->access >> ((current_el * 2) + isread)) & 1;
+}
+
+/* Raw read of a coprocessor register (as needed for migration, etc) */
+uint64_t read_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri);
+
+#endif /* TARGET_ARM_CPREGS_H */
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
     return kvmid;
 }
 
-/* ARMCPRegInfo type field bits. If the SPECIAL bit is set this is a
- * special-behaviour cp reg and bits [11..8] indicate what behaviour
- * it has. Otherwise it is a simple cp reg, where CONST indicates that
- * TCG can assume the value to be constant (ie load at translate time)
- * and 64BIT indicates a 64 bit wide coprocessor register. SUPPRESS_TB_END
- * indicates that the TB should not be ended after a write to this register
- * (the default is that the TB ends after cp writes). OVERRIDE permits
- * a register definition to override a previous definition for the
- * same (cp, is64, crn, crm, opc1, opc2) tuple: either the new or the
- * old must have the OVERRIDE bit set.
- * ALIAS indicates that this register is an alias view of some underlying
- * state which is also visible via another register, and that the other
- * register is handling migration and reset; registers marked ALIAS will not be
- * migrated but may have their state set by syncing of register state from KVM.
- * NO_RAW indicates that this register has no underlying state and does not
- * support raw access for state saving/loading; it will not be used for either
- * migration or KVM state synchronization. (Typically this is for "registers"
- * which are actually used as instructions for cache maintenance and so on.)
- * IO indicates that this register does I/O and therefore its accesses
- * need to be marked with gen_io_start() and also end the TB. In particular,
- * registers which implement clocks or timers require this.
- * RAISES_EXC is for when the read or write hook might raise an exception;
- * the generated code will synchronize the CPU state before calling the hook
- * so that it is safe for the hook to call raise_exception().
- * NEWEL is for writes to registers that might change the exception
- * level - typically on older ARM chips. For those cases we need to
- * re-read the new el when recomputing the translation flags.
- */
-#define ARM_CP_SPECIAL 0x0001
-#define ARM_CP_CONST 0x0002
-#define ARM_CP_64BIT 0x0004
-#define ARM_CP_SUPPRESS_TB_END 0x0008
-#define ARM_CP_OVERRIDE 0x0010
-#define ARM_CP_ALIAS 0x0020
-#define ARM_CP_IO 0x0040
-#define ARM_CP_NO_RAW 0x0080
-#define ARM_CP_NOP (ARM_CP_SPECIAL | 0x0100)
-#define ARM_CP_WFI (ARM_CP_SPECIAL | 0x0200)
-#define ARM_CP_NZCV (ARM_CP_SPECIAL | 0x0300)
-#define ARM_CP_CURRENTEL (ARM_CP_SPECIAL | 0x0400)
-#define ARM_CP_DC_ZVA (ARM_CP_SPECIAL | 0x0500)
-#define ARM_CP_DC_GVA (ARM_CP_SPECIAL | 0x0600)
-#define ARM_CP_DC_GZVA (ARM_CP_SPECIAL | 0x0700)
-#define ARM_LAST_SPECIAL ARM_CP_DC_GZVA
-#define ARM_CP_FPU 0x1000
-#define ARM_CP_SVE 0x2000
-#define ARM_CP_NO_GDB 0x4000
-#define ARM_CP_RAISES_EXC 0x8000
-#define ARM_CP_NEWEL 0x10000
-/* Used only as a terminator for ARMCPRegInfo lists */
-#define ARM_CP_SENTINEL 0xfffff
-/* Mask of only the flag bits in a type field */
-#define ARM_CP_FLAG_MASK 0x1f0ff
-
-/* Valid values for ARMCPRegInfo state field, indicating which of
- * the AArch32 and AArch64 execution states this register is visible in.
- * If the reginfo doesn't explicitly specify then it is AArch32 only.
- * If the reginfo is declared to be visible in both states then a second
- * reginfo is synthesised for the AArch32 view of the AArch64 register,
- * such that the AArch32 view is the lower 32 bits of the AArch64 one.
- * Note that we rely on the values of these enums as we iterate through
- * the various states in some places.
- */
-enum {
-    ARM_CP_STATE_AA32 = 0,
-    ARM_CP_STATE_AA64 = 1,
-    ARM_CP_STATE_BOTH = 2,
-};
-
-/* ARM CP register secure state flags. These flags identify security state
- * attributes for a given CP register entry.
- * The existence of both or neither secure and non-secure flags indicates that
- * the register has both a secure and non-secure hash entry. A single one of
- * these flags causes the register to only be hashed for the specified
- * security state.
- * Although definitions may have any combination of the S/NS bits, each
- * registered entry will only have one to identify whether the entry is secure
- * or non-secure.
- */
-enum {
-    ARM_CP_SECSTATE_S = (1 << 0), /* bit[0]: Secure state register */
-    ARM_CP_SECSTATE_NS = (1 << 1), /* bit[1]: Non-secure state register */
-};
-
-/* Return true if cptype is a valid type field. This is used to try to
- * catch errors where the sentinel has been accidentally left off the end
- * of a list of registers.
- */
-static inline bool cptype_valid(int cptype)
-{
-    return ((cptype & ~ARM_CP_FLAG_MASK) == 0)
-        || ((cptype & ARM_CP_SPECIAL) &&
-            ((cptype & ~ARM_CP_FLAG_MASK) <= ARM_LAST_SPECIAL));
-}
-
-/* Access rights:
- * We define bits for Read and Write access for what rev C of the v7-AR ARM ARM
- * defines as PL0 (user), PL1 (fiq/irq/svc/abt/und/sys, ie privileged), and
- * PL2 (hyp). The other level which has Read and Write bits is Secure PL1
- * (ie any of the privileged modes in Secure state, or Monitor mode).
- * If a register is accessible in one privilege level it's always accessible
- * in higher privilege levels too. Since "Secure PL1" also follows this rule
- * (ie anything visible in PL2 is visible in S-PL1, some things are only
- * visible in S-PL1) but "Secure PL1" is a bit of a mouthful, we bend the
- * terminology a little and call this PL3.
- * In AArch64 things are somewhat simpler as the PLx bits line up exactly
- * with the ELx exception levels.
- *
- * If access permissions for a register are more complex than can be
- * described with these bits, then use a laxer set of restrictions, and
- * do the more restrictive/complex check inside a helper function.
- */
-#define PL3_R 0x80
-#define PL3_W 0x40
-#define PL2_R (0x20 | PL3_R)
-#define PL2_W (0x10 | PL3_W)
-#define PL1_R (0x08 | PL2_R)
-#define PL1_W (0x04 | PL2_W)
-#define PL0_R (0x02 | PL1_R)
-#define PL0_W (0x01 | PL1_W)
-
-/*
- * For user-mode some registers are accessible to EL0 via a kernel
- * trap-and-emulate ABI. In this case we define the read permissions
- * as actually being PL0_R. However some bits of any given register
- * may still be masked.
- */
-#ifdef CONFIG_USER_ONLY
-#define PL0U_R PL0_R
-#else
-#define PL0U_R PL1_R
-#endif
-
-#define PL3_RW (PL3_R | PL3_W)
-#define PL2_RW (PL2_R | PL2_W)
-#define PL1_RW (PL1_R | PL1_W)
-#define PL0_RW (PL0_R | PL0_W)
-
 /* Return the highest implemented Exception Level */
 static inline int arm_highest_el(CPUARMState *env)
 {
@@ -XXX,XX +XXX,XX @@ static inline int arm_current_el(CPUARMState *env)
     }
 }
 
-typedef struct ARMCPRegInfo ARMCPRegInfo;
-
-typedef enum CPAccessResult {
-    /* Access is permitted */
-    CP_ACCESS_OK = 0,
-    /* Access fails due to a configurable trap or enable which would
-     * result in a categorized exception syndrome giving information about
-     * the failing instruction (ie syndrome category 0x3, 0x4, 0x5, 0x6,
-     * 0xc or 0x18). The exception is taken to the usual target EL (EL1 or
-     * PL1 if in EL0, otherwise to the current EL).
-     */
-    CP_ACCESS_TRAP = 1,
-    /* Access fails and results in an exception syndrome 0x0 ("uncategorized").
-     * Note that this is not a catch-all case -- the set of cases which may
-     * result in this failure is specifically defined by the architecture.
-     */
-    CP_ACCESS_TRAP_UNCATEGORIZED = 2,
-    /* As CP_ACCESS_TRAP, but for traps directly to EL2 or EL3 */
-    CP_ACCESS_TRAP_EL2 = 3,
-    CP_ACCESS_TRAP_EL3 = 4,
-    /* As CP_ACCESS_UNCATEGORIZED, but for traps directly to EL2 or EL3 */
-    CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = 5,
-    CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = 6,
-} CPAccessResult;
-
-/* Access functions for coprocessor registers. These cannot fail and
- * may not raise exceptions.
- */
-typedef uint64_t CPReadFn(CPUARMState *env, const ARMCPRegInfo *opaque);
-typedef void CPWriteFn(CPUARMState *env, const ARMCPRegInfo *opaque,
-                       uint64_t value);
-/* Access permission check functions for coprocessor registers. */
-typedef CPAccessResult CPAccessFn(CPUARMState *env,
-                                  const ARMCPRegInfo *opaque,
-                                  bool isread);
-/* Hook function for register reset */
-typedef void CPResetFn(CPUARMState *env, const ARMCPRegInfo *opaque);
-
-#define CP_ANY 0xff
-
-/* Definition of an ARM coprocessor register */
-struct ARMCPRegInfo {
-    /* Name of register (useful mainly for debugging, need not be unique) */
-    const char *name;
-    /* Location of register: coprocessor number and (crn,crm,opc1,opc2)
-     * tuple. Any of crm, opc1 and opc2 may be CP_ANY to indicate a
-     * 'wildcard' field -- any value of that field in the MRC/MCR insn
-     * will be decoded to this register. The register read and write
-     * callbacks will be passed an ARMCPRegInfo with the crn/crm/opc1/opc2
-     * used by the program, so it is possible to register a wildcard and
-     * then behave differently on read/write if necessary.
-     * For 64 bit registers, only crm and opc1 are relevant; crn and opc2
-     * must both be zero.
-     * For AArch64-visible registers, opc0 is also used.
-     * Since there are no "coprocessors" in AArch64, cp is purely used as a
-     * way to distinguish (for KVM's benefit) guest-visible system registers
-     * from demuxed ones provided to preserve the "no side effects on
-     * KVM register read/write from QEMU" semantics. cp==0x13 is guest
-     * visible (to match KVM's encoding); cp==0 will be converted to
-     * cp==0x13 when the ARMCPRegInfo is registered, for convenience.
-     */
-    uint8_t cp;
-    uint8_t crn;
-    uint8_t crm;
-    uint8_t opc0;
-    uint8_t opc1;
-    uint8_t opc2;
-    /* Execution state in which this register is visible: ARM_CP_STATE_* */
-    int state;
-    /* Register type: ARM_CP_* bits/values */
-    int type;
-    /* Access rights: PL*_[RW] */
-    int access;
-    /* Security state: ARM_CP_SECSTATE_* bits/values */
-    int secure;
-    /* The opaque pointer passed to define_arm_cp_regs_with_opaque() when
-     * this register was defined: can be used to hand data through to the
-     * register read/write functions, since they are passed the ARMCPRegInfo*.
-     */
-    void *opaque;
-    /* Value of this register, if it is ARM_CP_CONST. Otherwise, if
-     * fieldoffset is non-zero, the reset value of the register.
-     */
-    uint64_t resetvalue;
-    /* Offset of the field in CPUARMState for this register.
-     *
-     * This is not needed if either:
-     *  1. type is ARM_CP_CONST or one of the ARM_CP_SPECIALs
-     *  2. both readfn and writefn are specified
-     */
-    ptrdiff_t fieldoffset; /* offsetof(CPUARMState, field) */
-
-    /* Offsets of the secure and non-secure fields in CPUARMState for the
-     * register if it is banked. These fields are only used during the static
-     * registration of a register. During hashing the bank associated
-     * with a given security state is copied to fieldoffset which is used from
-     * there on out.
-     *
-     * It is expected that register definitions use either fieldoffset or
-     * bank_fieldoffsets in the definition but not both. It is also expected
-     * that both bank offsets are set when defining a banked register. This
-     * use indicates that a register is banked.
-     */
-    ptrdiff_t bank_fieldoffsets[2];
-
-    /* Function for making any access checks for this register in addition to
-     * those specified by the 'access' permissions bits. If NULL, no extra
-     * checks required. The access check is performed at runtime, not at
-     * translate time.
-     */
-    CPAccessFn *accessfn;
-    /* Function for handling reads of this register. If NULL, then reads
-     * will be done by loading from the offset into CPUARMState specified
-     * by fieldoffset.
-     */
-    CPReadFn *readfn;
-    /* Function for handling writes of this register. If NULL, then writes
-     * will be done by writing to the offset into CPUARMState specified
-     * by fieldoffset.
-     */
-    CPWriteFn *writefn;
-    /* Function for doing a "raw" read; used when we need to copy
-     * coprocessor state to the kernel for KVM or out for
-     * migration. This only needs to be provided if there is also a
-     * readfn and it has side effects (for instance clear-on-read bits).
-     */
-    CPReadFn *raw_readfn;
-    /* Function for doing a "raw" write; used when we need to copy KVM
-     * kernel coprocessor state into userspace, or for inbound
-     * migration. This only needs to be provided if there is also a
-     * writefn and it masks out "unwritable" bits or has write-one-to-clear
-     * or similar behaviour.
-     */
-    CPWriteFn *raw_writefn;
-    /* Function for resetting the register. If NULL, then reset will be done
-     * by writing resetvalue to the field specified in fieldoffset. If
-     * fieldoffset is 0 then no reset will be done.
-     */
-    CPResetFn *resetfn;
-
-    /*
-     * "Original" writefn and readfn.
-     * For ARMv8.1-VHE register aliases, we overwrite the read/write
-     * accessor functions of various EL1/EL0 to perform the runtime
-     * check for which sysreg should actually be modified, and then
-     * forwards the operation. Before overwriting the accessors,
-     * the original function is copied here, so that accesses that
-     * really do go to the EL1/EL0 version proceed normally.
-     * (The corresponding EL2 register is linked via opaque.)
-     */
-    CPReadFn *orig_readfn;
-    CPWriteFn *orig_writefn;
-};
-
-/* Macros which are lvalues for the field in CPUARMState for the
- * ARMCPRegInfo *ri.
- */
-#define CPREG_FIELD32(env, ri) \
-    (*(uint32_t *)((char *)(env) + (ri)->fieldoffset))
-#define CPREG_FIELD64(env, ri) \
-    (*(uint64_t *)((char *)(env) + (ri)->fieldoffset))
-
-#define REGINFO_SENTINEL { .type = ARM_CP_SENTINEL }
-
-void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
-                                    const ARMCPRegInfo *regs, void *opaque);
-void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
-                                       const ARMCPRegInfo *regs, void *opaque);
-static inline void define_arm_cp_regs(ARMCPU *cpu, const ARMCPRegInfo *regs)
-{
-    define_arm_cp_regs_with_opaque(cpu, regs, 0);
-}
-static inline void define_one_arm_cp_reg(ARMCPU *cpu, const ARMCPRegInfo *regs)
-{
-    define_one_arm_cp_reg_with_opaque(cpu, regs, 0);
-}
-const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp);
-
-/*
- * Definition of an ARM co-processor register as viewed from
- * userspace. This is used for presenting sanitised versions of
- * registers to userspace when emulating the Linux AArch64 CPU
- * ID/feature ABI (advertised as HWCAP_CPUID).
- */
-typedef struct ARMCPRegUserSpaceInfo {
-    /* Name of register */
-    const char *name;
-
-    /* Is the name actually a glob pattern */
-    bool is_glob;
-
-    /* Only some bits are exported to user space */
-    uint64_t exported_bits;
-
-    /* Fixed bits are applied after the mask */
-    uint64_t fixed_bits;
-} ARMCPRegUserSpaceInfo;
-
-#define REGUSERINFO_SENTINEL { .name = NULL }
-
-void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods);
-
-/* CPWriteFn that can be used to implement writes-ignored behaviour */
-void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri,
-                         uint64_t value);
-/* CPReadFn that can be used for read-as-zero behaviour */
-uint64_t arm_cp_read_zero(CPUARMState *env, const ARMCPRegInfo *ri);
-
-/* CPResetFn that does nothing, for use if no reset is required even
- * if fieldoffset is non zero.
- */
-void arm_cp_reset_ignore(CPUARMState *env, const ARMCPRegInfo *opaque);
-
-/* Return true if this reginfo struct's field in the cpu state struct
- * is 64 bits wide.
- */
-static inline bool cpreg_field_is_64bit(const ARMCPRegInfo *ri)
-{
-    return (ri->state == ARM_CP_STATE_AA64) || (ri->type & ARM_CP_64BIT);
-}
-
-static inline bool cp_access_ok(int current_el,
-                                const ARMCPRegInfo *ri, int isread)
-{
-    return (ri->access >> ((current_el * 2) + isread)) & 1;
-}
-
-/* Raw read of a coprocessor register (as needed for migration, etc) */
-uint64_t read_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri);
-
 /**
  * write_list_to_cpustate
  * @cpu: ARMCPU
diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/pxa2xx.c
+++ b/hw/arm/pxa2xx.c
838
@@ -XXX,XX +XXX,XX @@
40
ARM_FEATURE_CBAR, /* has cp15 CBAR */
839
#include "qemu/cutils.h"
41
ARM_FEATURE_CRC, /* ARMv8 CRC instructions */
840
#include "qemu/log.h"
42
ARM_FEATURE_CBAR_RO, /* has cp15 CBAR and it is read-only */
841
#include "qom/object.h"
43
ARM_FEATURE_EL2, /* has EL2 Virtualization support */
842
+#include "target/arm/cpregs.h"
44
ARM_FEATURE_EL3, /* has EL3 Secure monitor support */
843
45
- ARM_FEATURE_V8_SHA1, /* implements SHA1 part of v8 Crypto Extensions */
844
static struct {
46
- ARM_FEATURE_V8_SHA256, /* implements SHA256 part of v8 Crypto Extensions */
845
hwaddr io_base;
47
- ARM_FEATURE_V8_PMULL, /* implements PMULL part of v8 Crypto Extensions */
846
diff --git a/hw/arm/pxa2xx_pic.c b/hw/arm/pxa2xx_pic.c
48
ARM_FEATURE_THUMB_DSP, /* DSP insns supported in the Thumb encodings */
847
index XXXXXXX..XXXXXXX 100644
49
ARM_FEATURE_PMU, /* has PMU support */
848
--- a/hw/arm/pxa2xx_pic.c
50
ARM_FEATURE_VBAR, /* has cp15 VBAR */
849
+++ b/hw/arm/pxa2xx_pic.c
51
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
850
@@ -XXX,XX +XXX,XX @@
52
ARM_FEATURE_JAZELLE, /* has (trivial) Jazelle implementation */
851
#include "hw/sysbus.h"
53
ARM_FEATURE_SVE, /* has Scalable Vector Extension */
852
#include "migration/vmstate.h"
54
- ARM_FEATURE_V8_SHA512, /* implements SHA512 part of v8 Crypto Extensions */
853
#include "qom/object.h"
55
- ARM_FEATURE_V8_SHA3, /* implements SHA3 part of v8 Crypto Extensions */
854
+#include "target/arm/cpregs.h"
56
- ARM_FEATURE_V8_SM3, /* implements SM3 part of v8 Crypto Extensions */
855
57
- ARM_FEATURE_V8_SM4, /* implements SM4 part of v8 Crypto Extensions */
856
#define ICIP    0x00    /* Interrupt Controller IRQ Pending register */
58
- ARM_FEATURE_V8_ATOMICS, /* ARMv8.1-Atomics feature */
857
#define ICMR    0x04    /* Interrupt Controller Mask register */
59
- ARM_FEATURE_V8_RDM, /* implements v8.1 simd round multiply */
858
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
60
- ARM_FEATURE_V8_DOTPROD, /* implements v8.2 simd dot product */
859
index XXXXXXX..XXXXXXX 100644
61
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
860
--- a/hw/intc/arm_gicv3_cpuif.c
62
- ARM_FEATURE_V8_FCMA, /* has complex number part of v8.3 extensions. */
861
+++ b/hw/intc/arm_gicv3_cpuif.c
63
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
862
@@ -XXX,XX +XXX,XX @@
64
};
863
#include "gicv3_internal.h"
65
864
#include "hw/irq.h"
66
@@ -XXX,XX +XXX,XX @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
865
#include "cpu.h"
67
/* Shared between translate-sve.c and sve_helper.c. */
866
+#include "target/arm/cpregs.h"
68
extern const uint64_t pred_esz_masks[4];
867
69
868
/*
70
+/*
869
* Special case return value from hppvi_index(); must be larger than
71
+ * 32-bit feature tests via id registers.
870
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
72
+ */
871
index XXXXXXX..XXXXXXX 100644
73
+static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
872
--- a/hw/intc/arm_gicv3_kvm.c
74
+{
873
+++ b/hw/intc/arm_gicv3_kvm.c
75
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
874
@@ -XXX,XX +XXX,XX @@
76
+}
875
#include "vgic_common.h"
77
+
876
#include "migration/blocker.h"
78
+static inline bool isar_feature_aa32_pmull(const ARMISARegisters *id)
877
#include "qom/object.h"
79
+{
878
+#include "target/arm/cpregs.h"
80
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) > 1;
879
+
81
+}
880
82
+
881
#ifdef DEBUG_GICV3_KVM
83
+static inline bool isar_feature_aa32_sha1(const ARMISARegisters *id)
882
#define DPRINTF(fmt, ...) \
84
+{
85
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA1) != 0;
86
+}
87
+
88
+static inline bool isar_feature_aa32_sha2(const ARMISARegisters *id)
89
+{
90
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA2) != 0;
91
+}
92
+
93
+static inline bool isar_feature_aa32_crc32(const ARMISARegisters *id)
94
+{
95
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, CRC32) != 0;
96
+}
97
+
98
+static inline bool isar_feature_aa32_rdm(const ARMISARegisters *id)
99
+{
100
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, RDM) != 0;
101
+}
102
+
103
+static inline bool isar_feature_aa32_vcma(const ARMISARegisters *id)
104
+{
105
+ return FIELD_EX32(id->id_isar5, ID_ISAR5, VCMA) != 0;
106
+}
107
+
108
+static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
109
+{
110
+ return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
111
+}
112
+
113
+/*
114
+ * 64-bit feature tests via id registers.
115
+ */
116
+static inline bool isar_feature_aa64_aes(const ARMISARegisters *id)
117
+{
118
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) != 0;
119
+}
120
+
121
+static inline bool isar_feature_aa64_pmull(const ARMISARegisters *id)
122
+{
123
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) > 1;
124
+}
125
+
126
+static inline bool isar_feature_aa64_sha1(const ARMISARegisters *id)
127
+{
128
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA1) != 0;
129
+}
130
+
131
+static inline bool isar_feature_aa64_sha256(const ARMISARegisters *id)
132
+{
133
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) != 0;
134
+}
135
+
136
+static inline bool isar_feature_aa64_sha512(const ARMISARegisters *id)
137
+{
138
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) > 1;
139
+}
140
+
141
+static inline bool isar_feature_aa64_crc32(const ARMISARegisters *id)
142
+{
143
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, CRC32) != 0;
144
+}
145
+
146
+static inline bool isar_feature_aa64_atomics(const ARMISARegisters *id)
147
+{
148
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, ATOMIC) != 0;
149
+}
150
+
151
+static inline bool isar_feature_aa64_rdm(const ARMISARegisters *id)
152
+{
153
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, RDM) != 0;
154
+}
155
+
156
+static inline bool isar_feature_aa64_sha3(const ARMISARegisters *id)
157
+{
158
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA3) != 0;
159
+}
160
+
161
+static inline bool isar_feature_aa64_sm3(const ARMISARegisters *id)
162
+{
163
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM3) != 0;
164
+}
165
+
166
+static inline bool isar_feature_aa64_sm4(const ARMISARegisters *id)
167
+{
168
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM4) != 0;
169
+}
170
+
171
+static inline bool isar_feature_aa64_dp(const ARMISARegisters *id)
172
+{
173
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, DP) != 0;
174
+}
175
+
176
+static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
177
+{
178
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
179
+}
180
+
181
+/*
182
+ * Forward to the above feature tests given an ARMCPU pointer.
183
+ */
184
+#define cpu_isar_feature(name, cpu) \
185
+ ({ ARMCPU *cpu_ = (cpu); isar_feature_##name(&cpu_->isar); })
186
+
187
#endif
188
diff --git a/target/arm/translate.h b/target/arm/translate.h
189
index XXXXXXX..XXXXXXX 100644
190
--- a/target/arm/translate.h
191
+++ b/target/arm/translate.h
192
@@ -XXX,XX +XXX,XX @@
193
/* internal defines */
194
typedef struct DisasContext {
195
DisasContextBase base;
196
+ const ARMISARegisters *isar;
197
198
target_ulong pc;
199
target_ulong page_start;
200
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
201
return ret;
202
}
203
204
+/*
205
+ * Forward to the isar_feature_* tests given a DisasContext pointer.
206
+ */
207
+#define dc_isar_feature(name, ctx) \
208
+ ({ DisasContext *ctx_ = (ctx); isar_feature_##name(ctx_->isar); })
209
+
210
#endif /* TARGET_ARM_TRANSLATE_H */
211
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
212
index XXXXXXX..XXXXXXX 100644
213
--- a/linux-user/elfload.c
214
+++ b/linux-user/elfload.c
215
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
216
/* probe for the extra features */
217
#define GET_FEATURE(feat, hwcap) \
218
do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
219
+
220
+#define GET_FEATURE_ID(feat, hwcap) \
221
+ do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
222
+
223
/* EDSP is in v5TE and above, but all our v5 CPUs are v5TE */
224
GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
225
GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
226
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap2(void)
227
ARMCPU *cpu = ARM_CPU(thread_cpu);
228
uint32_t hwcaps = 0;
229
230
- GET_FEATURE(ARM_FEATURE_V8_AES, ARM_HWCAP2_ARM_AES);
231
- GET_FEATURE(ARM_FEATURE_V8_PMULL, ARM_HWCAP2_ARM_PMULL);
232
- GET_FEATURE(ARM_FEATURE_V8_SHA1, ARM_HWCAP2_ARM_SHA1);
233
- GET_FEATURE(ARM_FEATURE_V8_SHA256, ARM_HWCAP2_ARM_SHA2);
234
- GET_FEATURE(ARM_FEATURE_CRC, ARM_HWCAP2_ARM_CRC32);
235
+ GET_FEATURE_ID(aa32_aes, ARM_HWCAP2_ARM_AES);
236
+ GET_FEATURE_ID(aa32_pmull, ARM_HWCAP2_ARM_PMULL);
237
+ GET_FEATURE_ID(aa32_sha1, ARM_HWCAP2_ARM_SHA1);
238
+ GET_FEATURE_ID(aa32_sha2, ARM_HWCAP2_ARM_SHA2);
239
+ GET_FEATURE_ID(aa32_crc32, ARM_HWCAP2_ARM_CRC32);
240
return hwcaps;
241
}
242
243
#undef GET_FEATURE
244
+#undef GET_FEATURE_ID
245
246
#else
247
/* 64 bit ARM definitions */
248
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
249
/* probe for the extra features */
250
#define GET_FEATURE(feat, hwcap) \
251
do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
252
- GET_FEATURE(ARM_FEATURE_V8_AES, ARM_HWCAP_A64_AES);
253
- GET_FEATURE(ARM_FEATURE_V8_PMULL, ARM_HWCAP_A64_PMULL);
254
- GET_FEATURE(ARM_FEATURE_V8_SHA1, ARM_HWCAP_A64_SHA1);
255
- GET_FEATURE(ARM_FEATURE_V8_SHA256, ARM_HWCAP_A64_SHA2);
256
- GET_FEATURE(ARM_FEATURE_CRC, ARM_HWCAP_A64_CRC32);
257
- GET_FEATURE(ARM_FEATURE_V8_SHA3, ARM_HWCAP_A64_SHA3);
258
- GET_FEATURE(ARM_FEATURE_V8_SM3, ARM_HWCAP_A64_SM3);
259
- GET_FEATURE(ARM_FEATURE_V8_SM4, ARM_HWCAP_A64_SM4);
260
- GET_FEATURE(ARM_FEATURE_V8_SHA512, ARM_HWCAP_A64_SHA512);
261
+#define GET_FEATURE_ID(feat, hwcap) \
262
+ do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
263
+
264
+ GET_FEATURE_ID(aa64_aes, ARM_HWCAP_A64_AES);
265
+ GET_FEATURE_ID(aa64_pmull, ARM_HWCAP_A64_PMULL);
266
+ GET_FEATURE_ID(aa64_sha1, ARM_HWCAP_A64_SHA1);
267
+ GET_FEATURE_ID(aa64_sha256, ARM_HWCAP_A64_SHA2);
268
+ GET_FEATURE_ID(aa64_sha512, ARM_HWCAP_A64_SHA512);
269
+ GET_FEATURE_ID(aa64_crc32, ARM_HWCAP_A64_CRC32);
270
+ GET_FEATURE_ID(aa64_sha3, ARM_HWCAP_A64_SHA3);
271
+ GET_FEATURE_ID(aa64_sm3, ARM_HWCAP_A64_SM3);
272
+ GET_FEATURE_ID(aa64_sm4, ARM_HWCAP_A64_SM4);
273
GET_FEATURE(ARM_FEATURE_V8_FP16,
274
ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
275
- GET_FEATURE(ARM_FEATURE_V8_ATOMICS, ARM_HWCAP_A64_ATOMICS);
276
- GET_FEATURE(ARM_FEATURE_V8_RDM, ARM_HWCAP_A64_ASIMDRDM);
277
- GET_FEATURE(ARM_FEATURE_V8_DOTPROD, ARM_HWCAP_A64_ASIMDDP);
278
- GET_FEATURE(ARM_FEATURE_V8_FCMA, ARM_HWCAP_A64_FCMA);
279
+ GET_FEATURE_ID(aa64_atomics, ARM_HWCAP_A64_ATOMICS);
280
+ GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
281
+ GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
282
+ GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
283
GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
284
+
285
#undef GET_FEATURE
286
+#undef GET_FEATURE_ID
287
288
return hwcaps;
289
}
290
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
883
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
291
index XXXXXXX..XXXXXXX 100644
884
index XXXXXXX..XXXXXXX 100644
292
--- a/target/arm/cpu.c
885
--- a/target/arm/cpu.c
293
+++ b/target/arm/cpu.c
886
+++ b/target/arm/cpu.c
294
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
887
@@ -XXX,XX +XXX,XX @@
295
cortex_a15_initfn(obj);
888
#include "kvm_arm.h"
296
#ifdef CONFIG_USER_ONLY
889
#include "disas/capstone.h"
297
/* We don't set these in system emulation mode for the moment,
890
#include "fpu/softfloat.h"
298
- * since we don't correctly set the ID registers to advertise them,
891
+#include "cpregs.h"
299
+ * since we don't correctly set (all of) the ID registers to
892
300
+ * advertise them.
893
static void arm_cpu_set_pc(CPUState *cs, vaddr value)
301
*/
894
{
302
set_feature(&cpu->env, ARM_FEATURE_V8);
303
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
304
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
305
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
306
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
307
- set_feature(&cpu->env, ARM_FEATURE_CRC);
308
- set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
309
- set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
310
- set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
311
+ {
312
+ uint32_t t;
313
+
314
+ t = cpu->isar.id_isar5;
315
+ t = FIELD_DP32(t, ID_ISAR5, AES, 2);
316
+ t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
317
+ t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
318
+ t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
319
+ t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
320
+ t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
321
+ cpu->isar.id_isar5 = t;
322
+
323
+ t = cpu->isar.id_isar6;
324
+ t = FIELD_DP32(t, ID_ISAR6, DP, 1);
325
+ cpu->isar.id_isar6 = t;
326
+ }
327
#endif
328
}
329
}
330
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
895
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
331
index XXXXXXX..XXXXXXX 100644
896
index XXXXXXX..XXXXXXX 100644
332
--- a/target/arm/cpu64.c
897
--- a/target/arm/cpu64.c
333
+++ b/target/arm/cpu64.c
898
+++ b/target/arm/cpu64.c
334
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
899
@@ -XXX,XX +XXX,XX @@
335
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
900
#include "hvf_arm.h"
336
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
901
#include "qapi/visitor.h"
337
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
902
#include "hw/qdev-properties.h"
338
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
903
+#include "cpregs.h"
339
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
904
340
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
905
341
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
906
#ifndef CONFIG_USER_ONLY
342
- set_feature(&cpu->env, ARM_FEATURE_CRC);
907
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
343
set_feature(&cpu->env, ARM_FEATURE_EL2);
908
index XXXXXXX..XXXXXXX 100644
344
set_feature(&cpu->env, ARM_FEATURE_EL3);
909
--- a/target/arm/cpu_tcg.c
345
set_feature(&cpu->env, ARM_FEATURE_PMU);
910
+++ b/target/arm/cpu_tcg.c
346
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
911
@@ -XXX,XX +XXX,XX @@
347
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
912
#if !defined(CONFIG_USER_ONLY)
348
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
913
#include "hw/boards.h"
349
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
914
#endif
350
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
915
+#include "cpregs.h"
351
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
916
352
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
917
/* CPU models. These are not needed for the AArch64 linux-user build. */
353
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
918
#if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64)
354
- set_feature(&cpu->env, ARM_FEATURE_CRC);
919
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
355
set_feature(&cpu->env, ARM_FEATURE_EL2);
920
index XXXXXXX..XXXXXXX 100644
356
set_feature(&cpu->env, ARM_FEATURE_EL3);
921
--- a/target/arm/gdbstub.c
357
set_feature(&cpu->env, ARM_FEATURE_PMU);
922
+++ b/target/arm/gdbstub.c
358
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
923
@@ -XXX,XX +XXX,XX @@
359
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
924
*/
360
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
925
#include "qemu/osdep.h"
361
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
926
#include "cpu.h"
362
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
927
-#include "internals.h"
363
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
928
#include "exec/gdbstub.h"
364
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
929
+#include "internals.h"
365
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
930
+#include "cpregs.h"
366
- set_feature(&cpu->env, ARM_FEATURE_CRC);
931
367
set_feature(&cpu->env, ARM_FEATURE_EL2);
932
typedef struct RegisterSysregXmlParam {
368
set_feature(&cpu->env, ARM_FEATURE_EL3);
933
CPUState *cs;
369
set_feature(&cpu->env, ARM_FEATURE_PMU);
934
diff --git a/target/arm/helper.c b/target/arm/helper.c
370
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
935
index XXXXXXX..XXXXXXX 100644
371
if (kvm_enabled()) {
936
--- a/target/arm/helper.c
372
kvm_arm_set_cpu_features_from_host(cpu);
937
+++ b/target/arm/helper.c
373
} else {
938
@@ -XXX,XX +XXX,XX @@
374
+ uint64_t t;
939
#include "exec/cpu_ldst.h"
375
+ uint32_t u;
940
#include "semihosting/common-semi.h"
376
aarch64_a57_initfn(obj);
941
#endif
377
+
942
+#include "cpregs.h"
378
+ t = cpu->isar.id_aa64isar0;
943
379
+ t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
944
#define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */
380
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
945
#define PMCR_NUM_COUNTERS 4 /* QEMU IMPDEF choice */
381
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */
946
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
382
+ t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
947
index XXXXXXX..XXXXXXX 100644
383
+ t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2);
948
--- a/target/arm/op_helper.c
384
+ t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1);
949
+++ b/target/arm/op_helper.c
385
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1);
950
@@ -XXX,XX +XXX,XX @@
386
+ t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1);
951
#include "internals.h"
387
+ t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
952
#include "exec/exec-all.h"
388
+ t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
953
#include "exec/cpu_ldst.h"
389
+ cpu->isar.id_aa64isar0 = t;
954
+#include "cpregs.h"
390
+
955
391
+ t = cpu->isar.id_aa64isar1;
956
#define SIGNBIT (uint32_t)0x80000000
392
+ t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
957
#define SIGNBIT64 ((uint64_t)1 << 63)
393
+ cpu->isar.id_aa64isar1 = t;
394
+
395
+ /* Replicate the same data to the 32-bit id registers. */
396
+ u = cpu->isar.id_isar5;
397
+ u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
398
+ u = FIELD_DP32(u, ID_ISAR5, SHA1, 1);
399
+ u = FIELD_DP32(u, ID_ISAR5, SHA2, 1);
400
+ u = FIELD_DP32(u, ID_ISAR5, CRC32, 1);
401
+ u = FIELD_DP32(u, ID_ISAR5, RDM, 1);
402
+ u = FIELD_DP32(u, ID_ISAR5, VCMA, 1);
403
+ cpu->isar.id_isar5 = u;
404
+
405
+ u = cpu->isar.id_isar6;
406
+ u = FIELD_DP32(u, ID_ISAR6, DP, 1);
407
+ cpu->isar.id_isar6 = u;
408
+
409
#ifdef CONFIG_USER_ONLY
410
/* We don't set these in system emulation mode for the moment,
411
* since we don't correctly set the ID registers to advertise them,
412
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
413
* whereas the architecture requires them to be present in both if
414
* present in either.
415
*/
416
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA512);
417
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA3);
418
- set_feature(&cpu->env, ARM_FEATURE_V8_SM3);
419
- set_feature(&cpu->env, ARM_FEATURE_V8_SM4);
420
- set_feature(&cpu->env, ARM_FEATURE_V8_ATOMICS);
421
- set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
422
- set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
423
set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
424
- set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
425
set_feature(&cpu->env, ARM_FEATURE_SVE);
426
/* For usermode -cpu max we can use a larger and more efficient DCZ
427
* blocksize since we don't have to follow what the hardware does.
428
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
958
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
429
index XXXXXXX..XXXXXXX 100644
959
index XXXXXXX..XXXXXXX 100644
430
--- a/target/arm/translate-a64.c
960
--- a/target/arm/translate-a64.c
431
+++ b/target/arm/translate-a64.c
961
+++ b/target/arm/translate-a64.c
432
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
962
@@ -XXX,XX +XXX,XX @@
433
}
963
#include "translate.h"
434
if (rt2 == 31
964
#include "internals.h"
435
&& ((rt | rs) & 1) == 0
965
#include "qemu/host-utils.h"
436
- && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
966
-
437
+ && dc_isar_feature(aa64_atomics, s)) {
967
#include "semihosting/semihost.h"
438
/* CASP / CASPL */
968
#include "exec/gen-icount.h"
439
gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
969
-
440
return;
970
#include "exec/helper-proto.h"
441
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
971
#include "exec/helper-gen.h"
442
}
972
#include "exec/log.h"
443
if (rt2 == 31
973
-
444
&& ((rt | rs) & 1) == 0
974
+#include "cpregs.h"
445
- && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
975
#include "translate-a64.h"
446
+ && dc_isar_feature(aa64_atomics, s)) {
976
#include "qemu/atomic128.h"
447
/* CASPA / CASPAL */
448
gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
449
return;
450
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
451
case 0xb: /* CASL */
452
case 0xe: /* CASA */
453
case 0xf: /* CASAL */
454
- if (rt2 == 31 && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
455
+ if (rt2 == 31 && dc_isar_feature(aa64_atomics, s)) {
456
gen_compare_and_swap(s, rs, rt, rn, size);
457
return;
458
}
459
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
460
int rs = extract32(insn, 16, 5);
461
int rn = extract32(insn, 5, 5);
462
int o3_opc = extract32(insn, 12, 4);
463
- int feature = ARM_FEATURE_V8_ATOMICS;
464
TCGv_i64 tcg_rn, tcg_rs;
465
AtomicThreeOpFn *fn;
466
467
- if (is_vector) {
468
+ if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
469
unallocated_encoding(s);
470
return;
471
}
472
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
473
unallocated_encoding(s);
474
return;
475
}
476
- if (!arm_dc_feature(s, feature)) {
477
- unallocated_encoding(s);
478
- return;
479
- }
480
481
if (rn == 31) {
482
gen_check_sp_alignment(s);
483
@@ -XXX,XX +XXX,XX @@ static void handle_crc32(DisasContext *s,
484
TCGv_i64 tcg_acc, tcg_val;
485
TCGv_i32 tcg_bytes;
486
487
- if (!arm_dc_feature(s, ARM_FEATURE_CRC)
488
+ if (!dc_isar_feature(aa64_crc32, s)
489
|| (sf == 1 && sz != 3)
490
|| (sf == 0 && sz == 3)) {
491
unallocated_encoding(s);
492
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
493
bool u = extract32(insn, 29, 1);
494
TCGv_i32 ele1, ele2, ele3;
495
TCGv_i64 res;
496
- int feature;
497
+ bool feature;
498
499
switch (u * 16 + opcode) {
500
case 0x10: /* SQRDMLAH (vector) */
501
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
502
unallocated_encoding(s);
503
return;
504
}
505
- feature = ARM_FEATURE_V8_RDM;
506
+ feature = dc_isar_feature(aa64_rdm, s);
507
break;
508
default:
509
unallocated_encoding(s);
510
return;
511
}
512
- if (!arm_dc_feature(s, feature)) {
513
+ if (!feature) {
514
unallocated_encoding(s);
515
return;
516
}
517
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
518
return;
519
}
520
if (size == 3) {
521
- if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
522
+ if (!dc_isar_feature(aa64_pmull, s)) {
523
unallocated_encoding(s);
524
return;
525
}
526
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
527
int size = extract32(insn, 22, 2);
528
bool u = extract32(insn, 29, 1);
529
bool is_q = extract32(insn, 30, 1);
530
- int feature, rot;
531
+ bool feature;
532
+ int rot;
533
534
switch (u * 16 + opcode) {
535
case 0x10: /* SQRDMLAH (vector) */
536
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
537
unallocated_encoding(s);
538
return;
539
}
540
- feature = ARM_FEATURE_V8_RDM;
541
+ feature = dc_isar_feature(aa64_rdm, s);
542
break;
543
case 0x02: /* SDOT (vector) */
544
case 0x12: /* UDOT (vector) */
545
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
546
unallocated_encoding(s);
547
return;
548
}
549
- feature = ARM_FEATURE_V8_DOTPROD;
550
+ feature = dc_isar_feature(aa64_dp, s);
551
break;
552
case 0x18: /* FCMLA, #0 */
553
case 0x19: /* FCMLA, #90 */
554
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
555
unallocated_encoding(s);
556
return;
557
}
558
- feature = ARM_FEATURE_V8_FCMA;
559
+ feature = dc_isar_feature(aa64_fcma, s);
560
break;
561
default:
562
unallocated_encoding(s);
563
return;
564
}
565
- if (!arm_dc_feature(s, feature)) {
566
+ if (!feature) {
567
unallocated_encoding(s);
568
return;
569
}
570
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
571
break;
572
case 0x1d: /* SQRDMLAH */
573
case 0x1f: /* SQRDMLSH */
574
- if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
575
+ if (!dc_isar_feature(aa64_rdm, s)) {
576
unallocated_encoding(s);
577
return;
578
}
579
break;
580
case 0x0e: /* SDOT */
581
case 0x1e: /* UDOT */
582
- if (size != MO_32 || !arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
583
+ if (size != MO_32 || !dc_isar_feature(aa64_dp, s)) {
584
unallocated_encoding(s);
585
return;
586
}
587
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
588
case 0x13: /* FCMLA #90 */
589
case 0x15: /* FCMLA #180 */
590
case 0x17: /* FCMLA #270 */
591
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
592
+ if (!dc_isar_feature(aa64_fcma, s)) {
593
unallocated_encoding(s);
594
return;
595
}
596
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
597
TCGv_i32 tcg_decrypt;
598
CryptoThreeOpIntFn *genfn;
599
600
- if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
601
- || size != 0) {
602
+ if (!dc_isar_feature(aa64_aes, s) || size != 0) {
603
unallocated_encoding(s);
604
return;
605
}
606
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
607
int rd = extract32(insn, 0, 5);
608
CryptoThreeOpFn *genfn;
609
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
610
- int feature = ARM_FEATURE_V8_SHA256;
611
+ bool feature;
612
613
if (size != 0) {
614
unallocated_encoding(s);
615
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
616
case 2: /* SHA1M */
617
case 3: /* SHA1SU0 */
618
genfn = NULL;
619
- feature = ARM_FEATURE_V8_SHA1;
620
+ feature = dc_isar_feature(aa64_sha1, s);
621
break;
622
case 4: /* SHA256H */
623
genfn = gen_helper_crypto_sha256h;
624
+ feature = dc_isar_feature(aa64_sha256, s);
625
break;
626
case 5: /* SHA256H2 */
627
genfn = gen_helper_crypto_sha256h2;
628
+ feature = dc_isar_feature(aa64_sha256, s);
629
break;
630
case 6: /* SHA256SU1 */
631
genfn = gen_helper_crypto_sha256su1;
632
+ feature = dc_isar_feature(aa64_sha256, s);
633
break;
634
default:
635
unallocated_encoding(s);
636
return;
637
}
638
639
- if (!arm_dc_feature(s, feature)) {
640
+ if (!feature) {
641
unallocated_encoding(s);
642
return;
643
}
644
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
645
int rn = extract32(insn, 5, 5);
646
int rd = extract32(insn, 0, 5);
647
CryptoTwoOpFn *genfn;
648
- int feature;
649
+ bool feature;
650
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
651
652
if (size != 0) {
653
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
654
655
switch (opcode) {
656
case 0: /* SHA1H */
657
- feature = ARM_FEATURE_V8_SHA1;
658
+ feature = dc_isar_feature(aa64_sha1, s);
659
genfn = gen_helper_crypto_sha1h;
660
break;
661
case 1: /* SHA1SU1 */
662
- feature = ARM_FEATURE_V8_SHA1;
663
+ feature = dc_isar_feature(aa64_sha1, s);
664
genfn = gen_helper_crypto_sha1su1;
665
break;
666
case 2: /* SHA256SU0 */
667
- feature = ARM_FEATURE_V8_SHA256;
668
+ feature = dc_isar_feature(aa64_sha256, s);
669
genfn = gen_helper_crypto_sha256su0;
670
break;
671
default:
672
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
673
return;
674
}
675
676
- if (!arm_dc_feature(s, feature)) {
677
+ if (!feature) {
678
unallocated_encoding(s);
679
return;
680
}
681
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
682
int rm = extract32(insn, 16, 5);
683
int rn = extract32(insn, 5, 5);
684
int rd = extract32(insn, 0, 5);
685
- int feature;
686
+ bool feature;
687
CryptoThreeOpFn *genfn;
688
689
if (o == 0) {
690
switch (opcode) {
691
case 0: /* SHA512H */
692
- feature = ARM_FEATURE_V8_SHA512;
693
+ feature = dc_isar_feature(aa64_sha512, s);
694
genfn = gen_helper_crypto_sha512h;
695
break;
696
case 1: /* SHA512H2 */
697
- feature = ARM_FEATURE_V8_SHA512;
698
+ feature = dc_isar_feature(aa64_sha512, s);
699
genfn = gen_helper_crypto_sha512h2;
700
break;
701
case 2: /* SHA512SU1 */
702
- feature = ARM_FEATURE_V8_SHA512;
703
+ feature = dc_isar_feature(aa64_sha512, s);
704
genfn = gen_helper_crypto_sha512su1;
705
break;
706
case 3: /* RAX1 */
707
- feature = ARM_FEATURE_V8_SHA3;
708
+ feature = dc_isar_feature(aa64_sha3, s);
709
genfn = NULL;
710
break;
711
}
712
} else {
713
switch (opcode) {
714
case 0: /* SM3PARTW1 */
715
- feature = ARM_FEATURE_V8_SM3;
716
+ feature = dc_isar_feature(aa64_sm3, s);
717
genfn = gen_helper_crypto_sm3partw1;
718
break;
719
case 1: /* SM3PARTW2 */
720
- feature = ARM_FEATURE_V8_SM3;
721
+ feature = dc_isar_feature(aa64_sm3, s);
722
genfn = gen_helper_crypto_sm3partw2;
723
break;
724
case 2: /* SM4EKEY */
725
- feature = ARM_FEATURE_V8_SM4;
726
+ feature = dc_isar_feature(aa64_sm4, s);
727
genfn = gen_helper_crypto_sm4ekey;
728
break;
729
default:
730
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
731
}
732
}
733
734
- if (!arm_dc_feature(s, feature)) {
735
+ if (!feature) {
736
unallocated_encoding(s);
737
return;
738
}
739
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
740
int rn = extract32(insn, 5, 5);
741
int rd = extract32(insn, 0, 5);
742
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
743
- int feature;
744
+ bool feature;
745
CryptoTwoOpFn *genfn;
746
747
switch (opcode) {
748
case 0: /* SHA512SU0 */
749
- feature = ARM_FEATURE_V8_SHA512;
750
+ feature = dc_isar_feature(aa64_sha512, s);
751
genfn = gen_helper_crypto_sha512su0;
752
break;
753
case 1: /* SM4E */
754
- feature = ARM_FEATURE_V8_SM4;
755
+ feature = dc_isar_feature(aa64_sm4, s);
756
genfn = gen_helper_crypto_sm4e;
757
break;
758
default:
759
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
760
return;
761
}
762
763
- if (!arm_dc_feature(s, feature)) {
764
+ if (!feature) {
765
unallocated_encoding(s);
766
return;
767
}
768
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
769
int ra = extract32(insn, 10, 5);
770
int rn = extract32(insn, 5, 5);
771
int rd = extract32(insn, 0, 5);
772
- int feature;
773
+ bool feature;
774
775
switch (op0) {
776
case 0: /* EOR3 */
777
case 1: /* BCAX */
778
- feature = ARM_FEATURE_V8_SHA3;
779
+ feature = dc_isar_feature(aa64_sha3, s);
780
break;
781
case 2: /* SM3SS1 */
782
- feature = ARM_FEATURE_V8_SM3;
783
+ feature = dc_isar_feature(aa64_sm3, s);
784
break;
785
default:
786
unallocated_encoding(s);
787
return;
788
}
789
790
- if (!arm_dc_feature(s, feature)) {
791
+ if (!feature) {
792
unallocated_encoding(s);
793
return;
794
}
795
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
796
TCGv_i64 tcg_op1, tcg_op2, tcg_res[2];
797
int pass;
798
799
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA3)) {
800
+ if (!dc_isar_feature(aa64_sha3, s)) {
801
unallocated_encoding(s);
802
return;
803
}
804
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
805
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
806
TCGv_i32 tcg_imm2, tcg_opcode;
807
808
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SM3)) {
809
+ if (!dc_isar_feature(aa64_sm3, s)) {
810
unallocated_encoding(s);
811
return;
812
}
813
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
814
ARMCPU *arm_cpu = arm_env_get_cpu(env);
815
int bound;
816
817
+ dc->isar = &arm_cpu->isar;
818
dc->pc = dc->base.pc_first;
819
dc->condjmp = 0;
820
977
821
diff --git a/target/arm/translate.c b/target/arm/translate.c
978
diff --git a/target/arm/translate.c b/target/arm/translate.c
822
index XXXXXXX..XXXXXXX 100644
979
index XXXXXXX..XXXXXXX 100644
823
--- a/target/arm/translate.c
980
--- a/target/arm/translate.c
824
+++ b/target/arm/translate.c
981
+++ b/target/arm/translate.c
825
@@ -XXX,XX +XXX,XX @@ static const uint8_t neon_2rm_sizes[] = {
982
@@ -XXX,XX +XXX,XX @@
826
static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
983
#include "qemu/bitops.h"
827
int q, int rd, int rn, int rm)
984
#include "arm_ldst.h"
828
{
985
#include "semihosting/semihost.h"
829
- if (arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
986
-
830
+ if (dc_isar_feature(aa32_rdm, s)) {
987
#include "exec/helper-proto.h"
831
int opr_sz = (1 + q) * 8;
988
#include "exec/helper-gen.h"
832
tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
989
-
833
vfp_reg_offset(1, rn),
990
#include "exec/log.h"
834
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
991
+#include "cpregs.h"
835
return 1;
992
836
}
993
837
if (!u) { /* SHA-1 */
994
#define ENABLE_ARCH_4T arm_dc_feature(s, ARM_FEATURE_V4T)
838
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) {
839
+ if (!dc_isar_feature(aa32_sha1, s)) {
840
return 1;
841
}
842
ptr1 = vfp_reg_ptr(true, rd);
843
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
844
gen_helper_crypto_sha1_3reg(ptr1, ptr2, ptr3, tmp4);
845
tcg_temp_free_i32(tmp4);
846
} else { /* SHA-256 */
847
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256) || size == 3) {
848
+ if (!dc_isar_feature(aa32_sha2, s) || size == 3) {
849
return 1;
850
}
851
ptr1 = vfp_reg_ptr(true, rd);
852
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
853
if (op == 14 && size == 2) {
854
TCGv_i64 tcg_rn, tcg_rm, tcg_rd;
855
856
- if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
857
+ if (!dc_isar_feature(aa32_pmull, s)) {
858
return 1;
859
}
860
tcg_rn = tcg_temp_new_i64();
861
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
862
{
863
NeonGenThreeOpEnvFn *fn;
864
865
- if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
866
+ if (!dc_isar_feature(aa32_rdm, s)) {
867
return 1;
868
}
869
if (u && ((rd | rn) & 1)) {
870
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
871
break;
872
}
873
case NEON_2RM_AESE: case NEON_2RM_AESMC:
874
- if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
875
- || ((rm | rd) & 1)) {
876
+ if (!dc_isar_feature(aa32_aes, s) || ((rm | rd) & 1)) {
877
return 1;
878
}
879
ptr1 = vfp_reg_ptr(true, rd);
880
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
881
tcg_temp_free_i32(tmp3);
882
break;
883
case NEON_2RM_SHA1H:
884
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)
885
- || ((rm | rd) & 1)) {
886
+ if (!dc_isar_feature(aa32_sha1, s) || ((rm | rd) & 1)) {
887
return 1;
888
}
889
ptr1 = vfp_reg_ptr(true, rd);
890
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
891
}
892
/* bit 6 (q): set -> SHA256SU0, cleared -> SHA1SU1 */
893
if (q) {
894
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256)) {
895
+ if (!dc_isar_feature(aa32_sha2, s)) {
896
return 1;
897
}
898
- } else if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) {
899
+ } else if (!dc_isar_feature(aa32_sha1, s)) {
900
return 1;
901
}
902
ptr1 = vfp_reg_ptr(true, rd);
903
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
904
/* VCMLA -- 1111 110R R.1S .... .... 1000 ...0 .... */
905
int size = extract32(insn, 20, 1);
906
data = extract32(insn, 23, 2); /* rot */
907
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
908
+ if (!dc_isar_feature(aa32_vcma, s)
909
|| (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
910
return 1;
911
}
912
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
913
/* VCADD -- 1111 110R 1.0S .... .... 1000 ...0 .... */
914
int size = extract32(insn, 20, 1);
915
data = extract32(insn, 24, 1); /* rot */
916
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
917
+ if (!dc_isar_feature(aa32_vcma, s)
918
|| (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
919
return 1;
920
}
921
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
922
} else if ((insn & 0xfeb00f00) == 0xfc200d00) {
923
/* V[US]DOT -- 1111 1100 0.10 .... .... 1101 .Q.U .... */
924
bool u = extract32(insn, 4, 1);
925
- if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
926
+ if (!dc_isar_feature(aa32_dp, s)) {
927
return 1;
928
}
929
fn_gvec = u ? gen_helper_gvec_udot_b : gen_helper_gvec_sdot_b;
930
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
931
int size = extract32(insn, 23, 1);
932
int index;
933
934
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
935
+ if (!dc_isar_feature(aa32_vcma, s)) {
936
return 1;
937
}
938
if (size == 0) {
939
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
940
} else if ((insn & 0xffb00f00) == 0xfe200d00) {
941
/* V[US]DOT -- 1111 1110 0.10 .... .... 1101 .Q.U .... */
942
int u = extract32(insn, 4, 1);
943
- if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
944
+ if (!dc_isar_feature(aa32_dp, s)) {
945
return 1;
946
}
947
fn_gvec = u ? gen_helper_gvec_udot_idx_b : gen_helper_gvec_sdot_idx_b;
948
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
949
* op1 == 3 is UNPREDICTABLE but handle as UNDEFINED.
950
* Bits 8, 10 and 11 should be zero.
951
*/
952
- if (!arm_dc_feature(s, ARM_FEATURE_CRC) || op1 == 0x3 ||
953
- (c & 0xd) != 0) {
954
+ if (!dc_isar_feature(aa32_crc32, s) || op1 == 0x3 || (c & 0xd) != 0) {
955
goto illegal_op;
956
}
957
958
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
959
case 0x28:
960
case 0x29:
961
case 0x2a:
962
- if (!arm_dc_feature(s, ARM_FEATURE_CRC)) {
963
+ if (!dc_isar_feature(aa32_crc32, s)) {
964
goto illegal_op;
965
}
966
break;
967
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
968
CPUARMState *env = cs->env_ptr;
969
ARMCPU *cpu = arm_env_get_cpu(env);
970
971
+ dc->isar = &cpu->isar;
972
dc->pc = dc->base.pc_first;
973
dc->condjmp = 0;
974
975
--
995
--
976
2.19.1
996
2.25.1
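
Taken together, the hunks above converge on a single idiom for feature
tests. A minimal usage sketch, restating names that all appear in the
hunks above (shown standalone for illustration only):

    /* From an ARMCPU, e.g. when probing hwcaps in elfload.c: */
    if (cpu_isar_feature(aa64_sha512, cpu)) {
        hwcaps |= ARM_HWCAP_A64_SHA512;
    }

    /* From a DisasContext, at translate time: */
    if (!dc_isar_feature(aa64_atomics, s)) {
        unallocated_encoding(s);
        return;
    }

    /* Enabling a feature now means setting its ID register field: */
    t = cpu->isar.id_aa64isar0;
    t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2);  /* 2 advertises SHA512 too */
    cpu->isar.id_aa64isar0 = t;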
977
997
978
998
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Instead of shifts and masks, use direct loads and stores from the neon
3
Rearrange the values of the enumerators of CPAccessResult
4
register file. Mirror the iteration structure of the ARM pseudocode
4
so that we may directly extract the target EL. For the two
5
more closely. Correct the parameters of the VLD2 A2 insn.
5
special cases in access_check_cp_reg, use CPAccessResult; the new
encoding is sketched just after the diffstat below.
6
6
7
Note that this includes a bugfix for handling of the insn
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
"VLD2 (multiple 2-element structures)" -- we were using an
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
incorrect stride value.
10
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20181011205206.3552-19-richard.henderson@linaro.org
10
Message-id: 20220501055028.646596-3-richard.henderson@linaro.org
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
12
---
16
target/arm/translate.c | 170 ++++++++++++++++++-----------------------
13
target/arm/cpregs.h | 26 ++++++++++++--------
17
1 file changed, 74 insertions(+), 96 deletions(-)
14
target/arm/op_helper.c | 56 +++++++++++++++++++++---------------------
15
2 files changed, 44 insertions(+), 38 deletions(-)
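
Before the hunks, the new encoding in one place: the failure kind now
occupies the bits above bit 1 and the fixed target EL the low two bits,
so the helper can split a CPAccessResult with a mask instead of
enumerating every combination. The values below are restated from the
cpregs.h hunk, and the two extractions mirror the op_helper.c hunk
(shown standalone for illustration only):

    typedef enum CPAccessResult {
        CP_ACCESS_OK = 0,
        /* Low 2 bits: fixed target EL; 0 means the usual target EL */
        CP_ACCESS_EL_MASK = 3,
        CP_ACCESS_TRAP = (1 << 2),
        CP_ACCESS_TRAP_EL2 = CP_ACCESS_TRAP | 2,
        CP_ACCESS_TRAP_EL3 = CP_ACCESS_TRAP | 3,
        CP_ACCESS_TRAP_UNCATEGORIZED = (2 << 2),
        CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = CP_ACCESS_TRAP_UNCATEGORIZED | 2,
        CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = CP_ACCESS_TRAP_UNCATEGORIZED | 3,
    } CPAccessResult;

    CPAccessResult kind = res & ~CP_ACCESS_EL_MASK;
    int target_el = res & CP_ACCESS_EL_MASK;  /* 0: exception_target_el() */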
18
16
19
diff --git a/target/arm/translate.c b/target/arm/translate.c
17
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
20
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/translate.c
19
--- a/target/arm/cpregs.h
22
+++ b/target/arm/translate.c
20
+++ b/target/arm/cpregs.h
23
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 neon_load_reg(int reg, int pass)
21
@@ -XXX,XX +XXX,XX @@ static inline bool cptype_valid(int cptype)
24
return tmp;
22
typedef enum CPAccessResult {
25
}
23
/* Access is permitted */
26
24
CP_ACCESS_OK = 0,
27
+static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
28
+{
29
+ long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
30
+
25
+
31
+ switch (mop) {
26
+ /*
32
+ case MO_UB:
27
+ * Combined with one of the following, the low 2 bits indicate the
33
+ tcg_gen_ld8u_i64(var, cpu_env, offset);
28
+ * target exception level. If 0, the exception is taken to the usual
29
+ * target EL (EL1 or PL1 if in EL0, otherwise to the current EL).
30
+ */
31
+ CP_ACCESS_EL_MASK = 3,
32
+
33
/*
34
* Access fails due to a configurable trap or enable which would
35
* result in a categorized exception syndrome giving information about
36
* the failing instruction (ie syndrome category 0x3, 0x4, 0x5, 0x6,
37
- * 0xc or 0x18). The exception is taken to the usual target EL (EL1 or
38
- * PL1 if in EL0, otherwise to the current EL).
39
+ * 0xc or 0x18).
40
*/
41
- CP_ACCESS_TRAP = 1,
42
+ CP_ACCESS_TRAP = (1 << 2),
43
+ CP_ACCESS_TRAP_EL2 = CP_ACCESS_TRAP | 2,
44
+ CP_ACCESS_TRAP_EL3 = CP_ACCESS_TRAP | 3,
45
+
46
/*
47
* Access fails and results in an exception syndrome 0x0 ("uncategorized").
48
* Note that this is not a catch-all case -- the set of cases which may
49
* result in this failure is specifically defined by the architecture.
50
*/
51
- CP_ACCESS_TRAP_UNCATEGORIZED = 2,
52
- /* As CP_ACCESS_TRAP, but for traps directly to EL2 or EL3 */
53
- CP_ACCESS_TRAP_EL2 = 3,
54
- CP_ACCESS_TRAP_EL3 = 4,
55
- /* As CP_ACCESS_UNCATEGORIZED, but for traps directly to EL2 or EL3 */
56
- CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = 5,
57
- CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = 6,
58
+ CP_ACCESS_TRAP_UNCATEGORIZED = (2 << 2),
59
+ CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = CP_ACCESS_TRAP_UNCATEGORIZED | 2,
60
+ CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = CP_ACCESS_TRAP_UNCATEGORIZED | 3,
61
} CPAccessResult;
62
63
typedef struct ARMCPRegInfo ARMCPRegInfo;
64
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/op_helper.c
67
+++ b/target/arm/op_helper.c
68
@@ -XXX,XX +XXX,XX @@ void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome,
69
uint32_t isread)
70
{
71
const ARMCPRegInfo *ri = rip;
72
+ CPAccessResult res = CP_ACCESS_OK;
73
int target_el;
74
75
if (arm_feature(env, ARM_FEATURE_XSCALE) && ri->cp < 14
76
&& extract32(env->cp15.c15_cpar, ri->cp, 1) == 0) {
77
- raise_exception(env, EXCP_UDEF, syndrome, exception_target_el(env));
78
+ res = CP_ACCESS_TRAP;
79
+ goto fail;
80
}
81
82
/*
83
@@ -XXX,XX +XXX,XX @@ void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome,
84
mask &= ~((1 << 4) | (1 << 14));
85
86
if (env->cp15.hstr_el2 & mask) {
87
- target_el = 2;
88
- goto exept;
89
+ res = CP_ACCESS_TRAP_EL2;
90
+ goto fail;
91
}
92
}
93
94
- if (!ri->accessfn) {
95
+ if (ri->accessfn) {
96
+ res = ri->accessfn(env, ri, isread);
97
+ }
98
+ if (likely(res == CP_ACCESS_OK)) {
99
return;
100
}
101
102
- switch (ri->accessfn(env, ri, isread)) {
103
- case CP_ACCESS_OK:
104
- return;
105
+ fail:
106
+ switch (res & ~CP_ACCESS_EL_MASK) {
107
case CP_ACCESS_TRAP:
108
- target_el = exception_target_el(env);
109
- break;
110
- case CP_ACCESS_TRAP_EL2:
111
- /* Requesting a trap to EL2 when we're in EL3 is
112
- * a bug in the access function.
113
- */
114
- assert(arm_current_el(env) != 3);
115
- target_el = 2;
116
- break;
117
- case CP_ACCESS_TRAP_EL3:
118
- target_el = 3;
119
break;
120
case CP_ACCESS_TRAP_UNCATEGORIZED:
121
- target_el = exception_target_el(env);
122
- syndrome = syn_uncategorized();
123
- break;
124
- case CP_ACCESS_TRAP_UNCATEGORIZED_EL2:
125
- target_el = 2;
126
- syndrome = syn_uncategorized();
127
- break;
128
- case CP_ACCESS_TRAP_UNCATEGORIZED_EL3:
129
- target_el = 3;
130
syndrome = syn_uncategorized();
131
break;
132
default:
133
g_assert_not_reached();
134
}
135
136
-exept:
137
+ target_el = res & CP_ACCESS_EL_MASK;
138
+ switch (target_el) {
139
+ case 0:
140
+ target_el = exception_target_el(env);
34
+ break;
141
+ break;
35
+ case MO_UW:
142
+ case 2:
36
+ tcg_gen_ld16u_i64(var, cpu_env, offset);
143
+ assert(arm_current_el(env) != 3);
144
+ assert(arm_is_el2_enabled(env));
37
+ break;
145
+ break;
38
+ case MO_UL:
146
+ case 3:
39
+ tcg_gen_ld32u_i64(var, cpu_env, offset);
147
+ assert(arm_feature(env, ARM_FEATURE_EL3));
40
+ break;
41
+ case MO_Q:
42
+ tcg_gen_ld_i64(var, cpu_env, offset);
43
+ break;
148
+ break;
44
+ default:
149
+ default:
150
+ /* No "direct" traps to EL1 */
45
+ g_assert_not_reached();
151
+ g_assert_not_reached();
46
+ }
152
+ }
47
+}
48
+
153
+
49
static void neon_store_reg(int reg, int pass, TCGv_i32 var)
154
raise_exception(env, EXCP_UDEF, syndrome, target_el);
50
{
51
tcg_gen_st_i32(var, cpu_env, neon_reg_offset(reg, pass));
52
tcg_temp_free_i32(var);
53
}
155
}
54
156
55
+static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
56
+{
57
+ long offset = neon_element_offset(reg, ele, size);
58
+
59
+ switch (size) {
60
+ case MO_8:
61
+ tcg_gen_st8_i64(var, cpu_env, offset);
62
+ break;
63
+ case MO_16:
64
+ tcg_gen_st16_i64(var, cpu_env, offset);
65
+ break;
66
+ case MO_32:
67
+ tcg_gen_st32_i64(var, cpu_env, offset);
68
+ break;
69
+ case MO_64:
70
+ tcg_gen_st_i64(var, cpu_env, offset);
71
+ break;
72
+ default:
73
+ g_assert_not_reached();
74
+ }
75
+}
76
+
77
static inline void neon_load_reg64(TCGv_i64 var, int reg)
78
{
79
tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
80
@@ -XXX,XX +XXX,XX @@ static struct {
81
int interleave;
82
int spacing;
83
} const neon_ls_element_type[11] = {
84
- {4, 4, 1},
85
- {4, 4, 2},
86
+ {1, 4, 1},
87
+ {1, 4, 2},
88
{4, 1, 1},
89
- {4, 2, 1},
90
- {3, 3, 1},
91
- {3, 3, 2},
92
+ {2, 2, 2},
93
+ {1, 3, 1},
94
+ {1, 3, 2},
95
{3, 1, 1},
96
{1, 1, 1},
97
- {2, 2, 1},
98
- {2, 2, 2},
99
+ {1, 2, 1},
100
+ {1, 2, 2},
101
{2, 1, 1}
102
};
103
104
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
105
int shift;
106
int n;
107
int vec_size;
108
+ int mmu_idx;
109
+ TCGMemOp endian;
110
TCGv_i32 addr;
111
TCGv_i32 tmp;
112
TCGv_i32 tmp2;
113
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
114
rn = (insn >> 16) & 0xf;
115
rm = insn & 0xf;
116
load = (insn & (1 << 21)) != 0;
117
+ endian = s->be_data;
118
+ mmu_idx = get_mem_index(s);
119
if ((insn & (1 << 23)) == 0) {
120
/* Load store all elements. */
121
op = (insn >> 8) & 0xf;
122
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
123
nregs = neon_ls_element_type[op].nregs;
124
interleave = neon_ls_element_type[op].interleave;
125
spacing = neon_ls_element_type[op].spacing;
126
- if (size == 3 && (interleave | spacing) != 1)
127
+ if (size == 3 && (interleave | spacing) != 1) {
128
return 1;
129
+ }
130
+ tmp64 = tcg_temp_new_i64();
131
addr = tcg_temp_new_i32();
132
+ tmp2 = tcg_const_i32(1 << size);
133
load_reg_var(s, addr, rn);
134
- stride = (1 << size) * interleave;
135
for (reg = 0; reg < nregs; reg++) {
136
- if (interleave > 2 || (interleave == 2 && nregs == 2)) {
137
- load_reg_var(s, addr, rn);
138
- tcg_gen_addi_i32(addr, addr, (1 << size) * reg);
139
- } else if (interleave == 2 && nregs == 4 && reg == 2) {
140
- load_reg_var(s, addr, rn);
141
- tcg_gen_addi_i32(addr, addr, 1 << size);
142
- }
143
- if (size == 3) {
144
- tmp64 = tcg_temp_new_i64();
145
- if (load) {
146
- gen_aa32_ld64(s, tmp64, addr, get_mem_index(s));
147
- neon_store_reg64(tmp64, rd);
148
- } else {
149
- neon_load_reg64(tmp64, rd);
150
- gen_aa32_st64(s, tmp64, addr, get_mem_index(s));
151
- }
152
- tcg_temp_free_i64(tmp64);
153
- tcg_gen_addi_i32(addr, addr, stride);
154
- } else {
155
- for (pass = 0; pass < 2; pass++) {
156
- if (size == 2) {
157
- if (load) {
158
- tmp = tcg_temp_new_i32();
159
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
160
- neon_store_reg(rd, pass, tmp);
161
- } else {
162
- tmp = neon_load_reg(rd, pass);
163
- gen_aa32_st32(s, tmp, addr, get_mem_index(s));
164
- tcg_temp_free_i32(tmp);
165
- }
166
- tcg_gen_addi_i32(addr, addr, stride);
167
- } else if (size == 1) {
168
- if (load) {
169
- tmp = tcg_temp_new_i32();
170
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
171
- tcg_gen_addi_i32(addr, addr, stride);
172
- tmp2 = tcg_temp_new_i32();
173
- gen_aa32_ld16u(s, tmp2, addr, get_mem_index(s));
174
- tcg_gen_addi_i32(addr, addr, stride);
175
- tcg_gen_shli_i32(tmp2, tmp2, 16);
176
- tcg_gen_or_i32(tmp, tmp, tmp2);
177
- tcg_temp_free_i32(tmp2);
178
- neon_store_reg(rd, pass, tmp);
179
- } else {
180
- tmp = neon_load_reg(rd, pass);
181
- tmp2 = tcg_temp_new_i32();
182
- tcg_gen_shri_i32(tmp2, tmp, 16);
183
- gen_aa32_st16(s, tmp, addr, get_mem_index(s));
184
- tcg_temp_free_i32(tmp);
185
- tcg_gen_addi_i32(addr, addr, stride);
186
- gen_aa32_st16(s, tmp2, addr, get_mem_index(s));
187
- tcg_temp_free_i32(tmp2);
188
- tcg_gen_addi_i32(addr, addr, stride);
189
- }
190
- } else /* size == 0 */ {
191
- if (load) {
192
- tmp2 = NULL;
193
- for (n = 0; n < 4; n++) {
194
- tmp = tcg_temp_new_i32();
195
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
196
- tcg_gen_addi_i32(addr, addr, stride);
197
- if (n == 0) {
198
- tmp2 = tmp;
199
- } else {
200
- tcg_gen_shli_i32(tmp, tmp, n * 8);
201
- tcg_gen_or_i32(tmp2, tmp2, tmp);
202
- tcg_temp_free_i32(tmp);
203
- }
204
- }
205
- neon_store_reg(rd, pass, tmp2);
206
- } else {
207
- tmp2 = neon_load_reg(rd, pass);
208
- for (n = 0; n < 4; n++) {
209
- tmp = tcg_temp_new_i32();
210
- if (n == 0) {
211
- tcg_gen_mov_i32(tmp, tmp2);
212
- } else {
213
- tcg_gen_shri_i32(tmp, tmp2, n * 8);
214
- }
215
- gen_aa32_st8(s, tmp, addr, get_mem_index(s));
216
- tcg_temp_free_i32(tmp);
217
- tcg_gen_addi_i32(addr, addr, stride);
218
- }
219
- tcg_temp_free_i32(tmp2);
220
- }
221
+ for (n = 0; n < 8 >> size; n++) {
222
+ int xs;
223
+ for (xs = 0; xs < interleave; xs++) {
224
+ int tt = rd + reg + spacing * xs;
225
+
226
+ if (load) {
227
+ gen_aa32_ld_i64(s, tmp64, addr, mmu_idx, endian | size);
228
+ neon_store_element64(tt, n, size, tmp64);
229
+ } else {
230
+ neon_load_element64(tmp64, tt, n, size);
231
+ gen_aa32_st_i64(s, tmp64, addr, mmu_idx, endian | size);
232
}
233
+ tcg_gen_add_i32(addr, addr, tmp2);
234
}
235
}
236
- rd += spacing;
237
}
238
tcg_temp_free_i32(addr);
239
- stride = nregs * 8;
240
+ tcg_temp_free_i32(tmp2);
241
+ tcg_temp_free_i64(tmp64);
242
+ stride = nregs * interleave * 8;
243
} else {
244
size = (insn >> 10) & 3;
245
if (size == 3) {
246
--
157
--
247
2.19.1
158
2.25.1
248
159
249
160
1
The HCR.FB virtualization configuration register bit requests that
1
From: Richard Henderson <richard.henderson@linaro.org>
2
TLB maintenance, branch predictor invalidate-all and icache
3
invalidate-all operations performed in NS EL1 should be upgraded
4
from "local CPU only to "broadcast within Inner Shareable domain".
5
For QEMU we NOP the branch predictor and icache operations, so
6
we only need to upgrade the TLB invalidates (the shape of the change
is sketched after the diffstat):
7
AArch32 TLBIALL, TLBIMVA, TLBIASID, DTLBIALL, DTLBIMVA, DTLBIASID,
8
ITLBIALL, ITLBIMVA, ITLBIASID, TLBIMVAA, TLBIMVAL, TLBIMVAAL
9
AArch64 TLBI VMALLE1, TLBI VAE1, TLBI ASIDE1, TLBI VAAE1,
10
TLBI VALE1, TLBI VAALE1
11
2
3
Remove a possible source of error by removing REGINFO_SENTINEL
4
and using ARRAY_SIZE (conveniently hidden inside a macro) to
5
find the end of the set of regs being registered or modified.
6
7
The space saved by not having the extra array element reduces
8
the executable's .data.rel.ro section by about 9k.
9
10
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20220501055028.646596-4-richard.henderson@linaro.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20181012144235.19646-4-peter.maydell@linaro.org
15
---
15
---
16
target/arm/helper.c | 191 +++++++++++++++++++++++++++-----------------
16
target/arm/cpregs.h | 53 +++++++++---------
17
1 file changed, 116 insertions(+), 75 deletions(-)
17
hw/arm/pxa2xx.c | 1 -
18
hw/arm/pxa2xx_pic.c | 1 -
19
hw/intc/arm_gicv3_cpuif.c | 5 --
20
hw/intc/arm_gicv3_kvm.c | 1 -
21
target/arm/cpu64.c | 1 -
22
target/arm/cpu_tcg.c | 4 --
23
target/arm/helper.c | 111 ++++++++------------------------------
24
8 files changed, 48 insertions(+), 129 deletions(-)
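
The helper.c hunks with the TLB changes fall outside this excerpt, but
each upgraded write function takes the same shape. A sketch paraphrasing
the commit message (the guard condition and flush-helper choice here are
illustrative, not the patch's exact code):

    static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
                              uint64_t value)
    {
        CPUState *cs = CPU(arm_env_get_cpu(env));

        /* HCR_EL2.FB at NS EL1: upgrade the local invalidate to an
         * inner-shareable broadcast; otherwise flush this CPU only.
         */
        if ((env->cp15.hcr_el2 & HCR_FB)
            && arm_current_el(env) == 1 && !arm_is_secure(env)) {
            tlb_flush_all_cpus_synced(cs);
        } else {
            tlb_flush(cs);
        }
    }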
18
25
26
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/cpregs.h
29
+++ b/target/arm/cpregs.h
30
@@ -XXX,XX +XXX,XX @@
31
#define ARM_CP_NO_GDB 0x4000
32
#define ARM_CP_RAISES_EXC 0x8000
33
#define ARM_CP_NEWEL 0x10000
34
-/* Used only as a terminator for ARMCPRegInfo lists */
35
-#define ARM_CP_SENTINEL 0xfffff
36
/* Mask of only the flag bits in a type field */
37
#define ARM_CP_FLAG_MASK 0x1f0ff
38
39
@@ -XXX,XX +XXX,XX @@ enum {
40
ARM_CP_SECSTATE_NS = (1 << 1), /* bit[1]: Non-secure state register */
41
};
42
43
-/*
44
- * Return true if cptype is a valid type field. This is used to try to
45
- * catch errors where the sentinel has been accidentally left off the end
46
- * of a list of registers.
47
- */
48
-static inline bool cptype_valid(int cptype)
49
-{
50
- return ((cptype & ~ARM_CP_FLAG_MASK) == 0)
51
- || ((cptype & ARM_CP_SPECIAL) &&
52
- ((cptype & ~ARM_CP_FLAG_MASK) <= ARM_LAST_SPECIAL));
53
-}
54
-
55
/*
56
* Access rights:
57
* We define bits for Read and Write access for what rev C of the v7-AR ARM ARM
58
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
59
#define CPREG_FIELD64(env, ri) \
60
(*(uint64_t *)((char *)(env) + (ri)->fieldoffset))
61
62
-#define REGINFO_SENTINEL { .type = ARM_CP_SENTINEL }
63
+void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu, const ARMCPRegInfo *reg,
64
+ void *opaque);
65
66
-void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
67
- const ARMCPRegInfo *regs, void *opaque);
68
-void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
69
- const ARMCPRegInfo *regs, void *opaque);
70
-static inline void define_arm_cp_regs(ARMCPU *cpu, const ARMCPRegInfo *regs)
71
-{
72
- define_arm_cp_regs_with_opaque(cpu, regs, 0);
73
-}
74
static inline void define_one_arm_cp_reg(ARMCPU *cpu, const ARMCPRegInfo *regs)
75
{
76
- define_one_arm_cp_reg_with_opaque(cpu, regs, 0);
77
+ define_one_arm_cp_reg_with_opaque(cpu, regs, NULL);
78
}
79
+
80
+void define_arm_cp_regs_with_opaque_len(ARMCPU *cpu, const ARMCPRegInfo *regs,
81
+ void *opaque, size_t len);
82
+
83
+#define define_arm_cp_regs_with_opaque(CPU, REGS, OPAQUE) \
84
+ do { \
85
+ QEMU_BUILD_BUG_ON(ARRAY_SIZE(REGS) == 0); \
86
+ define_arm_cp_regs_with_opaque_len(CPU, REGS, OPAQUE, \
87
+ ARRAY_SIZE(REGS)); \
88
+ } while (0)
89
+
90
+#define define_arm_cp_regs(CPU, REGS) \
91
+ define_arm_cp_regs_with_opaque(CPU, REGS, NULL)
92
+
93
const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp);
94
95
/*
96
@@ -XXX,XX +XXX,XX @@ typedef struct ARMCPRegUserSpaceInfo {
97
uint64_t fixed_bits;
98
} ARMCPRegUserSpaceInfo;
99
100
-#define REGUSERINFO_SENTINEL { .name = NULL }
101
+void modify_arm_cp_regs_with_len(ARMCPRegInfo *regs, size_t regs_len,
102
+ const ARMCPRegUserSpaceInfo *mods,
103
+ size_t mods_len);
104
105
-void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods);
106
+#define modify_arm_cp_regs(REGS, MODS) \
107
+ do { \
108
+ QEMU_BUILD_BUG_ON(ARRAY_SIZE(REGS) == 0); \
109
+ QEMU_BUILD_BUG_ON(ARRAY_SIZE(MODS) == 0); \
110
+ modify_arm_cp_regs_with_len(REGS, ARRAY_SIZE(REGS), \
111
+ MODS, ARRAY_SIZE(MODS)); \
112
+ } while (0)
113
114
/* CPWriteFn that can be used to implement writes-ignored behaviour */
115
void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri,
116
diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
117
index XXXXXXX..XXXXXXX 100644
118
--- a/hw/arm/pxa2xx.c
119
+++ b/hw/arm/pxa2xx.c
120
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pxa_cp_reginfo[] = {
121
{ .name = "PWRMODE", .cp = 14, .crn = 7, .crm = 0, .opc1 = 0, .opc2 = 0,
122
.access = PL1_RW, .type = ARM_CP_IO,
123
.readfn = arm_cp_read_zero, .writefn = pxa2xx_pwrmode_write },
124
- REGINFO_SENTINEL
125
};
126
127
static void pxa2xx_setup_cp14(PXA2xxState *s)
128
diff --git a/hw/arm/pxa2xx_pic.c b/hw/arm/pxa2xx_pic.c
129
index XXXXXXX..XXXXXXX 100644
130
--- a/hw/arm/pxa2xx_pic.c
131
+++ b/hw/arm/pxa2xx_pic.c
132
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pxa_pic_cp_reginfo[] = {
133
REGINFO_FOR_PIC_CP("ICLR2", 8),
134
REGINFO_FOR_PIC_CP("ICFP2", 9),
135
REGINFO_FOR_PIC_CP("ICPR2", 0xa),
136
- REGINFO_SENTINEL
137
};
138
139
static const MemoryRegionOps pxa2xx_pic_ops = {
140
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
141
index XXXXXXX..XXXXXXX 100644
142
--- a/hw/intc/arm_gicv3_cpuif.c
143
+++ b/hw/intc/arm_gicv3_cpuif.c
144
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
145
.readfn = icc_igrpen1_el3_read,
146
.writefn = icc_igrpen1_el3_write,
147
},
148
- REGINFO_SENTINEL
149
};
150
151
static uint64_t ich_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
152
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_hcr_reginfo[] = {
153
.readfn = ich_vmcr_read,
154
.writefn = ich_vmcr_write,
155
},
156
- REGINFO_SENTINEL
157
};
158
159
static const ARMCPRegInfo gicv3_cpuif_ich_apxr1_reginfo[] = {
160
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_ich_apxr1_reginfo[] = {
161
.readfn = ich_ap_read,
162
.writefn = ich_ap_write,
163
},
164
- REGINFO_SENTINEL
165
};
166
167
static const ARMCPRegInfo gicv3_cpuif_ich_apxr23_reginfo[] = {
168
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_ich_apxr23_reginfo[] = {
169
.readfn = ich_ap_read,
170
.writefn = ich_ap_write,
171
},
172
- REGINFO_SENTINEL
173
};
174
175
static void gicv3_cpuif_el_change_hook(ARMCPU *cpu, void *opaque)
176
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
177
.readfn = ich_lr_read,
178
.writefn = ich_lr_write,
179
},
180
- REGINFO_SENTINEL
181
};
182
define_arm_cp_regs(cpu, lr_regset);
183
}
184
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
185
index XXXXXXX..XXXXXXX 100644
186
--- a/hw/intc/arm_gicv3_kvm.c
187
+++ b/hw/intc/arm_gicv3_kvm.c
188
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
189
*/
190
.resetfn = arm_gicv3_icc_reset,
191
},
192
- REGINFO_SENTINEL
193
};
194
195
/**
196
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
197
index XXXXXXX..XXXXXXX 100644
198
--- a/target/arm/cpu64.c
199
+++ b/target/arm/cpu64.c
200
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortex_a72_a57_a53_cp_reginfo[] = {
201
{ .name = "L2MERRSR",
202
.cp = 15, .opc1 = 3, .crm = 15,
203
.access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
204
- REGINFO_SENTINEL
205
};
206
207
static void aarch64_a57_initfn(Object *obj)
208
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
209
index XXXXXXX..XXXXXXX 100644
210
--- a/target/arm/cpu_tcg.c
211
+++ b/target/arm/cpu_tcg.c
212
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortexa8_cp_reginfo[] = {
213
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
214
{ .name = "L2AUXCR", .cp = 15, .crn = 9, .crm = 0, .opc1 = 1, .opc2 = 2,
215
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
216
- REGINFO_SENTINEL
217
};
218
219
static void cortex_a8_initfn(Object *obj)
220
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortexa9_cp_reginfo[] = {
221
.access = PL1_RW, .resetvalue = 0, .type = ARM_CP_CONST },
222
{ .name = "TLB_ATTR", .cp = 15, .crn = 15, .crm = 7, .opc1 = 5, .opc2 = 2,
223
.access = PL1_RW, .resetvalue = 0, .type = ARM_CP_CONST },
224
- REGINFO_SENTINEL
225
};
226
227
static void cortex_a9_initfn(Object *obj)
228
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortexa15_cp_reginfo[] = {
229
#endif
230
{ .name = "L2ECTLR", .cp = 15, .crn = 9, .crm = 0, .opc1 = 1, .opc2 = 3,
231
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
232
- REGINFO_SENTINEL
233
};
234
235
static void cortex_a7_initfn(Object *obj)
236
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortexr5_cp_reginfo[] = {
237
.access = PL1_RW, .type = ARM_CP_CONST },
238
{ .name = "DCACHE_INVAL", .cp = 15, .opc1 = 0, .crn = 15, .crm = 5,
239
.opc2 = 0, .access = PL1_W, .type = ARM_CP_NOP },
240
- REGINFO_SENTINEL
241
};
242
243
static void cortex_r5_initfn(Object *obj)
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
244
diff --git a/target/arm/helper.c b/target/arm/helper.c
20
index XXXXXXX..XXXXXXX 100644
245
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.c
246
--- a/target/arm/helper.c
22
+++ b/target/arm/helper.c
247
+++ b/target/arm/helper.c
23
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
248
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
24
raw_write(env, ri, value);
249
.secure = ARM_CP_SECSTATE_S,
25
}
250
.fieldoffset = offsetof(CPUARMState, cp15.contextidr_s),
26
251
.resetvalue = 0, .writefn = contextidr_write, .raw_writefn = raw_write, },
27
-static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
252
- REGINFO_SENTINEL
28
- uint64_t value)
253
};
29
-{
254
30
- /* Invalidate all (TLBIALL) */
255
static const ARMCPRegInfo not_v8_cp_reginfo[] = {
31
- ARMCPU *cpu = arm_env_get_cpu(env);
256
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v8_cp_reginfo[] = {
32
-
257
{ .name = "CACHEMAINT", .cp = 15, .crn = 7, .crm = CP_ANY,
33
- tlb_flush(CPU(cpu));
258
.opc1 = 0, .opc2 = CP_ANY, .access = PL1_W,
34
-}
259
.type = ARM_CP_NOP | ARM_CP_OVERRIDE },
35
-
260
- REGINFO_SENTINEL
36
-static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
261
};
37
- uint64_t value)
262
38
-{
263
static const ARMCPRegInfo not_v6_cp_reginfo[] = {
39
- /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
264
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v6_cp_reginfo[] = {
40
- ARMCPU *cpu = arm_env_get_cpu(env);
265
*/
41
-
266
{ .name = "WFI_v5", .cp = 15, .crn = 7, .crm = 8, .opc1 = 0, .opc2 = 2,
42
- tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
267
.access = PL1_W, .type = ARM_CP_WFI },
43
-}
268
- REGINFO_SENTINEL
44
-
269
};
45
-static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
270
46
- uint64_t value)
271
static const ARMCPRegInfo not_v7_cp_reginfo[] = {
47
-{
272
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v7_cp_reginfo[] = {
48
- /* Invalidate by ASID (TLBIASID) */
273
.opc1 = 0, .opc2 = 0, .access = PL1_RW, .type = ARM_CP_NOP },
49
- ARMCPU *cpu = arm_env_get_cpu(env);
274
{ .name = "NMRR", .cp = 15, .crn = 10, .crm = 2,
50
-
275
.opc1 = 0, .opc2 = 1, .access = PL1_RW, .type = ARM_CP_NOP },
51
- tlb_flush(CPU(cpu));
276
- REGINFO_SENTINEL
52
-}
277
};
53
-
278
54
-static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
279
static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
55
- uint64_t value)
280
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
56
-{
281
.crn = 1, .crm = 0, .opc1 = 0, .opc2 = 2, .accessfn = cpacr_access,
57
- /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
282
.access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.cpacr_el1),
58
- ARMCPU *cpu = arm_env_get_cpu(env);
283
.resetfn = cpacr_reset, .writefn = cpacr_write, .readfn = cpacr_read },
59
-
284
- REGINFO_SENTINEL
60
- tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
285
};
61
-}
286
62
-
287
typedef struct pm_event {
63
/* IS variants of TLB operations must affect all cores */
288
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
64
static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
289
{ .name = "TLBIMVAA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
65
uint64_t value)
290
.type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
66
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
291
.writefn = tlbimvaa_write },
67
tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
292
- REGINFO_SENTINEL
68
}
293
};
69
294
70
+/*
295
static const ARMCPRegInfo v7mp_cp_reginfo[] = {
71
+ * Non-IS variants of TLB operations are upgraded to
296
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7mp_cp_reginfo[] = {
72
+ * IS versions if we are at NS EL1 and HCR_EL2.FB is set to
297
{ .name = "TLBIMVAAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
73
+ * force broadcast of these operations.
298
.type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
74
+ */
299
.writefn = tlbimvaa_is_write },
75
+static bool tlb_force_broadcast(CPUARMState *env)
300
- REGINFO_SENTINEL
76
+{
301
};
77
+ return (env->cp15.hcr_el2 & HCR_FB) &&
302
78
+ arm_current_el(env) == 1 && arm_is_secure_below_el3(env);
303
static const ARMCPRegInfo pmovsset_cp_reginfo[] = {
79
+}
304
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmovsset_cp_reginfo[] = {
305
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmovsr),
306
.writefn = pmovsset_write,
307
.raw_writefn = raw_write },
308
- REGINFO_SENTINEL
309
};
310
311
static void teecr_write(CPUARMState *env, const ARMCPRegInfo *ri,
312
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo t2ee_cp_reginfo[] = {
313
{ .name = "TEEHBR", .cp = 14, .crn = 1, .crm = 0, .opc1 = 6, .opc2 = 0,
314
.access = PL0_RW, .fieldoffset = offsetof(CPUARMState, teehbr),
315
.accessfn = teehbr_access, .resetvalue = 0 },
316
- REGINFO_SENTINEL
317
};
318
319
static const ARMCPRegInfo v6k_cp_reginfo[] = {
320
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6k_cp_reginfo[] = {
321
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidrprw_s),
322
offsetoflow32(CPUARMState, cp15.tpidrprw_ns) },
323
.resetvalue = 0 },
324
- REGINFO_SENTINEL
325
};
326
327
#ifndef CONFIG_USER_ONLY
328
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
329
.fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_SEC].cval),
330
.writefn = gt_sec_cval_write, .raw_writefn = raw_write,
331
},
332
- REGINFO_SENTINEL
333
};
334
335
static CPAccessResult e2h_access(CPUARMState *env, const ARMCPRegInfo *ri,
336
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
337
.access = PL0_R, .type = ARM_CP_NO_RAW | ARM_CP_IO,
338
.readfn = gt_virt_cnt_read,
339
},
340
- REGINFO_SENTINEL
341
};
342
343
#endif
344
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vapa_cp_reginfo[] = {
345
.access = PL1_W, .accessfn = ats_access,
346
.writefn = ats_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC },
347
#endif
348
- REGINFO_SENTINEL
349
};
350
351
/* Return basic MPU access permission bits. */
352
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmsav7_cp_reginfo[] = {
353
.fieldoffset = offsetof(CPUARMState, pmsav7.rnr[M_REG_NS]),
354
.writefn = pmsav7_rgnr_write,
355
.resetfn = arm_cp_reset_ignore },
356
- REGINFO_SENTINEL
357
};
358
359
static const ARMCPRegInfo pmsav5_cp_reginfo[] = {
360
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmsav5_cp_reginfo[] = {
361
{ .name = "946_PRBS7", .cp = 15, .crn = 6, .crm = 7, .opc1 = 0,
362
.opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0,
363
.fieldoffset = offsetof(CPUARMState, cp15.c6_region[7]) },
364
- REGINFO_SENTINEL
365
};
366
367
static void vmsa_ttbcr_raw_write(CPUARMState *env, const ARMCPRegInfo *ri,
368
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_pmsa_cp_reginfo[] = {
369
.access = PL1_RW, .accessfn = access_tvm_trvm,
370
.fieldoffset = offsetof(CPUARMState, cp15.far_el[1]),
371
.resetvalue = 0, },
372
- REGINFO_SENTINEL
373
};
374
375
static const ARMCPRegInfo vmsa_cp_reginfo[] = {
376
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
377
/* No offsetoflow32 -- pass the entire TCR to writefn/raw_writefn. */
378
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.tcr_el[3]),
379
offsetof(CPUARMState, cp15.tcr_el[1])} },
380
- REGINFO_SENTINEL
381
};
382
383
/* Note that unlike TTBCR, writing to TTBCR2 does not require flushing
384
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo omap_cp_reginfo[] = {
385
{ .name = "C9", .cp = 15, .crn = 9,
386
.crm = CP_ANY, .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_RW,
387
.type = ARM_CP_CONST | ARM_CP_OVERRIDE, .resetvalue = 0 },
388
- REGINFO_SENTINEL
389
};
390
391
static void xscale_cpar_write(CPUARMState *env, const ARMCPRegInfo *ri,
392
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo xscale_cp_reginfo[] = {
393
{ .name = "XSCALE_UNLOCK_DCACHE",
394
.cp = 15, .opc1 = 0, .crn = 9, .crm = 2, .opc2 = 1,
395
.access = PL1_W, .type = ARM_CP_NOP },
396
- REGINFO_SENTINEL
397
};
398
399
static const ARMCPRegInfo dummy_c15_cp_reginfo[] = {
400
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dummy_c15_cp_reginfo[] = {
401
.access = PL1_RW,
402
.type = ARM_CP_CONST | ARM_CP_NO_RAW | ARM_CP_OVERRIDE,
403
.resetvalue = 0 },
404
- REGINFO_SENTINEL
405
};
406
407
static const ARMCPRegInfo cache_dirty_status_cp_reginfo[] = {
408
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_dirty_status_cp_reginfo[] = {
409
{ .name = "CDSR", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 6,
410
.access = PL1_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW,
411
.resetvalue = 0 },
412
- REGINFO_SENTINEL
413
};
414
415
static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = {
416
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = {
417
.access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
418
{ .name = "CIDCR", .cp = 15, .crm = 14, .opc1 = 0,
419
.access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
420
- REGINFO_SENTINEL
421
};
422
423
static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = {
424
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = {
425
{ .name = "TCI_DCACHE", .cp = 15, .crn = 7, .crm = 14, .opc1 = 0, .opc2 = 3,
426
.access = PL0_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW,
427
.resetvalue = (1 << 30) },
428
- REGINFO_SENTINEL
429
};
430
431
static const ARMCPRegInfo strongarm_cp_reginfo[] = {
432
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo strongarm_cp_reginfo[] = {
433
.crm = CP_ANY, .opc1 = CP_ANY, .opc2 = CP_ANY,
434
.access = PL1_RW, .resetvalue = 0,
435
.type = ARM_CP_CONST | ARM_CP_OVERRIDE | ARM_CP_NO_RAW },
436
- REGINFO_SENTINEL
437
};
438
439
static uint64_t midr_read(CPUARMState *env, const ARMCPRegInfo *ri)
440
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lpae_cp_reginfo[] = {
441
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr1_s),
442
offsetof(CPUARMState, cp15.ttbr1_ns) },
443
.writefn = vmsa_ttbr_write, },
444
- REGINFO_SENTINEL
445
};
446
447
static uint64_t aa64_fpcr_read(CPUARMState *env, const ARMCPRegInfo *ri)
448
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
449
.access = PL1_RW, .accessfn = access_trap_aa32s_el1,
450
.writefn = sdcr_write,
451
.fieldoffset = offsetoflow32(CPUARMState, cp15.mdcr_el3) },
452
- REGINFO_SENTINEL
453
};
454
455
/* Used to describe the behaviour of EL2 regs when EL2 does not exist. */
456
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_cp_reginfo[] = {
457
.type = ARM_CP_CONST,
458
.cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 2,
459
.access = PL2_RW, .resetvalue = 0 },
460
- REGINFO_SENTINEL
461
};
462
463
/* Ditto, but for registers which exist in ARMv8 but not v7 */
464
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
465
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
466
.access = PL2_RW,
467
.type = ARM_CP_CONST, .resetvalue = 0 },
468
- REGINFO_SENTINEL
469
};
470
471
static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
472
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
473
.cp = 15, .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 3,
474
.access = PL2_RW,
475
.fieldoffset = offsetof(CPUARMState, cp15.hstr_el2) },
476
- REGINFO_SENTINEL
477
};
478
479
static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
480
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
481
.access = PL2_RW,
482
.fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2),
483
.writefn = hcr_writehigh },
484
- REGINFO_SENTINEL
485
};
486
487
static CPAccessResult sel2_access(CPUARMState *env, const ARMCPRegInfo *ri,
488
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_sec_cp_reginfo[] = {
489
.opc0 = 3, .opc1 = 4, .crn = 2, .crm = 6, .opc2 = 2,
490
.access = PL2_RW, .accessfn = sel2_access,
491
.fieldoffset = offsetof(CPUARMState, cp15.vstcr_el2) },
492
- REGINFO_SENTINEL
493
};
494
495
static CPAccessResult nsacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
496
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
497
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 5,
498
.access = PL3_W, .type = ARM_CP_NO_RAW,
499
.writefn = tlbi_aa64_vae3_write },
500
- REGINFO_SENTINEL
501
};
502
503
#ifndef CONFIG_USER_ONLY
504
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
505
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
506
.access = PL1_RW, .accessfn = access_tda,
507
.type = ARM_CP_NOP },
508
- REGINFO_SENTINEL
509
};
510
511
static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
512
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
513
.access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 },
514
{ .name = "DBGDSAR", .cp = 14, .crm = 2, .opc1 = 0,
515
.access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 },
516
- REGINFO_SENTINEL
517
};
518
519
/* Return the exception level to which exceptions should be taken
520
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
521
.fieldoffset = offsetof(CPUARMState, cp15.dbgbcr[i]),
522
.writefn = dbgbcr_write, .raw_writefn = raw_write
523
},
524
- REGINFO_SENTINEL
525
};
526
define_arm_cp_regs(cpu, dbgregs);
527
}
528
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
529
.fieldoffset = offsetof(CPUARMState, cp15.dbgwcr[i]),
530
.writefn = dbgwcr_write, .raw_writefn = raw_write
531
},
532
- REGINFO_SENTINEL
533
};
534
define_arm_cp_regs(cpu, dbgregs);
535
}
536
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
537
.type = ARM_CP_IO,
538
.readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn,
539
.raw_writefn = pmevtyper_rawwrite },
540
- REGINFO_SENTINEL
541
};
542
define_arm_cp_regs(cpu, pmev_regs);
543
g_free(pmevcntr_name);
544
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
545
.cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 5,
546
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
547
.resetvalue = extract64(cpu->pmceid1, 32, 32) },
548
- REGINFO_SENTINEL
549
};
550
define_arm_cp_regs(cpu, v81_pmu_regs);
551
}
552
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lor_reginfo[] = {
553
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
554
.access = PL1_R, .accessfn = access_lor_ns,
555
.type = ARM_CP_CONST, .resetvalue = 0 },
556
- REGINFO_SENTINEL
557
};
558
559
#ifdef TARGET_AARCH64
560
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pauth_reginfo[] = {
561
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 3,
562
.access = PL1_RW, .accessfn = access_pauth,
563
.fieldoffset = offsetof(CPUARMState, keys.apib.hi) },
564
- REGINFO_SENTINEL
565
};
566
567
static const ARMCPRegInfo tlbirange_reginfo[] = {
568
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
569
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 5,
570
.access = PL3_W, .type = ARM_CP_NO_RAW,
571
.writefn = tlbi_aa64_rvae3_write },
572
- REGINFO_SENTINEL
573
};
574
575
static const ARMCPRegInfo tlbios_reginfo[] = {
576
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbios_reginfo[] = {
577
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 5,
578
.access = PL3_W, .type = ARM_CP_NO_RAW,
579
.writefn = tlbi_aa64_vae3is_write },
580
- REGINFO_SENTINEL
581
};
582
583
static uint64_t rndr_readfn(CPUARMState *env, const ARMCPRegInfo *ri)
584
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo rndr_reginfo[] = {
585
.type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END | ARM_CP_IO,
586
.opc0 = 3, .opc1 = 3, .crn = 2, .crm = 4, .opc2 = 1,
587
.access = PL0_R, .readfn = rndr_readfn },
588
- REGINFO_SENTINEL
589
};
590
591
#ifndef CONFIG_USER_ONLY
592
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dcpop_reg[] = {
593
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 1,
594
.access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END,
595
.accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn },
596
- REGINFO_SENTINEL
597
};
598
599
static const ARMCPRegInfo dcpodp_reg[] = {
600
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dcpodp_reg[] = {
601
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 13, .opc2 = 1,
602
.access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END,
603
.accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn },
604
- REGINFO_SENTINEL
605
};
606
#endif /*CONFIG_USER_ONLY*/
607
608
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_reginfo[] = {
609
{ .name = "DC_CIGDSW", .state = ARM_CP_STATE_AA64,
610
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 6,
611
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
612
- REGINFO_SENTINEL
613
};
614
615
static const ARMCPRegInfo mte_tco_ro_reginfo[] = {
616
{ .name = "TCO", .state = ARM_CP_STATE_AA64,
617
.opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 7,
618
.type = ARM_CP_CONST, .access = PL0_RW, },
619
- REGINFO_SENTINEL
620
};
621
622
static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
623
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
624
.accessfn = aa64_zva_access,
625
#endif
626
},
627
- REGINFO_SENTINEL
628
};
629
630
#endif
631
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo predinv_reginfo[] = {
632
{ .name = "CPPRCTX", .state = ARM_CP_STATE_AA32,
633
.cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 7,
634
.type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
635
- REGINFO_SENTINEL
636
};
637
638
static uint64_t ccsidr2_read(CPUARMState *env, const ARMCPRegInfo *ri)
639
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ccsidr2_reginfo[] = {
640
.access = PL1_R,
641
.accessfn = access_aa64_tid2,
642
.readfn = ccsidr2_read, .type = ARM_CP_NO_RAW },
643
- REGINFO_SENTINEL
644
};
645
646
static CPAccessResult access_aa64_tid3(CPUARMState *env, const ARMCPRegInfo *ri,
647
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo jazelle_regs[] = {
648
.cp = 14, .crn = 2, .crm = 0, .opc1 = 7, .opc2 = 0,
649
.accessfn = access_joscr_jmcr,
650
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
651
- REGINFO_SENTINEL
652
};
653
654
static const ARMCPRegInfo vhe_reginfo[] = {
655
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vhe_reginfo[] = {
656
.access = PL2_RW, .accessfn = e2h_access,
657
.writefn = gt_virt_cval_write, .raw_writefn = raw_write },
658
#endif
659
- REGINFO_SENTINEL
660
};
661
662
#ifndef CONFIG_USER_ONLY
663
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ats1e1_reginfo[] = {
664
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
665
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
666
.writefn = ats_write64 },
667
- REGINFO_SENTINEL
668
};
669
670
static const ARMCPRegInfo ats1cp_reginfo[] = {
671
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ats1cp_reginfo[] = {
672
.cp = 15, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
673
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
674
.writefn = ats_write },
675
- REGINFO_SENTINEL
676
};
677
#endif
678
679
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo actlr2_hactlr2_reginfo[] = {
680
.cp = 15, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 3,
681
.access = PL2_RW, .type = ARM_CP_CONST,
682
.resetvalue = 0 },
683
- REGINFO_SENTINEL
684
};
685
686
void register_cp_regs_for_features(ARMCPU *cpu)
687
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
688
.access = PL1_R, .type = ARM_CP_CONST,
689
.accessfn = access_aa32_tid3,
690
.resetvalue = cpu->isar.id_isar6 },
691
- REGINFO_SENTINEL
692
};
693
define_arm_cp_regs(cpu, v6_idregs);
694
define_arm_cp_regs(cpu, v6_cp_reginfo);
695
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
696
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 7,
697
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
698
.resetvalue = cpu->pmceid1 },
699
- REGINFO_SENTINEL
700
};
701
#ifdef CONFIG_USER_ONLY
702
ARMCPRegUserSpaceInfo v8_user_idregs[] = {
703
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
704
.exported_bits = 0x000000f0ffffffff },
705
{ .name = "ID_AA64ISAR*_EL1_RESERVED",
706
.is_glob = true },
707
- REGUSERINFO_SENTINEL
708
};
709
modify_arm_cp_regs(v8_idregs, v8_user_idregs);
710
#endif
711
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
712
.access = PL2_RW,
713
.resetvalue = vmpidr_def,
714
.fieldoffset = offsetof(CPUARMState, cp15.vmpidr_el2) },
715
- REGINFO_SENTINEL
716
};
717
define_arm_cp_regs(cpu, vpidr_regs);
718
define_arm_cp_regs(cpu, el2_cp_reginfo);
719
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
720
.access = PL2_RW, .accessfn = access_el3_aa32ns,
721
.type = ARM_CP_NO_RAW,
722
.writefn = arm_cp_write_ignore, .readfn = mpidr_read },
723
- REGINFO_SENTINEL
724
};
725
define_arm_cp_regs(cpu, vpidr_regs);
726
define_arm_cp_regs(cpu, el3_no_el2_cp_reginfo);
727
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
728
.raw_writefn = raw_write, .writefn = sctlr_write,
729
.fieldoffset = offsetof(CPUARMState, cp15.sctlr_el[3]),
730
.resetvalue = cpu->reset_sctlr },
731
- REGINFO_SENTINEL
732
};
733
734
define_arm_cp_regs(cpu, el3_regs);
735
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
736
{ .name = "DUMMY",
737
.cp = 15, .crn = 0, .crm = 7, .opc1 = 0, .opc2 = CP_ANY,
738
.access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 },
739
- REGINFO_SENTINEL
740
};
741
ARMCPRegInfo id_v8_midr_cp_reginfo[] = {
742
{ .name = "MIDR_EL1", .state = ARM_CP_STATE_BOTH,
743
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
744
.access = PL1_R,
745
.accessfn = access_aa64_tid1,
746
.type = ARM_CP_CONST, .resetvalue = cpu->revidr },
747
- REGINFO_SENTINEL
748
};
749
ARMCPRegInfo id_cp_reginfo[] = {
750
/* These are common to v8 and pre-v8 */
751
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
752
.access = PL1_R,
753
.accessfn = access_aa32_tid1,
754
.type = ARM_CP_CONST, .resetvalue = 0 },
755
- REGINFO_SENTINEL
756
};
757
/* TLBTR is specific to VMSA */
758
ARMCPRegInfo id_tlbtr_reginfo = {
759
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
760
{ .name = "MIDR_EL1",
761
.exported_bits = 0x00000000ffffffff },
762
{ .name = "REVIDR_EL1" },
763
- REGUSERINFO_SENTINEL
764
};
765
modify_arm_cp_regs(id_v8_midr_cp_reginfo, id_v8_user_midr_cp_reginfo);
766
#endif
767
if (arm_feature(env, ARM_FEATURE_OMAPCP) ||
768
arm_feature(env, ARM_FEATURE_STRONGARM)) {
769
- ARMCPRegInfo *r;
770
+ size_t i;
771
/* Register the blanket "writes ignored" value first to cover the
772
* whole space. Then update the specific ID registers to allow write
773
* access, so that they ignore writes rather than causing them to
774
* UNDEF.
775
*/
776
define_one_arm_cp_reg(cpu, &crn0_wi_reginfo);
777
- for (r = id_pre_v8_midr_cp_reginfo;
778
- r->type != ARM_CP_SENTINEL; r++) {
779
- r->access = PL1_RW;
780
+ for (i = 0; i < ARRAY_SIZE(id_pre_v8_midr_cp_reginfo); ++i) {
781
+ id_pre_v8_midr_cp_reginfo[i].access = PL1_RW;
782
}
783
- for (r = id_cp_reginfo; r->type != ARM_CP_SENTINEL; r++) {
784
- r->access = PL1_RW;
785
+ for (i = 0; i < ARRAY_SIZE(id_cp_reginfo); ++i) {
786
+ id_cp_reginfo[i].access = PL1_RW;
787
}
788
id_mpuir_reginfo.access = PL1_RW;
789
id_tlbtr_reginfo.access = PL1_RW;
790
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
791
{ .name = "MPIDR_EL1", .state = ARM_CP_STATE_BOTH,
792
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 5,
793
.access = PL1_R, .readfn = mpidr_read, .type = ARM_CP_NO_RAW },
794
- REGINFO_SENTINEL
795
};
796
#ifdef CONFIG_USER_ONLY
797
ARMCPRegUserSpaceInfo mpidr_user_cp_reginfo[] = {
798
{ .name = "MPIDR_EL1",
799
.fixed_bits = 0x0000000080000000 },
800
- REGUSERINFO_SENTINEL
801
};
802
modify_arm_cp_regs(mpidr_cp_reginfo, mpidr_user_cp_reginfo);
803
#endif
804
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
805
.opc0 = 3, .opc1 = 6, .crn = 1, .crm = 0, .opc2 = 1,
806
.access = PL3_RW, .type = ARM_CP_CONST,
807
.resetvalue = 0 },
808
- REGINFO_SENTINEL
809
};
810
define_arm_cp_regs(cpu, auxcr_reginfo);
811
if (cpu_isar_feature(aa32_ac2, cpu)) {
812
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
813
.type = ARM_CP_CONST,
814
.opc0 = 3, .opc1 = 1, .crn = 15, .crm = 3, .opc2 = 0,
815
.access = PL1_R, .resetvalue = cpu->reset_cbar },
816
- REGINFO_SENTINEL
817
};
818
/* We don't implement a r/w 64 bit CBAR currently */
819
assert(arm_feature(env, ARM_FEATURE_CBAR_RO));
820
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
821
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.vbar_s),
822
offsetof(CPUARMState, cp15.vbar_ns) },
823
.resetvalue = 0 },
824
- REGINFO_SENTINEL
825
};
826
define_arm_cp_regs(cpu, vbar_cp_reginfo);
827
}
828
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
829
r->writefn);
830
}
831
}
832
- /* Bad type field probably means missing sentinel at end of reg list */
833
- assert(cptype_valid(r->type));
80
+
834
+
81
+static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
835
for (crm = crmmin; crm <= crmmax; crm++) {
82
+ uint64_t value)
836
for (opc1 = opc1min; opc1 <= opc1max; opc1++) {
83
+{
837
for (opc2 = opc2min; opc2 <= opc2max; opc2++) {
84
+ /* Invalidate all (TLBIALL) */
838
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
85
+ ARMCPU *cpu = arm_env_get_cpu(env);
86
+
87
+ if (tlb_force_broadcast(env)) {
88
+ tlbiall_is_write(env, NULL, value);
89
+ return;
90
+ }
91
+
92
+ tlb_flush(CPU(cpu));
93
+}
94
+
95
+static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
96
+ uint64_t value)
97
+{
98
+ /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
99
+ ARMCPU *cpu = arm_env_get_cpu(env);
100
+
101
+ if (tlb_force_broadcast(env)) {
102
+ tlbimva_is_write(env, NULL, value);
103
+ return;
104
+ }
105
+
106
+ tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
107
+}
108
+
109
+static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
110
+ uint64_t value)
111
+{
112
+ /* Invalidate by ASID (TLBIASID) */
113
+ ARMCPU *cpu = arm_env_get_cpu(env);
114
+
115
+ if (tlb_force_broadcast(env)) {
116
+ tlbiasid_is_write(env, NULL, value);
117
+ return;
118
+ }
119
+
120
+ tlb_flush(CPU(cpu));
121
+}
122
+
123
+static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
124
+ uint64_t value)
125
+{
126
+ /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
127
+ ARMCPU *cpu = arm_env_get_cpu(env);
128
+
129
+ if (tlb_force_broadcast(env)) {
130
+ tlbimvaa_is_write(env, NULL, value);
131
+ return;
132
+ }
133
+
134
+ tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
135
+}
136
+
137
static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
138
uint64_t value)
139
{
140
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_access(CPUARMState *env,
141
* Page D4-1736 (DDI0487A.b)
142
*/
143
144
-static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
145
- uint64_t value)
146
-{
147
- CPUState *cs = ENV_GET_CPU(env);
148
-
149
- if (arm_is_secure_below_el3(env)) {
150
- tlb_flush_by_mmuidx(cs,
151
- ARMMMUIdxBit_S1SE1 |
152
- ARMMMUIdxBit_S1SE0);
153
- } else {
154
- tlb_flush_by_mmuidx(cs,
155
- ARMMMUIdxBit_S12NSE1 |
156
- ARMMMUIdxBit_S12NSE0);
157
- }
158
-}
159
-
160
static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
161
uint64_t value)
162
{
163
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
164
}
839
}
165
}
840
}
166
841
167
+static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
842
-void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
168
+ uint64_t value)
843
- const ARMCPRegInfo *regs, void *opaque)
169
+{
844
+/* Define a whole list of registers */
170
+ CPUState *cs = ENV_GET_CPU(env);
845
+void define_arm_cp_regs_with_opaque_len(ARMCPU *cpu, const ARMCPRegInfo *regs,
171
+
846
+ void *opaque, size_t len)
172
+ if (tlb_force_broadcast(env)) {
173
+ tlbi_aa64_vmalle1_write(env, NULL, value);
174
+ return;
175
+ }
176
+
177
+ if (arm_is_secure_below_el3(env)) {
178
+ tlb_flush_by_mmuidx(cs,
179
+ ARMMMUIdxBit_S1SE1 |
180
+ ARMMMUIdxBit_S1SE0);
181
+ } else {
182
+ tlb_flush_by_mmuidx(cs,
183
+ ARMMMUIdxBit_S12NSE1 |
184
+ ARMMMUIdxBit_S12NSE0);
185
+ }
186
+}
187
+
188
static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
189
uint64_t value)
190
{
847
{
191
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
848
- /* Define a whole list of registers */
192
tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E3);
849
- const ARMCPRegInfo *r;
193
}
850
- for (r = regs; r->type != ARM_CP_SENTINEL; r++) {
194
851
- define_one_arm_cp_reg_with_opaque(cpu, r, opaque);
195
-static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
852
+ size_t i;
196
- uint64_t value)
853
+ for (i = 0; i < len; ++i) {
197
-{
854
+ define_one_arm_cp_reg_with_opaque(cpu, regs + i, opaque);
198
- /* Invalidate by VA, EL1&0 (AArch64 version).
199
- * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
200
- * since we don't support flush-for-specific-ASID-only or
201
- * flush-last-level-only.
202
- */
203
- ARMCPU *cpu = arm_env_get_cpu(env);
204
- CPUState *cs = CPU(cpu);
205
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
206
-
207
- if (arm_is_secure_below_el3(env)) {
208
- tlb_flush_page_by_mmuidx(cs, pageaddr,
209
- ARMMMUIdxBit_S1SE1 |
210
- ARMMMUIdxBit_S1SE0);
211
- } else {
212
- tlb_flush_page_by_mmuidx(cs, pageaddr,
213
- ARMMMUIdxBit_S12NSE1 |
214
- ARMMMUIdxBit_S12NSE0);
215
- }
216
-}
217
-
218
static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
219
uint64_t value)
220
{
221
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
222
}
855
}
223
}
856
}
224
857
225
+static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
858
@@ -XXX,XX +XXX,XX @@ void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
226
+ uint64_t value)
859
* user-space cannot alter any values and dynamic values pertaining to
227
+{
860
* execution state are hidden from user space view anyway.
228
+ /* Invalidate by VA, EL1&0 (AArch64 version).
861
*/
229
+ * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
862
-void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods)
230
+ * since we don't support flush-for-specific-ASID-only or
863
+void modify_arm_cp_regs_with_len(ARMCPRegInfo *regs, size_t regs_len,
231
+ * flush-last-level-only.
864
+ const ARMCPRegUserSpaceInfo *mods,
232
+ */
865
+ size_t mods_len)
233
+ ARMCPU *cpu = arm_env_get_cpu(env);
866
{
234
+ CPUState *cs = CPU(cpu);
867
- const ARMCPRegUserSpaceInfo *m;
235
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
868
- ARMCPRegInfo *r;
869
-
870
- for (m = mods; m->name; m++) {
871
+ for (size_t mi = 0; mi < mods_len; ++mi) {
872
+ const ARMCPRegUserSpaceInfo *m = mods + mi;
873
GPatternSpec *pat = NULL;
236
+
874
+
237
+ if (tlb_force_broadcast(env)) {
875
if (m->is_glob) {
238
+ tlbi_aa64_vae1is_write(env, NULL, value);
876
pat = g_pattern_spec_new(m->name);
239
+ return;
877
}
240
+ }
878
- for (r = regs; r->type != ARM_CP_SENTINEL; r++) {
879
+ for (size_t ri = 0; ri < regs_len; ++ri) {
880
+ ARMCPRegInfo *r = regs + ri;
241
+
881
+
242
+ if (arm_is_secure_below_el3(env)) {
882
if (pat && g_pattern_match_string(pat, r->name)) {
243
+ tlb_flush_page_by_mmuidx(cs, pageaddr,
883
r->type = ARM_CP_CONST;
244
+ ARMMMUIdxBit_S1SE1 |
884
r->access = PL0U_R;
245
+ ARMMMUIdxBit_S1SE0);
246
+ } else {
247
+ tlb_flush_page_by_mmuidx(cs, pageaddr,
248
+ ARMMMUIdxBit_S12NSE1 |
249
+ ARMMMUIdxBit_S12NSE0);
250
+ }
251
+}
252
+
253
static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
254
uint64_t value)
255
{
256
--
885
--
257
2.19.1
886
2.25.1
258
887
259
888
diff view generated by jsdifflib
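Worth noting from the modify_arm_cp_regs hunk in the patch above: mods entries with .is_glob set are matched against register names using GLib's simple glob patterns. A small standalone sketch of that matching:

#include <glib.h>
#include <stdio.h>

int main(void)
{
    /* Same mechanism as the "ID_AA64ISAR*_EL1_RESERVED" mods entry:
     * compile the glob once, then test each register name against it. */
    GPatternSpec *pat = g_pattern_spec_new("ID_AA64ISAR*");
    const char *names[] = { "ID_AA64ISAR0_EL1", "MIDR_EL1" };

    for (int i = 0; i < 2; i++) {
        printf("%-16s -> %s\n", names[i],
               g_pattern_match_string(pat, names[i]) ? "match" : "no match");
    }
    g_pattern_spec_free(pat);
    return 0;
}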
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
3
These particular data structures are not modified at runtime.
4
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20181016223115.24100-8-richard.henderson@linaro.org
8
Message-id: 20220501055028.646596-5-richard.henderson@linaro.org
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
10
---
9
target/arm/cpu.h | 16 +++++++++++++++-
11
target/arm/helper.c | 16 ++++++++--------
10
linux-user/aarch64/signal.c | 4 ++--
12
1 file changed, 8 insertions(+), 8 deletions(-)
11
linux-user/elfload.c | 2 +-
12
linux-user/syscall.c | 10 ++++++----
13
target/arm/cpu64.c | 5 ++++-
14
target/arm/helper.c | 9 ++++++---
15
target/arm/machine.c | 3 +--
16
target/arm/translate-a64.c | 4 ++--
17
8 files changed, 37 insertions(+), 16 deletions(-)
18
13
19
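For readers new to the FIELD() machinery used in the cpu.h hunk below: hw/registerfields.h turns each FIELD() line into shift/length/mask constants, and FIELD_EX64() extracts the named field from a 64-bit value. A minimal sketch of the pattern the new isar_feature_aa64_sve() test follows (compilable only inside the QEMU tree):

#include "qemu/osdep.h"          /* pulls in extract64() and friends */
#include "hw/registerfields.h"

/* Defines R_ID_AA64PFR0_SVE_SHIFT/_LENGTH/_MASK for bits [35:32]. */
FIELD(ID_AA64PFR0, SVE, 32, 4)

/* The new-style feature test: a non-zero ID field means the feature
 * is present, so the ID register itself is the source of truth. */
static inline bool sketch_has_sve(uint64_t id_aa64pfr0)
{
    return FIELD_EX64(id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
}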
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ISAR1, FRINTTS, 32, 4)
24
FIELD(ID_AA64ISAR1, SB, 36, 4)
25
FIELD(ID_AA64ISAR1, SPECRES, 40, 4)
26
27
+FIELD(ID_AA64PFR0, EL0, 0, 4)
28
+FIELD(ID_AA64PFR0, EL1, 4, 4)
29
+FIELD(ID_AA64PFR0, EL2, 8, 4)
30
+FIELD(ID_AA64PFR0, EL3, 12, 4)
31
+FIELD(ID_AA64PFR0, FP, 16, 4)
32
+FIELD(ID_AA64PFR0, ADVSIMD, 20, 4)
33
+FIELD(ID_AA64PFR0, GIC, 24, 4)
34
+FIELD(ID_AA64PFR0, RAS, 28, 4)
35
+FIELD(ID_AA64PFR0, SVE, 32, 4)
36
+
37
QEMU_BUILD_BUG_ON(ARRAY_SIZE(((ARMCPU *)0)->ccsidr) <= R_V7M_CSSELR_INDEX_MASK);
38
39
/* If adding a feature bit which corresponds to a Linux ELF
40
@@ -XXX,XX +XXX,XX @@ enum arm_features {
41
ARM_FEATURE_PMU, /* has PMU support */
42
ARM_FEATURE_VBAR, /* has cp15 VBAR */
43
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
44
- ARM_FEATURE_SVE, /* has Scalable Vector Extension */
45
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
46
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
47
};
48
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
49
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
50
}
51
52
+static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
53
+{
54
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
55
+}
56
+
57
/*
58
* Forward to the above feature tests given an ARMCPU pointer.
59
*/
60
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/linux-user/aarch64/signal.c
63
+++ b/linux-user/aarch64/signal.c
64
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
65
break;
66
67
case TARGET_SVE_MAGIC:
68
- if (arm_feature(env, ARM_FEATURE_SVE)) {
69
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(env))) {
70
vq = (env->vfp.zcr_el[1] & 0xf) + 1;
71
sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
72
if (!sve && size == sve_size) {
73
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
74
&layout);
75
76
/* SVE state needs saving only if it exists. */
77
- if (arm_feature(env, ARM_FEATURE_SVE)) {
78
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(env))) {
79
vq = (env->vfp.zcr_el[1] & 0xf) + 1;
80
sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
81
sve_ofs = alloc_sigframe_space(sve_size, &layout);
82
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
83
index XXXXXXX..XXXXXXX 100644
84
--- a/linux-user/elfload.c
85
+++ b/linux-user/elfload.c
86
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
87
GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
88
GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
89
GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
90
- GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
91
+ GET_FEATURE_ID(aa64_sve, ARM_HWCAP_A64_SVE);
92
93
#undef GET_FEATURE
94
#undef GET_FEATURE_ID
95
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
96
index XXXXXXX..XXXXXXX 100644
97
--- a/linux-user/syscall.c
98
+++ b/linux-user/syscall.c
99
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
100
* even though the current architectural maximum is VQ=16.
101
*/
102
ret = -TARGET_EINVAL;
103
- if (arm_feature(cpu_env, ARM_FEATURE_SVE)
104
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(cpu_env))
105
&& arg2 >= 0 && arg2 <= 512 * 16 && !(arg2 & 15)) {
106
CPUARMState *env = cpu_env;
107
ARMCPU *cpu = arm_env_get_cpu(env);
108
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
109
return ret;
110
case TARGET_PR_SVE_GET_VL:
111
ret = -TARGET_EINVAL;
112
- if (arm_feature(cpu_env, ARM_FEATURE_SVE)) {
113
- CPUARMState *env = cpu_env;
114
- ret = ((env->vfp.zcr_el[1] & 0xf) + 1) * 16;
115
+ {
116
+ ARMCPU *cpu = arm_env_get_cpu(cpu_env);
117
+ if (cpu_isar_feature(aa64_sve, cpu)) {
118
+ ret = ((cpu->env.vfp.zcr_el[1] & 0xf) + 1) * 16;
119
+ }
120
}
121
return ret;
122
#endif /* AARCH64 */
123
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
124
index XXXXXXX..XXXXXXX 100644
125
--- a/target/arm/cpu64.c
126
+++ b/target/arm/cpu64.c
127
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
128
t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
129
cpu->isar.id_aa64isar1 = t;
130
131
+ t = cpu->isar.id_aa64pfr0;
132
+ t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
133
+ cpu->isar.id_aa64pfr0 = t;
134
+
135
/* Replicate the same data to the 32-bit id registers. */
136
u = cpu->isar.id_isar5;
137
u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
138
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
139
* present in either.
140
*/
141
set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
142
- set_feature(&cpu->env, ARM_FEATURE_SVE);
143
/* For usermode -cpu max we can use a larger and more efficient DCZ
144
* blocksize since we don't have to follow what the hardware does.
145
*/
146
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
147
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
148
--- a/target/arm/helper.c
16
--- a/target/arm/helper.c
149
+++ b/target/arm/helper.c
17
+++ b/target/arm/helper.c
150
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
18
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
151
define_one_arm_cp_reg(cpu, &sctlr);
19
.resetvalue = cpu->pmceid1 },
20
};
21
#ifdef CONFIG_USER_ONLY
22
- ARMCPRegUserSpaceInfo v8_user_idregs[] = {
23
+ static const ARMCPRegUserSpaceInfo v8_user_idregs[] = {
24
{ .name = "ID_AA64PFR0_EL1",
25
.exported_bits = 0x000f000f00ff0000,
26
.fixed_bits = 0x0000000000000011 },
27
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
28
*/
29
if (arm_feature(env, ARM_FEATURE_EL3)) {
30
if (arm_feature(env, ARM_FEATURE_AARCH64)) {
31
- ARMCPRegInfo nsacr = {
32
+ static const ARMCPRegInfo nsacr = {
33
.name = "NSACR", .type = ARM_CP_CONST,
34
.cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 2,
35
.access = PL1_RW, .accessfn = nsacr_access,
36
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
37
};
38
define_one_arm_cp_reg(cpu, &nsacr);
39
} else {
40
- ARMCPRegInfo nsacr = {
41
+ static const ARMCPRegInfo nsacr = {
42
.name = "NSACR",
43
.cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 2,
44
.access = PL3_RW | PL1_R,
45
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
46
}
47
} else {
48
if (arm_feature(env, ARM_FEATURE_V8)) {
49
- ARMCPRegInfo nsacr = {
50
+ static const ARMCPRegInfo nsacr = {
51
.name = "NSACR", .type = ARM_CP_CONST,
52
.cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 2,
53
.access = PL1_R,
54
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
55
.access = PL1_R, .type = ARM_CP_CONST,
56
.resetvalue = cpu->pmsav7_dregion << 8
57
};
58
- ARMCPRegInfo crn0_wi_reginfo = {
59
+ static const ARMCPRegInfo crn0_wi_reginfo = {
60
.name = "CRN0_WI", .cp = 15, .crn = 0, .crm = CP_ANY,
61
.opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_W,
62
.type = ARM_CP_NOP | ARM_CP_OVERRIDE
63
};
64
#ifdef CONFIG_USER_ONLY
65
- ARMCPRegUserSpaceInfo id_v8_user_midr_cp_reginfo[] = {
66
+ static const ARMCPRegUserSpaceInfo id_v8_user_midr_cp_reginfo[] = {
67
{ .name = "MIDR_EL1",
68
.exported_bits = 0x00000000ffffffff },
69
{ .name = "REVIDR_EL1" },
70
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
71
.access = PL1_R, .readfn = mpidr_read, .type = ARM_CP_NO_RAW },
72
};
73
#ifdef CONFIG_USER_ONLY
74
- ARMCPRegUserSpaceInfo mpidr_user_cp_reginfo[] = {
75
+ static const ARMCPRegUserSpaceInfo mpidr_user_cp_reginfo[] = {
76
{ .name = "MPIDR_EL1",
77
.fixed_bits = 0x0000000080000000 },
78
};
79
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
152
}
80
}
153
81
154
- if (arm_feature(env, ARM_FEATURE_SVE)) {
82
if (arm_feature(env, ARM_FEATURE_VBAR)) {
155
+ if (cpu_isar_feature(aa64_sve, cpu)) {
83
- ARMCPRegInfo vbar_cp_reginfo[] = {
156
define_one_arm_cp_reg(cpu, &zcr_el1_reginfo);
84
+ static const ARMCPRegInfo vbar_cp_reginfo[] = {
157
if (arm_feature(env, ARM_FEATURE_EL2)) {
85
{ .name = "VBAR", .state = ARM_CP_STATE_BOTH,
158
define_one_arm_cp_reg(cpu, &zcr_el2_reginfo);
86
.opc0 = 3, .crn = 12, .crm = 0, .opc1 = 0, .opc2 = 0,
159
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
87
.access = PL1_RW, .writefn = vbar_write,
160
uint32_t flags;
161
162
if (is_a64(env)) {
163
+ ARMCPU *cpu = arm_env_get_cpu(env);
164
+
165
*pc = env->pc;
166
flags = ARM_TBFLAG_AARCH64_STATE_MASK;
167
/* Get control bits for tagged addresses */
168
flags |= (arm_regime_tbi0(env, mmu_idx) << ARM_TBFLAG_TBI0_SHIFT);
169
flags |= (arm_regime_tbi1(env, mmu_idx) << ARM_TBFLAG_TBI1_SHIFT);
170
171
- if (arm_feature(env, ARM_FEATURE_SVE)) {
172
+ if (cpu_isar_feature(aa64_sve, cpu)) {
173
int sve_el = sve_exception_el(env, current_el);
174
uint32_t zcr_len;
175
176
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq)
177
void aarch64_sve_change_el(CPUARMState *env, int old_el,
178
int new_el, bool el0_a64)
179
{
180
+ ARMCPU *cpu = arm_env_get_cpu(env);
181
int old_len, new_len;
182
bool old_a64, new_a64;
183
184
/* Nothing to do if no SVE. */
185
- if (!arm_feature(env, ARM_FEATURE_SVE)) {
186
+ if (!cpu_isar_feature(aa64_sve, cpu)) {
187
return;
188
}
189
190
diff --git a/target/arm/machine.c b/target/arm/machine.c
191
index XXXXXXX..XXXXXXX 100644
192
--- a/target/arm/machine.c
193
+++ b/target/arm/machine.c
194
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_iwmmxt = {
195
static bool sve_needed(void *opaque)
196
{
197
ARMCPU *cpu = opaque;
198
- CPUARMState *env = &cpu->env;
199
200
- return arm_feature(env, ARM_FEATURE_SVE);
201
+ return cpu_isar_feature(aa64_sve, cpu);
202
}
203
204
/* The first two words of each Zreg is stored in VFP state. */
205
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
206
index XXXXXXX..XXXXXXX 100644
207
--- a/target/arm/translate-a64.c
208
+++ b/target/arm/translate-a64.c
209
@@ -XXX,XX +XXX,XX @@ void aarch64_cpu_dump_state(CPUState *cs, FILE *f,
210
cpu_fprintf(f, " FPCR=%08x FPSR=%08x\n",
211
vfp_get_fpcr(env), vfp_get_fpsr(env));
212
213
- if (arm_feature(env, ARM_FEATURE_SVE) && sve_exception_el(env, el) == 0) {
214
+ if (cpu_isar_feature(aa64_sve, cpu) && sve_exception_el(env, el) == 0) {
215
int j, zcr_len = sve_zcr_len_for_el(env, el);
216
217
for (i = 0; i <= FFR_PRED_NUM; i++) {
218
@@ -XXX,XX +XXX,XX @@ static void disas_a64_insn(CPUARMState *env, DisasContext *s)
219
unallocated_encoding(s);
220
break;
221
case 0x2:
222
- if (!arm_dc_feature(s, ARM_FEATURE_SVE) || !disas_sve(s, insn)) {
223
+ if (!dc_isar_feature(aa64_sve, s) || !disas_sve(s, insn)) {
224
unallocated_encoding(s);
225
}
226
break;
227
--
88
--
228
2.19.1
89
2.25.1
229
90
230
91
diff view generated by jsdifflib
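A general C note on the static const conversions in the patch above: without static, a const-qualified array defined inside a function is still a fresh automatic object, re-initialized on every call; static const lets the compiler emit it once into read-only data. A generic sketch with invented names:

#include <stdio.h>

typedef struct {
    const char *name;
    unsigned resetvalue;
} RegInfo;

static void define_reg(const RegInfo *r)
{
    printf("%s = %#x\n", r->name, r->resetvalue);
}

static void register_for_features(void)
{
    /* Emitted once into .rodata instead of being rebuilt on the
     * stack each time this function runs. */
    static const RegInfo nsacr = { .name = "NSACR", .resetvalue = 0 };
    define_reg(&nsacr);
}

int main(void)
{
    register_for_features();
    return 0;
}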
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Move cmtst_op expanders from translate-a64.c.
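CMTST, whose expanders this moves, is the "compare bitwise test" operation: each result element becomes all ones when the two source elements share any set bit. A scalar model of one 64-bit lane (gen_cmtst_i64, declared in the hunk below, is intended to emit the equivalent TCG ops):

#include <stdint.h>

/* Per-lane semantics of Neon/AArch64 CMTST: set every result bit
 * when (a & b) is non-zero, otherwise produce zero. */
static inline uint64_t cmtst64(uint64_t a, uint64_t b)
{
    return (a & b) ? UINT64_MAX : 0;
}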
3
Instead of defining ARM_CP_FLAG_MASK to remove flags,
4
define ARM_CP_SPECIAL_MASK to isolate special cases.
5
Sort the specials to the low bits. Use an enum.
6
7
Split the large comment block so as to document each
8
value separately.
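The payoff of packing the specials into the low bits: a consumer can isolate the special-case number with a single mask, instead of first clearing all the flag bits via ~ARM_CP_FLAG_MASK. A sketch of the resulting dispatch shape (illustrative only; handle_special is an invented name, ARMCPRegInfo as declared in cpregs.h):

static bool handle_special(const ARMCPRegInfo *ri)
{
    switch (ri->type & ARM_CP_SPECIAL_MASK) {
    case 0:
        return false;               /* not a special register */
    case ARM_CP_NOP:
        return true;                /* reads and writes ignored */
    case ARM_CP_NZCV:
        /* redirect the access to the NZCV flags */
        return true;
    default:
        g_assert_not_reached();
    }
}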
4
9
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20181011205206.3552-17-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 20220501055028.646596-6-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
14
---
10
target/arm/translate.h | 2 +
15
target/arm/cpregs.h | 130 +++++++++++++++++++++++--------------
11
target/arm/translate-a64.c | 38 ------------------
16
target/arm/cpu.c | 4 +-
12
target/arm/translate.c | 81 +++++++++++++++++++++++++++-----------
17
target/arm/helper.c | 4 +-
13
3 files changed, 60 insertions(+), 61 deletions(-)
18
target/arm/translate-a64.c | 6 +-
14
19
target/arm/translate.c | 6 +-
15
 5 files changed, 92 insertions(+), 58 deletions(-)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@
 #define TARGET_ARM_CPREGS_H
 
 /*
- * ARMCPRegInfo type field bits. If the SPECIAL bit is set this is a
- * special-behaviour cp reg and bits [11..8] indicate what behaviour
- * it has. Otherwise it is a simple cp reg, where CONST indicates that
- * TCG can assume the value to be constant (ie load at translate time)
- * and 64BIT indicates a 64 bit wide coprocessor register. SUPPRESS_TB_END
- * indicates that the TB should not be ended after a write to this register
- * (the default is that the TB ends after cp writes). OVERRIDE permits
- * a register definition to override a previous definition for the
- * same (cp, is64, crn, crm, opc1, opc2) tuple: either the new or the
- * old must have the OVERRIDE bit set.
- * ALIAS indicates that this register is an alias view of some underlying
- * state which is also visible via another register, and that the other
- * register is handling migration and reset; registers marked ALIAS will not be
- * migrated but may have their state set by syncing of register state from KVM.
- * NO_RAW indicates that this register has no underlying state and does not
- * support raw access for state saving/loading; it will not be used for either
- * migration or KVM state synchronization. (Typically this is for "registers"
- * which are actually used as instructions for cache maintenance and so on.)
- * IO indicates that this register does I/O and therefore its accesses
- * need to be marked with gen_io_start() and also end the TB. In particular,
- * registers which implement clocks or timers require this.
- * RAISES_EXC is for when the read or write hook might raise an exception;
- * the generated code will synchronize the CPU state before calling the hook
- * so that it is safe for the hook to call raise_exception().
- * NEWEL is for writes to registers that might change the exception
- * level - typically on older ARM chips. For those cases we need to
- * re-read the new el when recomputing the translation flags.
+ * ARMCPRegInfo type field bits:
  */
-#define ARM_CP_SPECIAL 0x0001
-#define ARM_CP_CONST 0x0002
-#define ARM_CP_64BIT 0x0004
-#define ARM_CP_SUPPRESS_TB_END 0x0008
-#define ARM_CP_OVERRIDE 0x0010
-#define ARM_CP_ALIAS 0x0020
-#define ARM_CP_IO 0x0040
-#define ARM_CP_NO_RAW 0x0080
-#define ARM_CP_NOP (ARM_CP_SPECIAL | 0x0100)
-#define ARM_CP_WFI (ARM_CP_SPECIAL | 0x0200)
-#define ARM_CP_NZCV (ARM_CP_SPECIAL | 0x0300)
-#define ARM_CP_CURRENTEL (ARM_CP_SPECIAL | 0x0400)
-#define ARM_CP_DC_ZVA (ARM_CP_SPECIAL | 0x0500)
-#define ARM_CP_DC_GVA (ARM_CP_SPECIAL | 0x0600)
-#define ARM_CP_DC_GZVA (ARM_CP_SPECIAL | 0x0700)
-#define ARM_LAST_SPECIAL ARM_CP_DC_GZVA
-#define ARM_CP_FPU 0x1000
-#define ARM_CP_SVE 0x2000
-#define ARM_CP_NO_GDB 0x4000
-#define ARM_CP_RAISES_EXC 0x8000
-#define ARM_CP_NEWEL 0x10000
-/* Mask of only the flag bits in a type field */
-#define ARM_CP_FLAG_MASK 0x1f0ff
+enum {
+    /*
+     * Register must be handled specially during translation.
+     * The method is one of the values below:
+     */
+    ARM_CP_SPECIAL_MASK = 0x000f,
+    /* Special: no change to PE state: writes ignored, reads ignored. */
+    ARM_CP_NOP = 0x0001,
+    /* Special: sysreg is WFI, for v5 and v6. */
+    ARM_CP_WFI = 0x0002,
+    /* Special: sysreg is NZCV. */
+    ARM_CP_NZCV = 0x0003,
+    /* Special: sysreg is CURRENTEL. */
+    ARM_CP_CURRENTEL = 0x0004,
+    /* Special: sysreg is DC ZVA or similar. */
+    ARM_CP_DC_ZVA = 0x0005,
+    ARM_CP_DC_GVA = 0x0006,
+    ARM_CP_DC_GZVA = 0x0007,
+
+    /* Flag: reads produce resetvalue; writes ignored. */
+    ARM_CP_CONST = 1 << 4,
+    /* Flag: For ARM_CP_STATE_AA32, sysreg is 64-bit. */
+    ARM_CP_64BIT = 1 << 5,
+    /*
+     * Flag: TB should not be ended after a write to this register
+     * (the default is that the TB ends after cp writes).
+     */
+    ARM_CP_SUPPRESS_TB_END = 1 << 6,
+    /*
+     * Flag: Permit a register definition to override a previous definition
+     * for the same (cp, is64, crn, crm, opc1, opc2) tuple: either the new
+     * or the old must have the ARM_CP_OVERRIDE bit set.
+     */
+    ARM_CP_OVERRIDE = 1 << 7,
+    /*
+     * Flag: Register is an alias view of some underlying state which is also
+     * visible via another register, and that the other register is handling
+     * migration and reset; registers marked ARM_CP_ALIAS will not be migrated
+     * but may have their state set by syncing of register state from KVM.
+     */
+    ARM_CP_ALIAS = 1 << 8,
+    /*
+     * Flag: Register does I/O and therefore its accesses need to be marked
+     * with gen_io_start() and also end the TB. In particular, registers which
+     * implement clocks or timers require this.
+     */
+    ARM_CP_IO = 1 << 9,
+    /*
+     * Flag: Register has no underlying state and does not support raw access
+     * for state saving/loading; it will not be used for either migration or
+     * KVM state synchronization. Typically this is for "registers" which are
+     * actually used as instructions for cache maintenance and so on.
+     */
+    ARM_CP_NO_RAW = 1 << 10,
+    /*
+     * Flag: The read or write hook might raise an exception; the generated
+     * code will synchronize the CPU state before calling the hook so that it
+     * is safe for the hook to call raise_exception().
+     */
+    ARM_CP_RAISES_EXC = 1 << 11,
+    /*
+     * Flag: Writes to the sysreg might change the exception level - typically
+     * on older ARM chips. For those cases we need to re-read the new el when
+     * recomputing the translation flags.
+     */
+    ARM_CP_NEWEL = 1 << 12,
+    /*
+     * Flag: Access check for this sysreg is identical to accessing FPU state
+     * from an instruction: use translation fp_access_check().
+     */
+    ARM_CP_FPU = 1 << 13,
+    /*
+     * Flag: Access check for this sysreg is identical to accessing SVE state
+     * from an instruction: use translation sve_access_check().
+     */
+    ARM_CP_SVE = 1 << 14,
+    /* Flag: Do not expose in gdb sysreg xml. */
+    ARM_CP_NO_GDB = 1 << 15,
+};
 
 /*
  * Valid values for ARMCPRegInfo state field, indicating which of
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void cp_reg_reset(gpointer key, gpointer value, gpointer opaque)
     ARMCPRegInfo *ri = value;
     ARMCPU *cpu = opaque;
 
-    if (ri->type & (ARM_CP_SPECIAL | ARM_CP_ALIAS)) {
+    if (ri->type & (ARM_CP_SPECIAL_MASK | ARM_CP_ALIAS)) {
         return;
     }
 
@@ -XXX,XX +XXX,XX @@ static void cp_reg_check_reset(gpointer key, gpointer value, gpointer opaque)
     ARMCPU *cpu = opaque;
     uint64_t oldvalue, newvalue;
 
-    if (ri->type & (ARM_CP_SPECIAL | ARM_CP_ALIAS | ARM_CP_NO_RAW)) {
+    if (ri->type & (ARM_CP_SPECIAL_MASK | ARM_CP_ALIAS | ARM_CP_NO_RAW)) {
         return;
     }
 
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
      * multiple times. Special registers (ie NOP/WFI) are
      * never migratable and not even raw-accessible.
      */
-    if ((r->type & ARM_CP_SPECIAL)) {
+    if (r->type & ARM_CP_SPECIAL_MASK) {
         r2->type |= ARM_CP_NO_RAW;
     }
     if (((r->crm == CP_ANY) && crm != 0) ||
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
     /* Check that the register definition has enough info to handle
      * reads and writes if they are permitted.
      */
-    if (!(r->type & (ARM_CP_SPECIAL|ARM_CP_CONST))) {
+    if (!(r->type & (ARM_CP_SPECIAL_MASK | ARM_CP_CONST))) {
         if (r->access & PL3_R) {
             assert((r->fieldoffset ||
                    (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1])) ||
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
     }
 
     /* Handle special cases first */
-    switch (ri->type & ~(ARM_CP_FLAG_MASK & ~ARM_CP_SPECIAL)) {
+    switch (ri->type & ARM_CP_SPECIAL_MASK) {
+    case 0:
+        break;
     case ARM_CP_NOP:
         return;
     case ARM_CP_NZCV:
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
         }
         return;
     default:
-        break;
+        g_assert_not_reached();
     }
     if ((ri->type & ARM_CP_FPU) && !fp_access_check(s)) {
         return;
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
     }
 
     /* Handle special cases first */
-    switch (ri->type & ~(ARM_CP_FLAG_MASK & ~ARM_CP_SPECIAL)) {
+    switch (ri->type & ARM_CP_SPECIAL_MASK) {
+    case 0:
+        break;
     case ARM_CP_NOP:
         return;
     case ARM_CP_WFI:
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
         s->base.is_jmp = DISAS_WFI;
         return;
     default:
-        break;
+        g_assert_not_reached();
     }
 
     if ((tb_cflags(s->base.tb) & CF_USE_ICOUNT) && (ri->type & ARM_CP_IO)) {
-- 
2.25.1
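
A standalone sketch of the encoding introduced above (not part of the series):
the point of the reorganization is that the special-behaviour values occupy the
low four bits exclusively, while everything from bit 4 up is an independent
flag, so the old "type & ~(ARM_CP_FLAG_MASK & ~ARM_CP_SPECIAL)" dance becomes a
plain mask. The enum values below are copied from the patch; is_special() and
the test harness are invented for illustration.

    #include <assert.h>

    enum {
        ARM_CP_SPECIAL_MASK = 0x000f,
        ARM_CP_NOP          = 0x0001,
        ARM_CP_CONST        = 1 << 4,
        ARM_CP_IO           = 1 << 9,
    };

    /* Replaces the old "type & ~(ARM_CP_FLAG_MASK & ~ARM_CP_SPECIAL)". */
    static int is_special(int type)
    {
        return type & ARM_CP_SPECIAL_MASK;
    }

    int main(void)
    {
        int type = ARM_CP_NOP | ARM_CP_IO;

        assert(is_special(type) == ARM_CP_NOP);  /* flags do not leak in */
        assert(type & ARM_CP_IO);                /* flags test independently */
        assert(!(ARM_CP_CONST & ARM_CP_SPECIAL_MASK)); /* CONST is pure flag */
        return 0;
    }
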
From: Richard Henderson <richard.henderson@linaro.org>

Standardize on g_assert_not_reached() for "should not happen".
Retain abort() when preceded by fprintf or error_report.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 7 +++----
 target/arm/hvf/hvf.c | 2 +-
 target/arm/kvm-stub.c | 4 ++--
 target/arm/kvm.c | 4 ++--
 target/arm/machine.c | 4 ++--
 target/arm/translate-a64.c | 4 ++--
 target/arm/translate-neon.c | 2 +-
 target/arm/translate.c | 4 ++--
 8 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
             break;
         default:
             /* broken reginfo with out-of-range opc1 */
-            assert(false);
-            break;
+            g_assert_not_reached();
         }
         /* assert our permissions are not too lax (stricter is fine) */
         assert((r->access & ~mask) == 0);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
             break;
         default:
             /* Never happens, but compiler isn't smart enough to tell. */
-            abort();
+            g_assert_not_reached();
         }
     }
     *prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
             break;
         default:
             /* Never happens, but compiler isn't smart enough to tell. */
-            abort();
+            g_assert_not_reached();
         }
     }
     if (domain_prot == 3) {
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
         /* we got kicked, no exit to process */
         return 0;
     default:
-        assert(0);
+        g_assert_not_reached();
     }
 
     hvf_sync_vtimer(cpu);
diff --git a/target/arm/kvm-stub.c b/target/arm/kvm-stub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm-stub.c
+++ b/target/arm/kvm-stub.c
@@ -XXX,XX +XXX,XX @@
 
 bool write_kvmstate_to_list(ARMCPU *cpu)
 {
-    abort();
+    g_assert_not_reached();
 }
 
 bool write_list_to_kvmstate(ARMCPU *cpu, int level)
 {
-    abort();
+    g_assert_not_reached();
 }
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ bool write_kvmstate_to_list(ARMCPU *cpu)
             ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &r);
             break;
         default:
-            abort();
+            g_assert_not_reached();
         }
         if (ret) {
             ok = false;
@@ -XXX,XX +XXX,XX @@ bool write_list_to_kvmstate(ARMCPU *cpu, int level)
             r.addr = (uintptr_t)(cpu->cpreg_values + i);
             break;
         default:
-            abort();
+            g_assert_not_reached();
         }
         ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &r);
         if (ret) {
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static int cpu_pre_save(void *opaque)
     if (kvm_enabled()) {
         if (!write_kvmstate_to_list(cpu)) {
             /* This should never fail */
-            abort();
+            g_assert_not_reached();
         }
 
         /*
@@ -XXX,XX +XXX,XX @@ static int cpu_pre_save(void *opaque)
     } else {
         if (!write_cpustate_to_list(cpu, false)) {
             /* This should never fail. */
-            abort();
+            g_assert_not_reached();
         }
     }
 
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
         gen_helper_advsimd_rinth(tcg_res, tcg_op, fpst);
         break;
     default:
-        abort();
+        g_assert_not_reached();
     }
 
     write_fp_sreg(s, rd, tcg_res);
@@ -XXX,XX +XXX,XX @@ static void handle_fp_fcvt(DisasContext *s, int opcode,
         break;
     }
     default:
-        abort();
+        g_assert_not_reached();
     }
 }
 
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c
+++ b/target/arm/translate-neon.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a)
         }
         break;
     default:
-        abort();
+        g_assert_not_reached();
     }
     if ((vd + a->stride * (nregs - 1)) > 31) {
         /*
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
         offset = 4;
         break;
     default:
-        abort();
+        g_assert_not_reached();
     }
     tcg_gen_addi_i32(addr, addr, offset);
     tmp = load_reg(s, 14);
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
         offset = 0;
         break;
     default:
-        abort();
+        g_assert_not_reached();
     }
     tcg_gen_addi_i32(addr, addr, offset);
     gen_helper_set_r13_banked(cpu_env, tcg_constant_i32(mode), addr);
-- 
2.25.1
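
A minimal sketch of the pattern being standardized (illustrative only; decode()
and its values are invented): glib's g_assert_not_reached() names the intent,
prints file and line on failure, and — unlike a bare abort() — is understood by
the compiler as unreachable, which also silences missing-return warnings.

    #include <glib.h>

    /* Models the recurring shape: an exhaustive switch whose default
     * arm is unreachable by construction. */
    static int decode(int op)
    {
        switch (op) {
        case 0:
            return 1;
        case 1:
            return 2;
        default:
            /* Was "abort();" or "assert(0);" before the patch. */
            g_assert_not_reached();
        }
    }

    int main(void)
    {
        g_assert(decode(0) == 1 && decode(1) == 2);
        return 0;
    }
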
From: Richard Henderson <richard.henderson@linaro.org>

Create a typedef as well, and use it in ARMCPRegInfo.
This won't be perfect for debugging, but it'll nicely
display the most common cases.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h | 44 +++++++++++++++++++++++---------------------
 target/arm/helper.c | 2 +-
 2 files changed, 24 insertions(+), 22 deletions(-)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@ enum {
  * described with these bits, then use a laxer set of restrictions, and
  * do the more restrictive/complex check inside a helper function.
  */
-#define PL3_R 0x80
-#define PL3_W 0x40
-#define PL2_R (0x20 | PL3_R)
-#define PL2_W (0x10 | PL3_W)
-#define PL1_R (0x08 | PL2_R)
-#define PL1_W (0x04 | PL2_W)
-#define PL0_R (0x02 | PL1_R)
-#define PL0_W (0x01 | PL1_W)
+typedef enum {
+    PL3_R = 0x80,
+    PL3_W = 0x40,
+    PL2_R = 0x20 | PL3_R,
+    PL2_W = 0x10 | PL3_W,
+    PL1_R = 0x08 | PL2_R,
+    PL1_W = 0x04 | PL2_W,
+    PL0_R = 0x02 | PL1_R,
+    PL0_W = 0x01 | PL1_W,
 
-/*
- * For user-mode some registers are accessible to EL0 via a kernel
- * trap-and-emulate ABI. In this case we define the read permissions
- * as actually being PL0_R. However some bits of any given register
- * may still be masked.
- */
+    /*
+     * For user-mode some registers are accessible to EL0 via a kernel
+     * trap-and-emulate ABI. In this case we define the read permissions
+     * as actually being PL0_R. However some bits of any given register
+     * may still be masked.
+     */
 #ifdef CONFIG_USER_ONLY
-#define PL0U_R PL0_R
+    PL0U_R = PL0_R,
 #else
-#define PL0U_R PL1_R
+    PL0U_R = PL1_R,
 #endif
 
-#define PL3_RW (PL3_R | PL3_W)
-#define PL2_RW (PL2_R | PL2_W)
-#define PL1_RW (PL1_R | PL1_W)
-#define PL0_RW (PL0_R | PL0_W)
+    PL3_RW = PL3_R | PL3_W,
+    PL2_RW = PL2_R | PL2_W,
+    PL1_RW = PL1_R | PL1_W,
+    PL0_RW = PL0_R | PL0_W,
+} CPAccessRights;
 
 typedef enum CPAccessResult {
     /* Access is permitted */
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
     /* Register type: ARM_CP_* bits/values */
     int type;
     /* Access rights: PL*_[RW] */
-    int access;
+    CPAccessRights access;
     /* Security state: ARM_CP_SECSTATE_* bits/values */
     int secure;
     /*
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
      * to encompass the generic architectural permission check.
      */
     if (r->state != ARM_CP_STATE_AA32) {
-        int mask = 0;
+        CPAccessRights mask;
         switch (r->opc1) {
         case 0:
             /* min_EL EL1, but some accessible to EL0 via kernel ABI */
-- 
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Give this enum a name and use in ARMCPRegInfo,
add_cpreg_to_hashtable and define_one_arm_cp_reg_with_opaque.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h | 6 +++---
 target/arm/helper.c | 6 ++++--
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@ enum {
  * Note that we rely on the values of these enums as we iterate through
  * the various states in some places.
  */
-enum {
+typedef enum {
     ARM_CP_STATE_AA32 = 0,
     ARM_CP_STATE_AA64 = 1,
     ARM_CP_STATE_BOTH = 2,
-};
+} CPState;
 
 /*
  * ARM CP register secure state flags. These flags identify security state
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
     uint8_t opc1;
     uint8_t opc2;
     /* Execution state in which this register is visible: ARM_CP_STATE_* */
-    int state;
+    CPState state;
     /* Register type: ARM_CP_* bits/values */
     int type;
     /* Access rights: PL*_[RW] */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ CpuDefinitionInfoList *qmp_query_cpu_definitions(Error **errp)
     }
 }
 
 static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
-                                   void *opaque, int state, int secstate,
+                                   void *opaque, CPState state, int secstate,
                                    int crm, int opc1, int opc2,
                                    const char *name)
 {
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
      * bits; the ARM_CP_64BIT* flag applies only to the AArch32 view of
      * the register, if any.
      */
-    int crm, opc1, opc2, state;
+    int crm, opc1, opc2;
     int crmmin = (r->crm == CP_ANY) ? 0 : r->crm;
     int crmmax = (r->crm == CP_ANY) ? 15 : r->crm;
     int opc1min = (r->opc1 == CP_ANY) ? 0 : r->opc1;
     int opc1max = (r->opc1 == CP_ANY) ? 7 : r->opc1;
     int opc2min = (r->opc2 == CP_ANY) ? 0 : r->opc2;
     int opc2max = (r->opc2 == CP_ANY) ? 7 : r->opc2;
+    CPState state;
 
     /* 64 bit registers have only CRm and Opc1 fields */
     assert(!((r->type & ARM_CP_64BIT) && (r->opc2 || r->crn)));
     /* op0 only exists in the AArch64 encodings */
-- 
2.25.1
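
A compilable sketch of why the PL access bits can become one enum (editorial
illustration; the values are copied from the patch, the harness is invented):
each lower privilege level's rights are defined as a superset of the higher
levels', so the rights nest and a permission check stays a single mask test.

    #include <assert.h>

    typedef enum {
        PL3_R = 0x80,
        PL3_W = 0x40,
        PL2_R = 0x20 | PL3_R,
        PL2_W = 0x10 | PL3_W,
        PL1_R = 0x08 | PL2_R,
        PL1_W = 0x04 | PL2_W,
        PL1_RW = PL1_R | PL1_W,
    } CPAccessRights;

    int main(void)
    {
        CPAccessRights access = PL1_RW;

        /* Anything accessible at PL1 is accessible at PL2 and PL3 too. */
        assert(access & PL2_R);
        assert(access & PL3_R);
        assert(access & PL3_W);
        /* ...but the EL0-read bit (0x02) is not implied. */
        assert(!(access & 0x02));
        return 0;
    }
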
From: Richard Henderson <richard.henderson@linaro.org>

Give this enum a name and use in ARMCPRegInfo and add_cpreg_to_hashtable.
Add the enumerator ARM_CP_SECSTATE_BOTH to clarify how 0
is handled in define_one_arm_cp_reg_with_opaque.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h | 7 ++++---
 target/arm/helper.c | 7 +++++--
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@ typedef enum {
  * registered entry will only have one to identify whether the entry is secure
  * or non-secure.
  */
-enum {
+typedef enum {
+    ARM_CP_SECSTATE_BOTH = 0,       /* define one cpreg for each secstate */
     ARM_CP_SECSTATE_S = (1 << 0),   /* bit[0]: Secure state register */
     ARM_CP_SECSTATE_NS = (1 << 1),  /* bit[1]: Non-secure state register */
-};
+} CPSecureState;
 
 /*
  * Access rights:
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
     /* Access rights: PL*_[RW] */
     CPAccessRights access;
     /* Security state: ARM_CP_SECSTATE_* bits/values */
-    int secure;
+    CPSecureState secure;
     /*
      * The opaque pointer passed to define_arm_cp_regs_with_opaque() when
      * this register was defined: can be used to hand data through to the
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ CpuDefinitionInfoList *qmp_query_cpu_definitions(Error **errp)
 }
 
 static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
-                                   void *opaque, CPState state, int secstate,
+                                   void *opaque, CPState state,
+                                   CPSecureState secstate,
                                    int crm, int opc1, int opc2,
                                    const char *name)
 {
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
                                    r->secure, crm, opc1, opc2,
                                    r->name);
             break;
-        default:
+        case ARM_CP_SECSTATE_BOTH:
             name = g_strdup_printf("%s_S", r->name);
             add_cpreg_to_hashtable(cpu, r, opaque, state,
                                    ARM_CP_SECSTATE_S,
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
                                    ARM_CP_SECSTATE_NS,
                                    crm, opc1, opc2, r->name);
             break;
+        default:
+            g_assert_not_reached();
         }
     } else {
         /* AArch64 registers get mapped to non-secure instance
-- 
2.25.1
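
A hedged sketch of the dispatch that ARM_CP_SECSTATE_BOTH makes explicit: a
zero 'secure' field now names the "register both banks" case rather than
falling into default:. register_one() and register_banked() are invented
stand-ins for add_cpreg_to_hashtable(); only the enum comes from the patch.

    #include <glib.h>
    #include <stdio.h>

    typedef enum {
        ARM_CP_SECSTATE_BOTH = 0,
        ARM_CP_SECSTATE_S = (1 << 0),
        ARM_CP_SECSTATE_NS = (1 << 1),
    } CPSecureState;

    static void register_one(const char *name, CPSecureState ss)
    {
        printf("registering %s (secstate %d)\n", name, ss);
    }

    static void register_banked(const char *name, CPSecureState ss)
    {
        switch (ss) {
        case ARM_CP_SECSTATE_S:
        case ARM_CP_SECSTATE_NS:
            register_one(name, ss);
            break;
        case ARM_CP_SECSTATE_BOTH: {
            /* One secure and one non-secure instance, as in the patch. */
            g_autofree char *s_name = g_strdup_printf("%s_S", name);
            register_one(s_name, ARM_CP_SECSTATE_S);
            register_one(name, ARM_CP_SECSTATE_NS);
            break;
        }
        default:
            g_assert_not_reached();
        }
    }

    int main(void)
    {
        register_banked("SCR", ARM_CP_SECSTATE_BOTH);
        return 0;
    }
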
From: Richard Henderson <richard.henderson@linaro.org>

The new_key field is always non-zero -- drop the if.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-11-richard.henderson@linaro.org
[PMM: reinstated dropped PL3_RW mask]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
 
     for (i = 0; i < ARRAY_SIZE(aliases); i++) {
         const struct E2HAlias *a = &aliases[i];
-        ARMCPRegInfo *src_reg, *dst_reg;
+        ARMCPRegInfo *src_reg, *dst_reg, *new_reg;
+        uint32_t *new_key;
+        bool ok;
 
         if (a->feature && !a->feature(&cpu->isar)) {
             continue;
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
         g_assert(src_reg->opaque == NULL);
 
         /* Create alias before redirection so we dup the right data. */
-        if (a->new_key) {
-            ARMCPRegInfo *new_reg = g_memdup(src_reg, sizeof(ARMCPRegInfo));
-            uint32_t *new_key = g_memdup(&a->new_key, sizeof(uint32_t));
-            bool ok;
+        new_reg = g_memdup(src_reg, sizeof(ARMCPRegInfo));
+        new_key = g_memdup(&a->new_key, sizeof(uint32_t));
 
-            new_reg->name = a->new_name;
-            new_reg->type |= ARM_CP_ALIAS;
-            /* Remove PL1/PL0 access, leaving PL2/PL3 R/W in place. */
-            new_reg->access &= PL2_RW | PL3_RW;
+        new_reg->name = a->new_name;
+        new_reg->type |= ARM_CP_ALIAS;
+        /* Remove PL1/PL0 access, leaving PL2/PL3 R/W in place. */
+        new_reg->access &= PL2_RW | PL3_RW;
 
-            ok = g_hash_table_insert(cpu->cp_regs, new_key, new_reg);
-            g_assert(ok);
-        }
+        ok = g_hash_table_insert(cpu->cp_regs, new_key, new_reg);
+        g_assert(ok);
 
         src_reg->opaque = dst_reg;
         src_reg->orig_readfn = src_reg->readfn ?: raw_read;
-- 
2.25.1
230
+ * these tables are shared with AArch64 which does support them.
231
+ */
232
+const GVecGen3 mla_op[4] = {
233
+ { .fni4 = gen_mla8_i32,
234
+ .fniv = gen_mla_vec,
235
+ .opc = INDEX_op_mul_vec,
236
+ .load_dest = true,
237
+ .vece = MO_8 },
238
+ { .fni4 = gen_mla16_i32,
239
+ .fniv = gen_mla_vec,
240
+ .opc = INDEX_op_mul_vec,
241
+ .load_dest = true,
242
+ .vece = MO_16 },
243
+ { .fni4 = gen_mla32_i32,
244
+ .fniv = gen_mla_vec,
245
+ .opc = INDEX_op_mul_vec,
246
+ .load_dest = true,
247
+ .vece = MO_32 },
248
+ { .fni8 = gen_mla64_i64,
249
+ .fniv = gen_mla_vec,
250
+ .opc = INDEX_op_mul_vec,
251
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
252
+ .load_dest = true,
253
+ .vece = MO_64 },
254
+};
255
+
256
+const GVecGen3 mls_op[4] = {
257
+ { .fni4 = gen_mls8_i32,
258
+ .fniv = gen_mls_vec,
259
+ .opc = INDEX_op_mul_vec,
260
+ .load_dest = true,
261
+ .vece = MO_8 },
262
+ { .fni4 = gen_mls16_i32,
263
+ .fniv = gen_mls_vec,
264
+ .opc = INDEX_op_mul_vec,
265
+ .load_dest = true,
266
+ .vece = MO_16 },
267
+ { .fni4 = gen_mls32_i32,
268
+ .fniv = gen_mls_vec,
269
+ .opc = INDEX_op_mul_vec,
270
+ .load_dest = true,
271
+ .vece = MO_32 },
272
+ { .fni8 = gen_mls64_i64,
273
+ .fniv = gen_mls_vec,
274
+ .opc = INDEX_op_mul_vec,
275
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
276
+ .load_dest = true,
277
+ .vece = MO_64 },
278
+};
279
+
280
/* Translate a NEON data processing instruction. Return nonzero if the
281
instruction is invalid.
282
We process data in a mixture of 32-bit and 64-bit chunks.
283
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
284
return 0;
285
}
286
break;
287
+
288
+ case NEON_3R_VML: /* VMLA, VMLS */
289
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
290
+ u ? &mls_op[size] : &mla_op[size]);
291
+ return 0;
292
}
293
+
294
if (size == 3) {
295
/* 64-bit element instructions. */
296
for (pass = 0; pass < (q ? 2 : 1); pass++) {
297
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
298
}
299
}
300
break;
301
- case NEON_3R_VML: /* VMLA, VMLAL, VMLS,VMLSL */
302
- switch (size) {
303
- case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
304
- case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
305
- case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
306
- default: abort();
307
- }
308
- tcg_temp_free_i32(tmp2);
309
- tmp2 = neon_load_reg(rd, pass);
310
- if (u) { /* VMLS */
311
- gen_neon_rsb(size, tmp, tmp2);
312
- } else { /* VMLA */
313
- gen_neon_add(size, tmp, tmp2);
314
- }
315
- break;
316
case NEON_3R_VMUL:
317
/* VMUL.P8; other cases already eliminated. */
318
gen_helper_neon_mul_p8(tmp, tmp, tmp2);
319
--
57
--
320
2.19.1
58
2.25.1
321
322
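The patch above is a good illustration of the generic vector infrastructure: each
GVecGen3 entry bundles the 32-bit, 64-bit and host-vector expanders for one element
size, and the decoder simply indexes the table and hands the entry to tcg_gen_gvec_3().
A minimal sketch of that consumption pattern, reusing the mla_op/mls_op tables exported
above (the helper function and its parameters are illustrative, not part of the patch):

    /* Sketch: expand VMLA/VMLS through the size-indexed GVecGen3 tables.
     * rd_ofs/rn_ofs/rm_ofs are CPU-env offsets of the d/n/m vector
     * registers, vec_size is the vector length in bytes, size the
     * element-size index, and u selects VMLS (subtract) over VMLA. */
    static void expand_vml(uint32_t rd_ofs, uint32_t rn_ofs, uint32_t rm_ofs,
                           uint32_t vec_size, int size, bool u)
    {
        tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
                       u ? &mls_op[size] : &mla_op[size]);
    }

Because the tables carry a .fniv vector expander plus scalar fallbacks, the same call
site works whether or not the TCG backend supports mul_vec.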
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h     | 6 +++++-
 linux-user/elfload.c | 2 +-
 target/arm/cpu.c     | 4 ----
 target/arm/helper.c  | 2 +-
 target/arm/machine.c | 3 +--
 5 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_NEON,
     ARM_FEATURE_M, /* Microcontroller profile.  */
     ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling.  */
-    ARM_FEATURE_THUMB2EE,
    ARM_FEATURE_V7MP, /* v7 Multiprocessing Extensions */
     ARM_FEATURE_V7VE, /* v7 Virtualization Extensions (non-EL2 parts) */
     ARM_FEATURE_V4T,
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_jazelle(const ARMISARegisters *id)
     return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
 }
 
+static inline bool isar_feature_t32ee(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar3, ID_ISAR3, T32EE) != 0;
+}
+
 static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
 {
     return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
     GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
     GET_FEATURE(ARM_FEATURE_IWMMXT, ARM_HWCAP_ARM_IWMMXT);
-    GET_FEATURE(ARM_FEATURE_THUMB2EE, ARM_HWCAP_ARM_THUMBEE);
+    GET_FEATURE_ID(t32ee, ARM_HWCAP_ARM_THUMBEE);
     GET_FEATURE(ARM_FEATURE_NEON, ARM_HWCAP_ARM_NEON);
     GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
     GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_V7);
     set_feature(&cpu->env, ARM_FEATURE_VFP3);
     set_feature(&cpu->env, ARM_FEATURE_NEON);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     cpu->midr = 0x410fc080;
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_VFP3);
     set_feature(&cpu->env, ARM_FEATURE_VFP_FP16);
     set_feature(&cpu->env, ARM_FEATURE_NEON);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     /* Note that A9 supports the MP extensions even for
      * A9UP and single-core A9MP (which are both different
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_V7VE);
     set_feature(&cpu->env, ARM_FEATURE_VFP4);
     set_feature(&cpu->env, ARM_FEATURE_NEON);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_V7VE);
     set_feature(&cpu->env, ARM_FEATURE_VFP4);
     set_feature(&cpu->env, ARM_FEATURE_NEON);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_arm_cp_regs(cpu, vmsa_pmsa_cp_reginfo);
         define_arm_cp_regs(cpu, vmsa_cp_reginfo);
     }
-    if (arm_feature(env, ARM_FEATURE_THUMB2EE)) {
+    if (cpu_isar_feature(t32ee, cpu)) {
         define_arm_cp_regs(cpu, t2ee_cp_reginfo);
     }
     if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) {
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
 static bool thumb2ee_needed(void *opaque)
 {
     ARMCPU *cpu = opaque;
-    CPUARMState *env = &cpu->env;
 
-    return arm_feature(env, ARM_FEATURE_THUMB2EE);
+    return cpu_isar_feature(t32ee, cpu);
 }
 
 static const VMStateDescription vmstate_thumb2ee = {
-- 
2.19.1

From: Richard Henderson <richard.henderson@linaro.org>

Cast the uint32_t key into a gpointer directly, which
allows us to avoid allocating storage for each key.

Use g_hash_table_lookup when we already have a gpointer
(e.g. for callbacks like count_cpreg), or when using
get_arm_cp_reginfo would require casting away const.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c     |  4 ++--
 target/arm/gdbstub.c |  2 +-
 target/arm/helper.c  | 41 ++++++++++++++++++-----------------------
 3 files changed, 21 insertions(+), 26 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
     ARMCPU *cpu = ARM_CPU(obj);
 
     cpu_set_cpustate_pointers(cpu);
-    cpu->cp_regs = g_hash_table_new_full(g_int_hash, g_int_equal,
-                                         g_free, cpreg_hashtable_data_destroy);
+    cpu->cp_regs = g_hash_table_new_full(g_direct_hash, g_direct_equal,
+                                         NULL, cpreg_hashtable_data_destroy);
 
     QLIST_INIT(&cpu->pre_el_change_hooks);
     QLIST_INIT(&cpu->el_change_hooks);
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/gdbstub.c
+++ b/target/arm/gdbstub.c
@@ -XXX,XX +XXX,XX @@ static void arm_gen_one_xml_sysreg_tag(GString *s, DynamicGDBXMLInfo *dyn_xml,
 static void arm_register_sysreg_for_xml(gpointer key, gpointer value,
                                         gpointer p)
 {
-    uint32_t ri_key = *(uint32_t *)key;
+    uint32_t ri_key = (uintptr_t)key;
     ARMCPRegInfo *ri = value;
     RegisterSysregXmlParam *param = (RegisterSysregXmlParam *)p;
     GString *s = param->s;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ bool write_list_to_cpustate(ARMCPU *cpu)
 static void add_cpreg_to_list(gpointer key, gpointer opaque)
 {
     ARMCPU *cpu = opaque;
-    uint64_t regidx;
-    const ARMCPRegInfo *ri;
-
-    regidx = *(uint32_t *)key;
-    ri = get_arm_cp_reginfo(cpu->cp_regs, regidx);
+    uint32_t regidx = (uintptr_t)key;
+    const ARMCPRegInfo *ri = get_arm_cp_reginfo(cpu->cp_regs, regidx);
 
     if (!(ri->type & (ARM_CP_NO_RAW|ARM_CP_ALIAS))) {
         cpu->cpreg_indexes[cpu->cpreg_array_len] = cpreg_to_kvm_id(regidx);
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_list(gpointer key, gpointer opaque)
 static void count_cpreg(gpointer key, gpointer opaque)
 {
     ARMCPU *cpu = opaque;
-    uint64_t regidx;
     const ARMCPRegInfo *ri;
 
-    regidx = *(uint32_t *)key;
-    ri = get_arm_cp_reginfo(cpu->cp_regs, regidx);
+    ri = g_hash_table_lookup(cpu->cp_regs, key);
 
     if (!(ri->type & (ARM_CP_NO_RAW|ARM_CP_ALIAS))) {
         cpu->cpreg_array_len++;
@@ -XXX,XX +XXX,XX @@ static void count_cpreg(gpointer key, gpointer opaque)
 
 static gint cpreg_key_compare(gconstpointer a, gconstpointer b)
 {
-    uint64_t aidx = cpreg_to_kvm_id(*(uint32_t *)a);
-    uint64_t bidx = cpreg_to_kvm_id(*(uint32_t *)b);
+    uint64_t aidx = cpreg_to_kvm_id((uintptr_t)a);
+    uint64_t bidx = cpreg_to_kvm_id((uintptr_t)b);
 
     if (aidx > bidx) {
         return 1;
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
     for (i = 0; i < ARRAY_SIZE(aliases); i++) {
         const struct E2HAlias *a = &aliases[i];
         ARMCPRegInfo *src_reg, *dst_reg, *new_reg;
-        uint32_t *new_key;
         bool ok;
 
         if (a->feature && !a->feature(&cpu->isar)) {
             continue;
         }
 
-        src_reg = g_hash_table_lookup(cpu->cp_regs, &a->src_key);
-        dst_reg = g_hash_table_lookup(cpu->cp_regs, &a->dst_key);
+        src_reg = g_hash_table_lookup(cpu->cp_regs,
+                                      (gpointer)(uintptr_t)a->src_key);
+        dst_reg = g_hash_table_lookup(cpu->cp_regs,
+                                      (gpointer)(uintptr_t)a->dst_key);
         g_assert(src_reg != NULL);
         g_assert(dst_reg != NULL);
 
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
 
         /* Create alias before redirection so we dup the right data. */
         new_reg = g_memdup(src_reg, sizeof(ARMCPRegInfo));
-        new_key = g_memdup(&a->new_key, sizeof(uint32_t));
 
         new_reg->name = a->new_name;
         new_reg->type |= ARM_CP_ALIAS;
         /* Remove PL1/PL0 access, leaving PL2/PL3 R/W in place. */
         new_reg->access &= PL2_RW | PL3_RW;
 
-        ok = g_hash_table_insert(cpu->cp_regs, new_key, new_reg);
+        ok = g_hash_table_insert(cpu->cp_regs,
+                                 (gpointer)(uintptr_t)a->new_key, new_reg);
         g_assert(ok);
 
         src_reg->opaque = dst_reg;
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
     /* Private utility function for define_one_arm_cp_reg_with_opaque():
      * add a single reginfo struct to the hash table.
      */
-    uint32_t *key = g_new(uint32_t, 1);
+    uint32_t key;
     ARMCPRegInfo *r2 = g_memdup(r, sizeof(ARMCPRegInfo));
     int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
     int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
         if (r->cp == 0 || r->state == ARM_CP_STATE_BOTH) {
             r2->cp = CP_REG_ARM64_SYSREG_CP;
         }
-        *key = ENCODE_AA64_CP_REG(r2->cp, r2->crn, crm,
-                                  r2->opc0, opc1, opc2);
+        key = ENCODE_AA64_CP_REG(r2->cp, r2->crn, crm,
+                                 r2->opc0, opc1, opc2);
     } else {
-        *key = ENCODE_CP_REG(r2->cp, is64, ns, r2->crn, crm, opc1, opc2);
+        key = ENCODE_CP_REG(r2->cp, is64, ns, r2->crn, crm, opc1, opc2);
     }
     if (opaque) {
         r2->opaque = opaque;
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
      * requested.
      */
     if (!(r->type & ARM_CP_OVERRIDE)) {
-        ARMCPRegInfo *oldreg;
-        oldreg = g_hash_table_lookup(cpu->cp_regs, key);
+        const ARMCPRegInfo *oldreg = get_arm_cp_reginfo(cpu->cp_regs, key);
         if (oldreg && !(oldreg->type & ARM_CP_OVERRIDE)) {
             fprintf(stderr, "Register redefined: cp=%d %d bit "
                     "crn=%d crm=%d opc1=%d opc2=%d, "
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
             g_assert_not_reached();
         }
     }
-    g_hash_table_insert(cpu->cp_regs, key, r2);
+    g_hash_table_insert(cpu->cp_regs, (gpointer)(uintptr_t)key, r2);
 }
 
@@ -XXX,XX +XXX,XX @@ void modify_arm_cp_regs_with_len(ARMCPRegInfo *regs, size_t regs_len,
 
 const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp)
 {
-    return g_hash_table_lookup(cpregs, &encoded_cp);
+    return g_hash_table_lookup(cpregs, (gpointer)(uintptr_t)encoded_cp);
 }
 
 void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri,
-- 
2.25.1
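The direct-key idiom in the patch above is worth spelling out: with
g_direct_hash/g_direct_equal the key *is* the pointer value, so a small integer can be
packed into the pointer and no per-key allocation (or key-destroy function) is needed.
A self-contained sketch of the idiom in plain GLib, not QEMU code:

    #include <glib.h>
    #include <stdint.h>

    int main(void)
    {
        /* Key-destroy is NULL: keys are packed integers, nothing to free. */
        GHashTable *t = g_hash_table_new_full(g_direct_hash, g_direct_equal,
                                              NULL, g_free);
        uint32_t key = 0x00011142;

        g_hash_table_insert(t, (gpointer)(uintptr_t)key, g_strdup("value"));
        g_print("%s\n", (char *)g_hash_table_lookup(t,
                                                    (gpointer)(uintptr_t)key));
        g_hash_table_destroy(t);
        return 0;
    }

The round trip through (uintptr_t) is what keeps the integer-to-pointer cast
well-defined on both 32-bit and 64-bit hosts, which is why the patch writes
(gpointer)(uintptr_t)key rather than a bare pointer cast.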
From: Richard Henderson <richard.henderson@linaro.org>

Create struct ARMISARegisters, to be accessed during translation.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h      |  32 ++++----
 hw/intc/armv7m_nvic.c |  12 +--
 target/arm/cpu.c      | 178 +++++++++++++++++++++---------------------
 target/arm/cpu64.c    |  70 ++++++++---------
 target/arm/helper.c   |  28 +++----
 5 files changed, 162 insertions(+), 158 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
      * ARMv7AR ARM Architecture Reference Manual. A reset_ prefix
      * is used for reset values of non-constant registers; no reset_
      * prefix means a constant register.
+     * Some of these registers are split out into a substructure that
+     * is shared with the translators to control the ISA.
      */
+    struct ARMISARegisters {
+        uint32_t id_isar0;
+        uint32_t id_isar1;
+        uint32_t id_isar2;
+        uint32_t id_isar3;
+        uint32_t id_isar4;
+        uint32_t id_isar5;
+        uint32_t id_isar6;
+        uint32_t mvfr0;
+        uint32_t mvfr1;
+        uint32_t mvfr2;
+        uint64_t id_aa64isar0;
+        uint64_t id_aa64isar1;
+        uint64_t id_aa64pfr0;
+        uint64_t id_aa64pfr1;
+    } isar;
     uint32_t midr;
     uint32_t revidr;
     uint32_t reset_fpsid;
-    uint32_t mvfr0;
-    uint32_t mvfr1;
-    uint32_t mvfr2;
     uint32_t ctr;
     uint32_t reset_sctlr;
     uint32_t id_pfr0;
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
     uint32_t id_mmfr2;
     uint32_t id_mmfr3;
     uint32_t id_mmfr4;
-    uint32_t id_isar0;
-    uint32_t id_isar1;
-    uint32_t id_isar2;
-    uint32_t id_isar3;
-    uint32_t id_isar4;
-    uint32_t id_isar5;
-    uint32_t id_isar6;
-    uint64_t id_aa64pfr0;
-    uint64_t id_aa64pfr1;
     uint64_t id_aa64dfr0;
     uint64_t id_aa64dfr1;
     uint64_t id_aa64afr0;
     uint64_t id_aa64afr1;
-    uint64_t id_aa64isar0;
-    uint64_t id_aa64isar1;
     uint64_t id_aa64mmfr0;
     uint64_t id_aa64mmfr1;
     uint32_t dbgdidr;
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
     case 0xd5c: /* MMFR3.  */
         return cpu->id_mmfr3;
     case 0xd60: /* ISAR0.  */
-        return cpu->id_isar0;
+        return cpu->isar.id_isar0;
     case 0xd64: /* ISAR1.  */
-        return cpu->id_isar1;
+        return cpu->isar.id_isar1;
     case 0xd68: /* ISAR2.  */
-        return cpu->id_isar2;
+        return cpu->isar.id_isar2;
     case 0xd6c: /* ISAR3.  */
-        return cpu->id_isar3;
+        return cpu->isar.id_isar3;
     case 0xd70: /* ISAR4.  */
-        return cpu->id_isar4;
+        return cpu->isar.id_isar4;
     case 0xd74: /* ISAR5.  */
-        return cpu->id_isar5;
+        return cpu->isar.id_isar5;
     case 0xd78: /* CLIDR */
         return cpu->clidr;
     case 0xd7c: /* CTR */
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
     g_hash_table_foreach(cpu->cp_regs, cp_reg_check_reset, cpu);
 
     env->vfp.xregs[ARM_VFP_FPSID] = cpu->reset_fpsid;
-    env->vfp.xregs[ARM_VFP_MVFR0] = cpu->mvfr0;
-    env->vfp.xregs[ARM_VFP_MVFR1] = cpu->mvfr1;
-    env->vfp.xregs[ARM_VFP_MVFR2] = cpu->mvfr2;
+    env->vfp.xregs[ARM_VFP_MVFR0] = cpu->isar.mvfr0;
+    env->vfp.xregs[ARM_VFP_MVFR1] = cpu->isar.mvfr1;
+    env->vfp.xregs[ARM_VFP_MVFR2] = cpu->isar.mvfr2;
 
     cpu->power_state = cpu->start_powered_off ? PSCI_OFF : PSCI_ON;
     s->halted = cpu->start_powered_off;
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * registers as well. These are id_pfr1[7:4] and id_aa64pfr0[15:12].
          */
         cpu->id_pfr1 &= ~0xf0;
-        cpu->id_aa64pfr0 &= ~0xf000;
+        cpu->isar.id_aa64pfr0 &= ~0xf000;
     }
 
     if (!cpu->has_el2) {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * registers if we don't have EL2. These are id_pfr1[15:12] and
          * id_aa64pfr0_el1[11:8].
          */
-        cpu->id_aa64pfr0 &= ~0xf00;
+        cpu->isar.id_aa64pfr0 &= ~0xf00;
         cpu->id_pfr1 &= ~0xf000;
     }
 
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
     cpu->midr = 0x4107b362;
     cpu->reset_fpsid = 0x410120b4;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00050078;
     cpu->id_pfr0 = 0x111;
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01130003;
     cpu->id_mmfr1 = 0x10030302;
     cpu->id_mmfr2 = 0x01222110;
-    cpu->id_isar0 = 0x00140011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11231111;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x141;
+    cpu->isar.id_isar0 = 0x00140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231111;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
     cpu->reset_auxcr = 7;
 }
 
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
     cpu->midr = 0x4117b363;
     cpu->reset_fpsid = 0x410120b4;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00050078;
     cpu->id_pfr0 = 0x111;
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01130003;
     cpu->id_mmfr1 = 0x10030302;
     cpu->id_mmfr2 = 0x01222110;
-    cpu->id_isar0 = 0x00140011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11231111;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x141;
+    cpu->isar.id_isar0 = 0x00140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231111;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
     cpu->reset_auxcr = 7;
 }
 
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     cpu->midr = 0x410fb767;
     cpu->reset_fpsid = 0x410120b5;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00050078;
     cpu->id_pfr0 = 0x111;
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01130003;
     cpu->id_mmfr1 = 0x10030302;
     cpu->id_mmfr2 = 0x01222100;
-    cpu->id_isar0 = 0x0140011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11231121;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x01141;
+    cpu->isar.id_isar0 = 0x0140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231121;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x01141;
     cpu->reset_auxcr = 7;
 }
 
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     cpu->midr = 0x410fb022;
     cpu->reset_fpsid = 0x410120b4;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1d192992; /* 32K icache 32K dcache */
     cpu->id_pfr0 = 0x111;
     cpu->id_pfr1 = 0x1;
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01100103;
     cpu->id_mmfr1 = 0x10020302;
     cpu->id_mmfr2 = 0x01222000;
-    cpu->id_isar0 = 0x00100011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11221011;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x141;
+    cpu->isar.id_isar0 = 0x00100011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11221011;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
     cpu->reset_auxcr = 1;
 }
 
@@ -XXX,XX +XXX,XX @@ static void cortex_m3_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x00000000;
     cpu->id_mmfr3 = 0x00000000;
-    cpu->id_isar0 = 0x01141110;
-    cpu->id_isar1 = 0x02111000;
-    cpu->id_isar2 = 0x21112231;
-    cpu->id_isar3 = 0x01111110;
-    cpu->id_isar4 = 0x01310102;
-    cpu->id_isar5 = 0x00000000;
-    cpu->id_isar6 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01141110;
+    cpu->isar.id_isar1 = 0x02111000;
+    cpu->isar.id_isar2 = 0x21112231;
+    cpu->isar.id_isar3 = 0x01111110;
+    cpu->isar.id_isar4 = 0x01310102;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
 }
 
 static void cortex_m4_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x00000000;
     cpu->id_mmfr3 = 0x00000000;
-    cpu->id_isar0 = 0x01141110;
-    cpu->id_isar1 = 0x02111000;
-    cpu->id_isar2 = 0x21112231;
-    cpu->id_isar3 = 0x01111110;
-    cpu->id_isar4 = 0x01310102;
-    cpu->id_isar5 = 0x00000000;
-    cpu->id_isar6 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01141110;
+    cpu->isar.id_isar1 = 0x02111000;
+    cpu->isar.id_isar2 = 0x21112231;
+    cpu->isar.id_isar3 = 0x01111110;
+    cpu->isar.id_isar4 = 0x01310102;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
 }
 
 static void cortex_m33_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void cortex_m33_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x01000000;
     cpu->id_mmfr3 = 0x00000000;
-    cpu->id_isar0 = 0x01101110;
-    cpu->id_isar1 = 0x02212000;
-    cpu->id_isar2 = 0x20232232;
-    cpu->id_isar3 = 0x01111131;
-    cpu->id_isar4 = 0x01310132;
-    cpu->id_isar5 = 0x00000000;
-    cpu->id_isar6 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01101110;
+    cpu->isar.id_isar1 = 0x02212000;
+    cpu->isar.id_isar2 = 0x20232232;
+    cpu->isar.id_isar3 = 0x01111131;
+    cpu->isar.id_isar4 = 0x01310132;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
     cpu->clidr = 0x00000000;
     cpu->ctr = 0x8000c000;
 }
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x01200000;
     cpu->id_mmfr3 = 0x0211;
-    cpu->id_isar0 = 0x02101111;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232141;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x0010142;
-    cpu->id_isar5 = 0x0;
-    cpu->id_isar6 = 0x0;
+    cpu->isar.id_isar0 = 0x02101111;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232141;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x0010142;
+    cpu->isar.id_isar5 = 0x0;
+    cpu->isar.id_isar6 = 0x0;
     cpu->mp_is_up = true;
     cpu->pmsav7_dregion = 16;
     define_arm_cp_regs(cpu, cortexr5_cp_reginfo);
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     cpu->midr = 0x410fc080;
     cpu->reset_fpsid = 0x410330c0;
-    cpu->mvfr0 = 0x11110222;
-    cpu->mvfr1 = 0x00011111;
+    cpu->isar.mvfr0 = 0x11110222;
+    cpu->isar.mvfr1 = 0x00011111;
     cpu->ctr = 0x82048004;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x1031;
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
     cpu->id_mmfr1 = 0x20000000;
     cpu->id_mmfr2 = 0x01202000;
     cpu->id_mmfr3 = 0x11;
-    cpu->id_isar0 = 0x00101111;
-    cpu->id_isar1 = 0x12112111;
-    cpu->id_isar2 = 0x21232031;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x00111142;
+    cpu->isar.id_isar0 = 0x00101111;
+    cpu->isar.id_isar1 = 0x12112111;
+    cpu->isar.id_isar2 = 0x21232031;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x00111142;
     cpu->dbgdidr = 0x15141000;
     cpu->clidr = (1 << 27) | (2 << 24) | 3;
     cpu->ccsidr[0] = 0xe007e01a; /* 16k L1 dcache. */
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_CBAR);
     cpu->midr = 0x410fc090;
     cpu->reset_fpsid = 0x41033090;
-    cpu->mvfr0 = 0x11110222;
-    cpu->mvfr1 = 0x01111111;
+    cpu->isar.mvfr0 = 0x11110222;
+    cpu->isar.mvfr1 = 0x01111111;
     cpu->ctr = 0x80038003;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x1031;
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
     cpu->id_mmfr1 = 0x20000000;
     cpu->id_mmfr2 = 0x01230000;
     cpu->id_mmfr3 = 0x00002111;
-    cpu->id_isar0 = 0x00101111;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232041;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x00111142;
+    cpu->isar.id_isar0 = 0x00101111;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232041;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x00111142;
     cpu->dbgdidr = 0x35141000;
     cpu->clidr = (1 << 27) | (1 << 24) | 3;
     cpu->ccsidr[0] = 0xe00fe019; /* 16k L1 dcache. */
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
     cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A7;
     cpu->midr = 0x410fc075;
     cpu->reset_fpsid = 0x41023075;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x11111111;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x11111111;
     cpu->ctr = 0x84448003;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x00001131;
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
     /* a7_mpcore_r0p5_trm, page 4-4 gives 0x01101110; but
      * table 4-41 gives 0x02101110, which includes the arm div insns.
      */
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232041;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x10011142;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232041;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x10011142;
     cpu->dbgdidr = 0x3515f005;
     cpu->clidr = 0x0a200023;
     cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
     cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A15;
     cpu->midr = 0x412fc0f1;
     cpu->reset_fpsid = 0x410430f0;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x11111111;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x11111111;
     cpu->ctr = 0x8444c004;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x00001131;
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
     cpu->id_mmfr1 = 0x20000000;
     cpu->id_mmfr2 = 0x01240000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232041;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x10011142;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232041;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x10011142;
     cpu->dbgdidr = 0x3515f021;
     cpu->clidr = 0x0a200023;
     cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     cpu->midr = 0x411fd070;
     cpu->revidr = 0x00000000;
     cpu->reset_fpsid = 0x41034070;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x12111111;
-    cpu->mvfr2 = 0x00000043;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
     cpu->ctr = 0x8444c004;
     cpu->reset_sctlr = 0x00c50838;
     cpu->id_pfr0 = 0x00000131;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     cpu->id_mmfr1 = 0x40000000;
     cpu->id_mmfr2 = 0x01260000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232042;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x00011142;
-    cpu->id_isar5 = 0x00011121;
-    cpu->id_isar6 = 0;
-    cpu->id_aa64pfr0 = 0x00002222;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_isar6 = 0;
+    cpu->isar.id_aa64pfr0 = 0x00002222;
     cpu->id_aa64dfr0 = 0x10305106;
     cpu->pmceid0 = 0x00000000;
     cpu->pmceid1 = 0x00000000;
-    cpu->id_aa64isar0 = 0x00011120;
+    cpu->isar.id_aa64isar0 = 0x00011120;
     cpu->id_aa64mmfr0 = 0x00001124;
     cpu->dbgdidr = 0x3516d000;
     cpu->clidr = 0x0a200023;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
     cpu->midr = 0x410fd034;
     cpu->revidr = 0x00000000;
     cpu->reset_fpsid = 0x41034070;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x12111111;
-    cpu->mvfr2 = 0x00000043;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
     cpu->ctr = 0x84448004; /* L1Ip = VIPT */
     cpu->reset_sctlr = 0x00c50838;
     cpu->id_pfr0 = 0x00000131;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
     cpu->id_mmfr1 = 0x40000000;
     cpu->id_mmfr2 = 0x01260000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232042;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x00011142;
-    cpu->id_isar5 = 0x00011121;
-    cpu->id_isar6 = 0;
-    cpu->id_aa64pfr0 = 0x00002222;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_isar6 = 0;
+    cpu->isar.id_aa64pfr0 = 0x00002222;
     cpu->id_aa64dfr0 = 0x10305106;
-    cpu->id_aa64isar0 = 0x00011120;
+    cpu->isar.id_aa64isar0 = 0x00011120;
     cpu->id_aa64mmfr0 = 0x00001122; /* 40 bit physical addr */
     cpu->dbgdidr = 0x3516d000;
     cpu->clidr = 0x0a200023;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
     cpu->midr = 0x410fd083;
     cpu->revidr = 0x00000000;
     cpu->reset_fpsid = 0x41034080;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x12111111;
-    cpu->mvfr2 = 0x00000043;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
     cpu->ctr = 0x8444c004;
     cpu->reset_sctlr = 0x00c50838;
     cpu->id_pfr0 = 0x00000131;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
     cpu->id_mmfr1 = 0x40000000;
     cpu->id_mmfr2 = 0x01260000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232042;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x00011142;
-    cpu->id_isar5 = 0x00011121;
-    cpu->id_aa64pfr0 = 0x00002222;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_aa64pfr0 = 0x00002222;
     cpu->id_aa64dfr0 = 0x10305106;
     cpu->pmceid0 = 0x00000000;
     cpu->pmceid1 = 0x00000000;
-    cpu->id_aa64isar0 = 0x00011120;
+    cpu->isar.id_aa64isar0 = 0x00011120;
     cpu->id_aa64mmfr0 = 0x00001124;
     cpu->dbgdidr = 0x3516d000;
     cpu->clidr = 0x0a200023;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t id_pfr1_read(CPUARMState *env, const ARMCPRegInfo *ri)
 static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
     ARMCPU *cpu = arm_env_get_cpu(env);
-    uint64_t pfr0 = cpu->id_aa64pfr0;
+    uint64_t pfr0 = cpu->isar.id_aa64pfr0;
 
     if (env->gicv3state) {
         pfr0 |= 1 << 24;
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_ISAR0", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar0 },
+              .resetvalue = cpu->isar.id_isar0 },
             { .name = "ID_ISAR1", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar1 },
+              .resetvalue = cpu->isar.id_isar1 },
             { .name = "ID_ISAR2", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar2 },
+              .resetvalue = cpu->isar.id_isar2 },
             { .name = "ID_ISAR3", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar3 },
+              .resetvalue = cpu->isar.id_isar3 },
             { .name = "ID_ISAR4", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 4,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar4 },
+              .resetvalue = cpu->isar.id_isar4 },
             { .name = "ID_ISAR5", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 5,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar5 },
+              .resetvalue = cpu->isar.id_isar5 },
             { .name = "ID_MMFR4", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 6,
               .access = PL1_R, .type = ARM_CP_CONST,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_ISAR6", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 7,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar6 },
+              .resetvalue = cpu->isar.id_isar6 },
             REGINFO_SENTINEL
         };
         define_arm_cp_regs(cpu, v6_idregs);
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_AA64PFR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_aa64pfr1},
+              .resetvalue = cpu->isar.id_aa64pfr1},
             { .name = "ID_AA64PFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_AA64ISAR0_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_aa64isar0 },
+              .resetvalue = cpu->isar.id_aa64isar0 },
             { .name = "ID_AA64ISAR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_aa64isar1 },
+              .resetvalue = cpu->isar.id_aa64isar1 },
             { .name = "ID_AA64ISAR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "MVFR0_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->mvfr0 },
+              .resetvalue = cpu->isar.mvfr0 },
             { .name = "MVFR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->mvfr1 },
+              .resetvalue = cpu->isar.mvfr1 },
             { .name = "MVFR2_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->mvfr2 },
+              .resetvalue = cpu->isar.mvfr2 },
             { .name = "MVFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
-- 
2.19.1

From: Richard Henderson <richard.henderson@linaro.org>

Simplify freeing cp_regs hash table entries by using a single
allocation for the entire value.

This fixes a theoretical bug if we were to ever free the entire
hash table, because we've been installing string literal constants
into the cpreg structure in define_arm_vh_e2h_redirects_aliases.
However, at present we only free entries created for AArch32
wildcard cpregs which get overwritten by more specific cpregs,
so this bug is never exposed.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c    | 16 +---------------
 target/arm/helper.c | 10 ++++++++--
 2 files changed, 9 insertions(+), 17 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ uint64_t arm_cpu_mp_affinity(int idx, uint8_t clustersz)
     return (Aff1 << ARM_AFF1_SHIFT) | Aff0;
 }
 
-static void cpreg_hashtable_data_destroy(gpointer data)
-{
-    /*
-     * Destroy function for cpu->cp_regs hashtable data entries.
-     * We must free the name string because it was g_strdup()ed in
-     * add_cpreg_to_hashtable(). It's OK to cast away the 'const'
-     * from r->name because we know we definitely allocated it.
-     */
-    ARMCPRegInfo *r = data;
-
-    g_free((void *)r->name);
-    g_free(r);
-}
-
 static void arm_cpu_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
 
     cpu_set_cpustate_pointers(cpu);
     cpu->cp_regs = g_hash_table_new_full(g_direct_hash, g_direct_equal,
-                                         NULL, cpreg_hashtable_data_destroy);
+                                         NULL, g_free);
 
     QLIST_INIT(&cpu->pre_el_change_hooks);
     QLIST_INIT(&cpu->el_change_hooks);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
      * add a single reginfo struct to the hash table.
      */
     uint32_t key;
-    ARMCPRegInfo *r2 = g_memdup(r, sizeof(ARMCPRegInfo));
+    ARMCPRegInfo *r2;
     int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
     int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
+    size_t name_len;
+
+    /* Combine cpreg and name into one allocation. */
+    name_len = strlen(name) + 1;
+    r2 = g_malloc(sizeof(*r2) + name_len);
+    *r2 = *r;
+    r2->name = memcpy(r2 + 1, name, name_len);
 
-    r2->name = g_strdup(name);
     /* Reset the secure state to the specific incoming state. This is
      * necessary as the register may have been defined with both states.
      */
-- 
2.25.1
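The single-allocation trick above generalizes to any heap struct that owns exactly one
string: copy the name into trailing storage of the same block, and one g_free() (here,
the hash table's value-destroy) releases struct and string together. A minimal sketch
with a stand-in type (Reg and reg_dup are hypothetical names standing in for
ARMCPRegInfo and the logic inside add_cpreg_to_hashtable):

    #include <glib.h>
    #include <string.h>

    typedef struct {
        const char *name;
        int type;
    } Reg;

    /* Duplicate r, appending a private copy of name in the same block,
     * so the result can be released with a single g_free(). */
    static Reg *reg_dup(const Reg *r, const char *name)
    {
        size_t name_len = strlen(name) + 1;
        Reg *r2 = g_malloc(sizeof(*r2) + name_len);

        *r2 = *r;
        r2->name = memcpy(r2 + 1, name, name_len);
        return r2;
    }

Note that r2 + 1 points just past the struct, i.e. at the extra name_len bytes that
g_malloc() appended to the allocation.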
From: Richard Henderson <richard.henderson@linaro.org>

Both arm and thumb2 division are controlled by the same ISAR field,
which takes care of the arm implies thumb case.  Having M imply
thumb2 division was wrong for cortex-m0, which is v6m and does not
have thumb2 at all, much less thumb2 division.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h       | 12 ++++++++++--
 linux-user/elfload.c   |  4 ++--
 target/arm/cpu.c       | 10 +---------
 target/arm/translate.c |  4 ++--
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_VFP3,
     ARM_FEATURE_VFP_FP16,
     ARM_FEATURE_NEON,
-    ARM_FEATURE_THUMB_DIV, /* divide supported in Thumb encoding */
     ARM_FEATURE_M, /* Microcontroller profile.  */
     ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling.  */
     ARM_FEATURE_THUMB2EE,
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_V5,
     ARM_FEATURE_STRONGARM,
     ARM_FEATURE_VAPA, /* cp15 VA to PA lookups */
-    ARM_FEATURE_ARM_DIV, /* divide supported in ARM encoding */
     ARM_FEATURE_VFP4, /* VFPv4 (implies that NEON is v2) */
     ARM_FEATURE_GENERIC_TIMER,
     ARM_FEATURE_MVFR, /* Media and VFP Feature Registers 0 and 1 */
@@ -XXX,XX +XXX,XX @@ extern const uint64_t pred_esz_masks[4];
 /*
  * 32-bit feature tests via id registers.
  */
+static inline bool isar_feature_thumb_div(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) != 0;
+}
+
+static inline bool isar_feature_arm_div(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
+}
+
 static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
 {
     return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
     GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
     GET_FEATURE(ARM_FEATURE_VFP4, ARM_HWCAP_ARM_VFPv4);
-    GET_FEATURE(ARM_FEATURE_ARM_DIV, ARM_HWCAP_ARM_IDIVA);
-    GET_FEATURE(ARM_FEATURE_THUMB_DIV, ARM_HWCAP_ARM_IDIVT);
+    GET_FEATURE_ID(arm_div, ARM_HWCAP_ARM_IDIVA);
+    GET_FEATURE_ID(thumb_div, ARM_HWCAP_ARM_IDIVT);
     /* All QEMU's VFPv3 CPUs have 32 registers, see VFP_DREG in translate.c.
      * Note that the ARM_HWCAP_ARM_VFPv3D16 bit is always the inverse of
      * ARM_HWCAP_ARM_VFPD32 (and so always clear for QEMU); it is unrelated
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * Presence of EL2 itself is ARM_FEATURE_EL2, and of the
          * Security Extensions is ARM_FEATURE_EL3.
          */
-        set_feature(env, ARM_FEATURE_ARM_DIV);
+        assert(cpu_isar_feature(arm_div, cpu));
         set_feature(env, ARM_FEATURE_LPAE);
         set_feature(env, ARM_FEATURE_V7);
     }
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
     if (arm_feature(env, ARM_FEATURE_V5)) {
         set_feature(env, ARM_FEATURE_V4T);
     }
-    if (arm_feature(env, ARM_FEATURE_M)) {
-        set_feature(env, ARM_FEATURE_THUMB_DIV);
-    }
-    if (arm_feature(env, ARM_FEATURE_ARM_DIV)) {
-        set_feature(env, ARM_FEATURE_THUMB_DIV);
-    }
     if (arm_feature(env, ARM_FEATURE_VFP4)) {
         set_feature(env, ARM_FEATURE_VFP3);
         set_feature(env, ARM_FEATURE_VFP_FP16);
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
     ARMCPU *cpu = ARM_CPU(obj);
 
     set_feature(&cpu->env, ARM_FEATURE_V7);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB_DIV);
-    set_feature(&cpu->env, ARM_FEATURE_ARM_DIV);
     set_feature(&cpu->env, ARM_FEATURE_V7MP);
     set_feature(&cpu->env, ARM_FEATURE_PMSA);
     cpu->midr = 0x411fc153; /* r1p3 */
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
                 case 1:
                 case 3:
                     /* SDIV, UDIV */
-                    if (!arm_dc_feature(s, ARM_FEATURE_ARM_DIV)) {
+                    if (!dc_isar_feature(arm_div, s)) {
                         goto illegal_op;
                     }
                     if (((insn >> 5) & 7) || (rd != 15)) {
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
             tmp2 = load_reg(s, rm);
             if ((op & 0x50) == 0x10) {
                 /* sdiv, udiv */
-                if (!arm_dc_feature(s, ARM_FEATURE_THUMB_DIV)) {
+                if (!dc_isar_feature(thumb_div, s)) {
                     goto illegal_op;
                 }
                 if (op & 0x20)
-- 
2.19.1
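The two isar_feature tests added above both read the same four-bit ID_ISAR0.DIVIDE
field, which is why "arm implies thumb" falls out for free: the architecture defines
value 1 as SDIV/UDIV in the Thumb encoding only and value 2 as adding the Arm
encoding. A hand-expanded sketch of what the FIELD_EX32() calls boil down to (DIVIDE
occupies bits [27:24] of ID_ISAR0; the helper names here are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    /* Extract ID_ISAR0.DIVIDE, bits [27:24]. */
    static inline uint32_t id_isar0_divide(uint32_t id_isar0)
    {
        return (id_isar0 >> 24) & 0xf;
    }

    /* DIVIDE >= 1: SDIV/UDIV available in the Thumb encoding. */
    static inline bool has_thumb_div(uint32_t id_isar0)
    {
        return id_isar0_divide(id_isar0) != 0;
    }

    /* DIVIDE >= 2: SDIV/UDIV also available in the Arm encoding. */
    static inline bool has_arm_div(uint32_t id_isar0)
    {
        return id_isar0_divide(id_isar0) > 1;
    }

For example, the Cortex-A7's ID_ISAR0 of 0x02101110 quoted earlier in this series
yields DIVIDE == 2, i.e. both encodings implemented.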
From: Richard Henderson <richard.henderson@linaro.org>

Instead of shifts and masks, use direct loads and stores from
the neon register file.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-21-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 92 +++++++++++++++++++++++-------------------
 1 file changed, 50 insertions(+), 42 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 neon_load_reg(int reg, int pass)
     return tmp;
 }
 
+static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
+{
+    long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
+
+    switch (mop) {
+    case MO_UB:
+        tcg_gen_ld8u_i32(var, cpu_env, offset);
+        break;
+    case MO_UW:
+        tcg_gen_ld16u_i32(var, cpu_env, offset);
+        break;
+    case MO_UL:
+        tcg_gen_ld_i32(var, cpu_env, offset);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
 {
     long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
@@ -XXX,XX +XXX,XX @@ static void neon_store_reg(int reg, int pass, TCGv_i32 var)
     tcg_temp_free_i32(var);
 }
 
+static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
+{
+    long offset = neon_element_offset(reg, ele, size);
+
+    switch (size) {
+    case MO_8:
+        tcg_gen_st8_i32(var, cpu_env, offset);
+        break;
+    case MO_16:
+        tcg_gen_st16_i32(var, cpu_env, offset);
+        break;
+    case MO_32:
+        tcg_gen_st_i32(var, cpu_env, offset);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
 {
     long offset = neon_element_offset(reg, ele, size);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     int stride;
     int size;
     int reg;
-    int pass;
     int load;
-    int shift;
     int n;
     int vec_size;
     int mmu_idx;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     } else {
         /* Single element.  */
         int idx = (insn >> 4) & 0xf;
-        pass = (insn >> 7) & 1;
+        int reg_idx;
         switch (size) {
         case 0:
-            shift = ((insn >> 5) & 3) * 8;
+            reg_idx = (insn >> 5) & 7;
             stride = 1;
             break;
         case 1:
-            shift = ((insn >> 6) & 1) * 16;
+            reg_idx = (insn >> 6) & 3;
             stride = (insn & (1 << 5)) ? 2 : 1;
             break;
         case 2:
-            shift = 0;
+            reg_idx = (insn >> 7) & 1;
             stride = (insn & (1 << 6)) ? 2 : 1;
             break;
         default:
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
              */
             return 1;
         }
+        tmp = tcg_temp_new_i32();
         addr = tcg_temp_new_i32();
         load_reg_var(s, addr, rn);
         for (reg = 0; reg < nregs; reg++) {
             if (load) {
-                tmp = tcg_temp_new_i32();
-                switch (size) {
-                case 0:
-                    gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
-                    break;
-                case 1:
-                    gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
-                    break;
-                case 2:
-                    gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
-                    break;
-                default: /* Avoid compiler warnings.  */
-                    abort();
-                }
-                if (size != 2) {
-                    tmp2 = neon_load_reg(rd, pass);
-                    tcg_gen_deposit_i32(tmp, tmp2, tmp,
-                                        shift, size ? 16 : 8);
-                    tcg_temp_free_i32(tmp2);
-                }
-                neon_store_reg(rd, pass, tmp);
+                gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
+                                s->be_data | size);
+                neon_store_element(rd, reg_idx, size, tmp);
             } else { /* Store */
-                tmp = neon_load_reg(rd, pass);
-                if (shift)
-                    tcg_gen_shri_i32(tmp, tmp, shift);
-                switch (size) {
-                case 0:
-                    gen_aa32_st8(s, tmp, addr, get_mem_index(s));
-                    break;
-                case 1:
-                    gen_aa32_st16(s, tmp, addr, get_mem_index(s));
-                    break;
-                case 2:
-                    gen_aa32_st32(s, tmp, addr, get_mem_index(s));
-                    break;
-                }
-                tcg_temp_free_i32(tmp);
+                neon_load_element(tmp, rd, reg_idx, size);
+                gen_aa32_st_i32(s, tmp, addr, get_mem_index(s),
+                                s->be_data | size);
             }
             rd += stride;
             tcg_gen_addi_i32(addr, addr, 1 << size);
         }
         tcg_temp_free_i32(addr);
+        tcg_temp_free_i32(tmp);
         stride = nregs * (1 << size);
     }
 }
-- 
2.19.1

From: Richard Henderson <richard.henderson@linaro.org>

Move the computation of key to the top of the function.
Hoist the resolution of cp as well, as an input to the
computation of key.

This will be required by a subsequent patch.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 49 +++++++++++++++++++++++++--------------------
 1 file changed, 27 insertions(+), 22 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
     ARMCPRegInfo *r2;
     int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
     int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
+    int cp = r->cp;
     size_t name_len;
 
+    switch (state) {
+    case ARM_CP_STATE_AA32:
+        /* We assume it is a cp15 register if the .cp field is left unset. */
+        if (cp == 0 && r->state == ARM_CP_STATE_BOTH) {
+            cp = 15;
+        }
+        key = ENCODE_CP_REG(cp, is64, ns, r->crn, crm, opc1, opc2);
+        break;
+    case ARM_CP_STATE_AA64:
+        /*
+         * To allow abbreviation of ARMCPRegInfo definitions, we treat
+         * cp == 0 as equivalent to the value for "standard guest-visible
+         * sysreg".  STATE_BOTH definitions are also always "standard sysreg"
+         * in their AArch64 view (the .cp value may be non-zero for the
+         * benefit of the AArch32 view).
+         */
+        if (cp == 0 || r->state == ARM_CP_STATE_BOTH) {
+            cp = CP_REG_ARM64_SYSREG_CP;
+        }
+        key = ENCODE_AA64_CP_REG(cp, r->crn, crm, r->opc0, opc1, opc2);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+
     /* Combine cpreg and name into one allocation. */
     name_len = strlen(name) + 1;
     r2 = g_malloc(sizeof(*r2) + name_len);
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
     }
 
     if (r->state == ARM_CP_STATE_BOTH) {
-        /* We assume it is a cp15 register if the .cp field is left unset.
-         */
-        if (r2->cp == 0) {
-            r2->cp = 15;
-        }
-
 #if HOST_BIG_ENDIAN
         if (r2->fieldoffset) {
             r2->fieldoffset += sizeof(uint32_t);
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
 #endif
     }
-    if (state == ARM_CP_STATE_AA64) {
-        /* To allow abbreviation of ARMCPRegInfo
-         * definitions, we treat cp == 0 as equivalent to
-         * the value for "standard guest-visible sysreg".
-         * STATE_BOTH definitions are also always "standard
-         * sysreg" in their AArch64 view (the .cp value may
-         * be non-zero for the benefit of the AArch32 view).
-         */
-        if (r->cp == 0 || r->state == ARM_CP_STATE_BOTH) {
-            r2->cp = CP_REG_ARM64_SYSREG_CP;
-        }
-        key = ENCODE_AA64_CP_REG(r2->cp, r2->crn, crm,
-                                 r2->opc0, opc1, opc2);
-    } else {
-        key = ENCODE_CP_REG(r2->cp, is64, ns, r2->crn, crm, opc1, opc2);
-    }
     if (opaque) {
         r2->opaque = opaque;
     }
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
     /* Make sure reginfo passed to helpers for wildcarded regs
      * has the correct crm/opc1/opc2 for this reg, not CP_ANY:
      */
+    r2->cp = cp;
     r2->crm = crm;
     r2->opc1 = opc1;
     r2->opc2 = opc2;
-- 
2.25.1
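For reference, the hash key both of those hunks feed into packs every coprocessor-register coordinate into a single 32-bit value. A minimal sketch, assuming a made-up field layout rather than QEMU's actual ENCODE_CP_REG definition:

    /*
     * Illustrative only: an assumed bit layout, not QEMU's real macro.
     * The point is that (cp, is64, ns, crn, crm, opc1, opc2) packs
     * losslessly into one scalar key.
     */
    #include <stdint.h>
    #include <stdbool.h>

    static uint32_t encode_cp_reg_sketch(int cp, bool is64, bool ns,
                                         int crn, int crm,
                                         int opc1, int opc2)
    {
        return ((uint32_t)ns << 29) | (cp << 16) | ((uint32_t)is64 << 15) |
               (crn << 11) | (crm << 7) | (opc1 << 3) | opc2;
    }

Packing the coordinates this way is what allows the cpregs hash table to be keyed on a single scalar instead of a struct.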
From: Richard Henderson <richard.henderson@linaro.org>

Since QEMU does not implement ASIDs, changes to the ASID must flush the
tlb.  However, if the ASID does not change there is no reason to flush.

In testing a boot of the Ubuntu installer to the first menu, this reduces
the number of flushes by 30%, or nearly 600k instances.

Reviewed-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181019015617.22583-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_el1_write(CPUARMState *env, const ARMCPRegInfo *ri,
 static void vmsa_ttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                             uint64_t value)
 {
-    /* 64 bit accesses to the TTBRs can change the ASID and so we
-     * must flush the TLB.
-     */
-    if (cpreg_field_is_64bit(ri)) {
+    /* If the ASID changes (with a 64-bit write), we must flush the TLB. */
+    if (cpreg_field_is_64bit(ri) &&
+        extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
         ARMCPU *cpu = arm_env_get_cpu(env);
-
         tlb_flush(CPU(cpu));
     }
     raw_write(env, ri, value);
--
2.19.1

From: Richard Henderson <richard.henderson@linaro.org>

Put most of the value writeback in the same place,
and improve the comment that goes with it.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 28 ++++++++++++----------------
 1 file changed, 12 insertions(+), 16 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
     *r2 = *r;
     r2->name = memcpy(r2 + 1, name, name_len);
 
-    /* Reset the secure state to the specific incoming state.  This is
-     * necessary as the register may have been defined with both states.
+    /*
+     * Update fields to match the instantiation, overwriting wildcards
+     * such as CP_ANY, ARM_CP_STATE_BOTH, or ARM_CP_SECSTATE_BOTH.
      */
+    r2->cp = cp;
+    r2->crm = crm;
+    r2->opc1 = opc1;
+    r2->opc2 = opc2;
+    r2->state = state;
     r2->secure = secstate;
+    if (opaque) {
+        r2->opaque = opaque;
+    }
 
     if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) {
         /* Register is banked (using both entries in array).
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
 #endif
         }
     }
-    if (opaque) {
-        r2->opaque = opaque;
-    }
-    /* reginfo passed to helpers is correct for the actual access,
-     * and is never ARM_CP_STATE_BOTH:
-     */
-    r2->state = state;
-    /* Make sure reginfo passed to helpers for wildcarded regs
-     * has the correct crm/opc1/opc2 for this reg, not CP_ANY:
-     */
-    r2->cp = cp;
-    r2->crm = crm;
-    r2->opc1 = opc1;
-    r2->opc2 = opc2;
+
     /* By convention, for wildcarded registers only the first
      * entry is used for migration; the others are marked as
      * ALIAS so we don't try to transfer the register
--
2.25.1
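The heuristic in the TLB patch above is compact enough to model in isolation: the ASID occupies TTBR bits [63:48], so a flush is only needed when that field actually differs. A self-contained sketch, with extract64() reimplemented inline for illustration:

    #include <stdint.h>
    #include <stdbool.h>

    /* Simplified stand-in for QEMU's extract64() bitfield helper. */
    static inline uint64_t extract64_sketch(uint64_t v, int start, int len)
    {
        return (v >> start) & (~0ull >> (64 - len));
    }

    /* Flush only if the ASID field of the TTBR changed. */
    static bool asid_changed(uint64_t old_ttbr, uint64_t new_ttbr)
    {
        return extract64_sketch(old_ttbr ^ new_ttbr, 48, 16) != 0;
    }

XOR-ing old and new values first means one field extraction covers both reads, which is why the patch can express the test in a single condition.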
From: Richard Henderson <richard.henderson@linaro.org>

The EL3 version of this register does not include an ASID,
and so the tlb_flush performed by vmsa_ttbr_write is not needed.

Reviewed-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181019015617.22583-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
       .fieldoffset = offsetof(CPUARMState, cp15.mvbar) },
     { .name = "TTBR0_EL3", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 0,
-      .access = PL3_RW, .writefn = vmsa_ttbr_write, .resetvalue = 0,
+      .access = PL3_RW, .resetvalue = 0,
       .fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[3]) },
     { .name = "TCR_EL3", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 2,
--
2.19.1

From: Richard Henderson <richard.henderson@linaro.org>

Bool is a more appropriate type for these variables.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
      */
     uint32_t key;
     ARMCPRegInfo *r2;
-    int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
-    int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
+    bool is64 = r->type & ARM_CP_64BIT;
+    bool ns = secstate & ARM_CP_SECSTATE_NS;
     int cp = r->cp;
     size_t name_len;
 
--
2.25.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Move shi_op and sli_op expanders from translate-a64.c.
3
Computing isbanked only once makes the code
4
a bit easier to read.
4
5
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20181011205206.3552-15-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20220501055028.646596-17-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
---
10
target/arm/translate.h | 2 +
11
target/arm/helper.c | 6 ++++--
11
target/arm/translate-a64.c | 152 +----------------------
12
1 file changed, 4 insertions(+), 2 deletions(-)
12
target/arm/translate.c | 244 ++++++++++++++++++++++++++-----------
13
3 files changed, 179 insertions(+), 219 deletions(-)
14
13
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.h
16
--- a/target/arm/helper.c
18
+++ b/target/arm/translate.h
17
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ extern const GVecGen3 bit_op;
18
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
20
extern const GVecGen3 bif_op;
19
bool is64 = r->type & ARM_CP_64BIT;
21
extern const GVecGen2i ssra_op[4];
20
bool ns = secstate & ARM_CP_SECSTATE_NS;
22
extern const GVecGen2i usra_op[4];
21
int cp = r->cp;
23
+extern const GVecGen2i sri_op[4];
22
+ bool isbanked;
24
+extern const GVecGen2i sli_op[4];
23
size_t name_len;
25
24
26
/*
25
switch (state) {
27
* Forward to the isar_feature_* tests given a DisasContext pointer.
26
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
28
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
27
r2->opaque = opaque;
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate-a64.c
31
+++ b/target/arm/translate-a64.c
32
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
33
}
28
}
34
}
29
35
30
- if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) {
36
-static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
31
+ isbanked = r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1];
37
-{
32
+ if (isbanked) {
38
- uint64_t mask = dup_const(MO_8, 0xff >> shift);
33
/* Register is banked (using both entries in array).
39
- TCGv_i64 t = tcg_temp_new_i64();
34
* Overwriting fieldoffset as the array is only used to define
40
-
35
* banked registers but later only fieldoffset is used.
41
- tcg_gen_shri_i64(t, a, shift);
36
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
42
- tcg_gen_andi_i64(t, t, mask);
43
- tcg_gen_andi_i64(d, d, ~mask);
44
- tcg_gen_or_i64(d, d, t);
45
- tcg_temp_free_i64(t);
46
-}
47
-
48
-static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
49
-{
50
- uint64_t mask = dup_const(MO_16, 0xffff >> shift);
51
- TCGv_i64 t = tcg_temp_new_i64();
52
-
53
- tcg_gen_shri_i64(t, a, shift);
54
- tcg_gen_andi_i64(t, t, mask);
55
- tcg_gen_andi_i64(d, d, ~mask);
56
- tcg_gen_or_i64(d, d, t);
57
- tcg_temp_free_i64(t);
58
-}
59
-
60
-static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
61
-{
62
- tcg_gen_shri_i32(a, a, shift);
63
- tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
64
-}
65
-
66
-static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
67
-{
68
- tcg_gen_shri_i64(a, a, shift);
69
- tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
70
-}
71
-
72
-static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
73
-{
74
- uint64_t mask = (2ull << ((8 << vece) - 1)) - 1;
75
- TCGv_vec t = tcg_temp_new_vec_matching(d);
76
- TCGv_vec m = tcg_temp_new_vec_matching(d);
77
-
78
- tcg_gen_dupi_vec(vece, m, mask ^ (mask >> sh));
79
- tcg_gen_shri_vec(vece, t, a, sh);
80
- tcg_gen_and_vec(vece, d, d, m);
81
- tcg_gen_or_vec(vece, d, d, t);
82
-
83
- tcg_temp_free_vec(t);
84
- tcg_temp_free_vec(m);
85
-}
86
-
87
/* SSHR[RA]/USHR[RA] - Vector shift right (optional rounding/accumulate) */
88
static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
89
int immh, int immb, int opcode, int rn, int rd)
90
{
91
- static const GVecGen2i sri_op[4] = {
92
- { .fni8 = gen_shr8_ins_i64,
93
- .fniv = gen_shr_ins_vec,
94
- .load_dest = true,
95
- .opc = INDEX_op_shri_vec,
96
- .vece = MO_8 },
97
- { .fni8 = gen_shr16_ins_i64,
98
- .fniv = gen_shr_ins_vec,
99
- .load_dest = true,
100
- .opc = INDEX_op_shri_vec,
101
- .vece = MO_16 },
102
- { .fni4 = gen_shr32_ins_i32,
103
- .fniv = gen_shr_ins_vec,
104
- .load_dest = true,
105
- .opc = INDEX_op_shri_vec,
106
- .vece = MO_32 },
107
- { .fni8 = gen_shr64_ins_i64,
108
- .fniv = gen_shr_ins_vec,
109
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
110
- .load_dest = true,
111
- .opc = INDEX_op_shri_vec,
112
- .vece = MO_64 },
113
- };
114
-
115
int size = 32 - clz32(immh) - 1;
116
int immhb = immh << 3 | immb;
117
int shift = 2 * (8 << size) - immhb;
118
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
119
clear_vec_high(s, is_q, rd);
120
}
121
122
-static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
123
-{
124
- uint64_t mask = dup_const(MO_8, 0xff << shift);
125
- TCGv_i64 t = tcg_temp_new_i64();
126
-
127
- tcg_gen_shli_i64(t, a, shift);
128
- tcg_gen_andi_i64(t, t, mask);
129
- tcg_gen_andi_i64(d, d, ~mask);
130
- tcg_gen_or_i64(d, d, t);
131
- tcg_temp_free_i64(t);
132
-}
133
-
134
-static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
135
-{
136
- uint64_t mask = dup_const(MO_16, 0xffff << shift);
137
- TCGv_i64 t = tcg_temp_new_i64();
138
-
139
- tcg_gen_shli_i64(t, a, shift);
140
- tcg_gen_andi_i64(t, t, mask);
141
- tcg_gen_andi_i64(d, d, ~mask);
142
- tcg_gen_or_i64(d, d, t);
143
- tcg_temp_free_i64(t);
144
-}
145
-
146
-static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
147
-{
148
- tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
149
-}
150
-
151
-static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
152
-{
153
- tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
154
-}
155
-
156
-static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
157
-{
158
- uint64_t mask = (1ull << sh) - 1;
159
- TCGv_vec t = tcg_temp_new_vec_matching(d);
160
- TCGv_vec m = tcg_temp_new_vec_matching(d);
161
-
162
- tcg_gen_dupi_vec(vece, m, mask);
163
- tcg_gen_shli_vec(vece, t, a, sh);
164
- tcg_gen_and_vec(vece, d, d, m);
165
- tcg_gen_or_vec(vece, d, d, t);
166
-
167
- tcg_temp_free_vec(t);
168
- tcg_temp_free_vec(m);
169
-}
170
-
171
/* SHL/SLI - Vector shift left */
172
static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
173
int immh, int immb, int opcode, int rn, int rd)
174
{
175
- static const GVecGen2i shi_op[4] = {
176
- { .fni8 = gen_shl8_ins_i64,
177
- .fniv = gen_shl_ins_vec,
178
- .opc = INDEX_op_shli_vec,
179
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
180
- .load_dest = true,
181
- .vece = MO_8 },
182
- { .fni8 = gen_shl16_ins_i64,
183
- .fniv = gen_shl_ins_vec,
184
- .opc = INDEX_op_shli_vec,
185
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
186
- .load_dest = true,
187
- .vece = MO_16 },
188
- { .fni4 = gen_shl32_ins_i32,
189
- .fniv = gen_shl_ins_vec,
190
- .opc = INDEX_op_shli_vec,
191
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
192
- .load_dest = true,
193
- .vece = MO_32 },
194
- { .fni8 = gen_shl64_ins_i64,
195
- .fniv = gen_shl_ins_vec,
196
- .opc = INDEX_op_shli_vec,
197
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
198
- .load_dest = true,
199
- .vece = MO_64 },
200
- };
201
int size = 32 - clz32(immh) - 1;
202
int immhb = immh << 3 | immb;
203
int shift = immhb - (8 << size);
204
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
205
}
37
}
206
38
207
if (insert) {
39
if (state == ARM_CP_STATE_AA32) {
208
- gen_gvec_op2i(s, is_q, rd, rn, shift, &shi_op[size]);
40
- if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) {
209
+ gen_gvec_op2i(s, is_q, rd, rn, shift, &sli_op[size]);
41
+ if (isbanked) {
210
} else {
42
/* If the register is banked then we don't need to migrate or
211
gen_gvec_fn2i(s, is_q, rd, rn, shift, tcg_gen_gvec_shli, size);
43
* reset the 32-bit instance in certain cases:
212
}
44
*
213
diff --git a/target/arm/translate.c b/target/arm/translate.c
214
index XXXXXXX..XXXXXXX 100644
215
--- a/target/arm/translate.c
216
+++ b/target/arm/translate.c
217
@@ -XXX,XX +XXX,XX @@ const GVecGen2i usra_op[4] = {
218
.vece = MO_64, },
219
};
220
221
+static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
222
+{
223
+ uint64_t mask = dup_const(MO_8, 0xff >> shift);
224
+ TCGv_i64 t = tcg_temp_new_i64();
225
+
226
+ tcg_gen_shri_i64(t, a, shift);
227
+ tcg_gen_andi_i64(t, t, mask);
228
+ tcg_gen_andi_i64(d, d, ~mask);
229
+ tcg_gen_or_i64(d, d, t);
230
+ tcg_temp_free_i64(t);
231
+}
232
+
233
+static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
234
+{
235
+ uint64_t mask = dup_const(MO_16, 0xffff >> shift);
236
+ TCGv_i64 t = tcg_temp_new_i64();
237
+
238
+ tcg_gen_shri_i64(t, a, shift);
239
+ tcg_gen_andi_i64(t, t, mask);
240
+ tcg_gen_andi_i64(d, d, ~mask);
241
+ tcg_gen_or_i64(d, d, t);
242
+ tcg_temp_free_i64(t);
243
+}
244
+
245
+static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
246
+{
247
+ tcg_gen_shri_i32(a, a, shift);
248
+ tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
249
+}
250
+
251
+static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
252
+{
253
+ tcg_gen_shri_i64(a, a, shift);
254
+ tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
255
+}
256
+
257
+static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
258
+{
259
+ if (sh == 0) {
260
+ tcg_gen_mov_vec(d, a);
261
+ } else {
262
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
263
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
264
+
265
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK((8 << vece) - sh, sh));
266
+ tcg_gen_shri_vec(vece, t, a, sh);
267
+ tcg_gen_and_vec(vece, d, d, m);
268
+ tcg_gen_or_vec(vece, d, d, t);
269
+
270
+ tcg_temp_free_vec(t);
271
+ tcg_temp_free_vec(m);
272
+ }
273
+}
274
+
275
+const GVecGen2i sri_op[4] = {
276
+ { .fni8 = gen_shr8_ins_i64,
277
+ .fniv = gen_shr_ins_vec,
278
+ .load_dest = true,
279
+ .opc = INDEX_op_shri_vec,
280
+ .vece = MO_8 },
281
+ { .fni8 = gen_shr16_ins_i64,
282
+ .fniv = gen_shr_ins_vec,
283
+ .load_dest = true,
284
+ .opc = INDEX_op_shri_vec,
285
+ .vece = MO_16 },
286
+ { .fni4 = gen_shr32_ins_i32,
287
+ .fniv = gen_shr_ins_vec,
288
+ .load_dest = true,
289
+ .opc = INDEX_op_shri_vec,
290
+ .vece = MO_32 },
291
+ { .fni8 = gen_shr64_ins_i64,
292
+ .fniv = gen_shr_ins_vec,
293
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
294
+ .load_dest = true,
295
+ .opc = INDEX_op_shri_vec,
296
+ .vece = MO_64 },
297
+};
298
+
299
+static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
300
+{
301
+ uint64_t mask = dup_const(MO_8, 0xff << shift);
302
+ TCGv_i64 t = tcg_temp_new_i64();
303
+
304
+ tcg_gen_shli_i64(t, a, shift);
305
+ tcg_gen_andi_i64(t, t, mask);
306
+ tcg_gen_andi_i64(d, d, ~mask);
307
+ tcg_gen_or_i64(d, d, t);
308
+ tcg_temp_free_i64(t);
309
+}
310
+
311
+static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
312
+{
313
+ uint64_t mask = dup_const(MO_16, 0xffff << shift);
314
+ TCGv_i64 t = tcg_temp_new_i64();
315
+
316
+ tcg_gen_shli_i64(t, a, shift);
317
+ tcg_gen_andi_i64(t, t, mask);
318
+ tcg_gen_andi_i64(d, d, ~mask);
319
+ tcg_gen_or_i64(d, d, t);
320
+ tcg_temp_free_i64(t);
321
+}
322
+
323
+static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
324
+{
325
+ tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
326
+}
327
+
328
+static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
329
+{
330
+ tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
331
+}
332
+
333
+static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
334
+{
335
+ if (sh == 0) {
336
+ tcg_gen_mov_vec(d, a);
337
+ } else {
338
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
339
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
340
+
341
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK(0, sh));
342
+ tcg_gen_shli_vec(vece, t, a, sh);
343
+ tcg_gen_and_vec(vece, d, d, m);
344
+ tcg_gen_or_vec(vece, d, d, t);
345
+
346
+ tcg_temp_free_vec(t);
347
+ tcg_temp_free_vec(m);
348
+ }
349
+}
350
+
351
+const GVecGen2i sli_op[4] = {
352
+ { .fni8 = gen_shl8_ins_i64,
353
+ .fniv = gen_shl_ins_vec,
354
+ .load_dest = true,
355
+ .opc = INDEX_op_shli_vec,
356
+ .vece = MO_8 },
357
+ { .fni8 = gen_shl16_ins_i64,
358
+ .fniv = gen_shl_ins_vec,
359
+ .load_dest = true,
360
+ .opc = INDEX_op_shli_vec,
361
+ .vece = MO_16 },
362
+ { .fni4 = gen_shl32_ins_i32,
363
+ .fniv = gen_shl_ins_vec,
364
+ .load_dest = true,
365
+ .opc = INDEX_op_shli_vec,
366
+ .vece = MO_32 },
367
+ { .fni8 = gen_shl64_ins_i64,
368
+ .fniv = gen_shl_ins_vec,
369
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
370
+ .load_dest = true,
371
+ .opc = INDEX_op_shli_vec,
372
+ .vece = MO_64 },
373
+};
374
+
375
/* Translate a NEON data processing instruction. Return nonzero if the
376
instruction is invalid.
377
We process data in a mixture of 32-bit and 64-bit chunks.
378
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
379
int pairwise;
380
int u;
381
int vec_size;
382
- uint32_t imm, mask;
383
+ uint32_t imm;
384
TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
385
TCGv_ptr ptr1, ptr2, ptr3;
386
TCGv_i64 tmp64;
387
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
388
}
389
return 0;
390
391
+ case 4: /* VSRI */
392
+ if (!u) {
393
+ return 1;
394
+ }
395
+ /* Right shift comes here negative. */
396
+ shift = -shift;
397
+ /* Shift out of range leaves destination unchanged. */
398
+ if (shift < 8 << size) {
399
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
400
+ shift, &sri_op[size]);
401
+ }
402
+ return 0;
403
+
404
case 5: /* VSHL, VSLI */
405
- if (!u) { /* VSHL */
406
+ if (u) { /* VSLI */
407
+ /* Shift out of range leaves destination unchanged. */
408
+ if (shift < 8 << size) {
409
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size,
410
+ vec_size, shift, &sli_op[size]);
411
+ }
412
+ } else { /* VSHL */
413
/* Shifts larger than the element size are
414
* architecturally valid and result in zero.
415
*/
416
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
417
tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
418
vec_size, vec_size);
419
}
420
- return 0;
421
}
422
- break;
423
+ return 0;
424
}
425
426
if (size == 3) {
427
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
428
else
429
gen_helper_neon_rshl_s64(cpu_V0, cpu_V0, cpu_V1);
430
break;
431
- case 4: /* VSRI */
432
- case 5: /* VSHL, VSLI */
433
- gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
434
- break;
435
case 6: /* VQSHLU */
436
gen_helper_neon_qshlu_s64(cpu_V0, cpu_env,
437
cpu_V0, cpu_V1);
438
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
439
/* Accumulate. */
440
neon_load_reg64(cpu_V1, rd + pass);
441
tcg_gen_add_i64(cpu_V0, cpu_V0, cpu_V1);
442
- } else if (op == 4 || (op == 5 && u)) {
443
- /* Insert */
444
- neon_load_reg64(cpu_V1, rd + pass);
445
- uint64_t mask;
446
- if (shift < -63 || shift > 63) {
447
- mask = 0;
448
- } else {
449
- if (op == 4) {
450
- mask = 0xffffffffffffffffull >> -shift;
451
- } else {
452
- mask = 0xffffffffffffffffull << shift;
453
- }
454
- }
455
- tcg_gen_andi_i64(cpu_V1, cpu_V1, ~mask);
456
- tcg_gen_or_i64(cpu_V0, cpu_V0, cpu_V1);
457
}
458
neon_store_reg64(cpu_V0, rd + pass);
459
} else { /* size < 3 */
460
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
461
case 3: /* VRSRA */
462
GEN_NEON_INTEGER_OP(rshl);
463
break;
464
- case 4: /* VSRI */
465
- case 5: /* VSHL, VSLI */
466
- switch (size) {
467
- case 0: gen_helper_neon_shl_u8(tmp, tmp, tmp2); break;
468
- case 1: gen_helper_neon_shl_u16(tmp, tmp, tmp2); break;
469
- case 2: gen_helper_neon_shl_u32(tmp, tmp, tmp2); break;
470
- default: abort();
471
- }
472
- break;
473
case 6: /* VQSHLU */
474
switch (size) {
475
case 0:
476
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
477
tmp2 = neon_load_reg(rd, pass);
478
gen_neon_add(size, tmp, tmp2);
479
tcg_temp_free_i32(tmp2);
480
- } else if (op == 4 || (op == 5 && u)) {
481
- /* Insert */
482
- switch (size) {
483
- case 0:
484
- if (op == 4)
485
- mask = 0xff >> -shift;
486
- else
487
- mask = (uint8_t)(0xff << shift);
488
- mask |= mask << 8;
489
- mask |= mask << 16;
490
- break;
491
- case 1:
492
- if (op == 4)
493
- mask = 0xffff >> -shift;
494
- else
495
- mask = (uint16_t)(0xffff << shift);
496
- mask |= mask << 16;
497
- break;
498
- case 2:
499
- if (shift < -31 || shift > 31) {
500
- mask = 0;
501
- } else {
502
- if (op == 4)
503
- mask = 0xffffffffu >> -shift;
504
- else
505
- mask = 0xffffffffu << shift;
506
- }
507
- break;
508
- default:
509
- abort();
510
- }
511
- tmp2 = neon_load_reg(rd, pass);
512
- tcg_gen_andi_i32(tmp, tmp, mask);
513
- tcg_gen_andi_i32(tmp2, tmp2, ~mask);
514
- tcg_gen_or_i32(tmp, tmp, tmp2);
515
- tcg_temp_free_i32(tmp2);
516
}
517
neon_store_reg(rd, pass, tmp);
518
}
519
--
45
--
520
2.19.1
46
2.25.1
521
522
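The insert-mask arithmetic behind the sri_op expanders above also works on plain integers, which makes the trick easier to see. A sketch for 8-bit lanes packed into a uint64_t, not the QEMU helpers themselves:

    #include <stdint.h>

    /* Replicate one byte into all eight lanes (dup_const-style). */
    static uint64_t dup_const8(uint8_t b)
    {
        return 0x0101010101010101ull * b;
    }

    /* SRI on eight byte lanes: shift source right, insert into dest. */
    static uint64_t sri_8(uint64_t d, uint64_t a, int sh)
    {
        uint64_t mask = dup_const8(0xff >> sh);   /* low (8 - sh) bits */
        return (d & ~mask) | ((a >> sh) & mask);
    }

Shifting the whole 64-bit value lets neighbouring lanes leak into the top of each byte, but the replicated mask keeps only the low (8 - sh) bits per lane, which are exactly the correct in-lane result; the untouched high bits come from the destination, as SRI requires.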
From: Richard Henderson <richard.henderson@linaro.org>

For a sequence of loads or stores from a single register,
little-endian operations can be promoted to an 8-byte op.
This can reduce the number of operations by a factor of 8.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 66 +++++++++++++++++++++++---------------
 1 file changed, 40 insertions(+), 26 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Perform the override check early, so that it is still done
even when we decide to discard an unreachable cpreg.

Use assert not printf+abort.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)
14
15
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
16
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-a64.c
18
--- a/target/arm/helper.c
18
+++ b/target/arm/translate-a64.c
19
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
20
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
20
21
g_assert_not_reached();
21
/* Store from vector register to memory */
22
static void do_vec_st(DisasContext *s, int srcidx, int element,
23
- TCGv_i64 tcg_addr, int size)
24
+ TCGv_i64 tcg_addr, int size, TCGMemOp endian)
25
{
26
- TCGMemOp memop = s->be_data + size;
27
TCGv_i64 tcg_tmp = tcg_temp_new_i64();
28
29
read_vec_element(s, tcg_tmp, srcidx, element, size);
30
- tcg_gen_qemu_st_i64(tcg_tmp, tcg_addr, get_mem_index(s), memop);
31
+ tcg_gen_qemu_st_i64(tcg_tmp, tcg_addr, get_mem_index(s), endian | size);
32
33
tcg_temp_free_i64(tcg_tmp);
34
}
35
36
/* Load from memory to vector register */
37
static void do_vec_ld(DisasContext *s, int destidx, int element,
38
- TCGv_i64 tcg_addr, int size)
39
+ TCGv_i64 tcg_addr, int size, TCGMemOp endian)
40
{
41
- TCGMemOp memop = s->be_data + size;
42
TCGv_i64 tcg_tmp = tcg_temp_new_i64();
43
44
- tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr, get_mem_index(s), memop);
45
+ tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr, get_mem_index(s), endian | size);
46
write_vec_element(s, tcg_tmp, destidx, element, size);
47
48
tcg_temp_free_i64(tcg_tmp);
49
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
50
bool is_postidx = extract32(insn, 23, 1);
51
bool is_q = extract32(insn, 30, 1);
52
TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
53
+ TCGMemOp endian = s->be_data;
54
55
- int ebytes = 1 << size;
56
- int elements = (is_q ? 128 : 64) / (8 << size);
57
+ int ebytes; /* bytes per element */
58
+ int elements; /* elements per vector */
59
int rpt; /* num iterations */
60
int selem; /* structure elements */
61
int r;
62
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
63
gen_check_sp_alignment(s);
64
}
22
}
65
23
66
+ /* For our purposes, bytes are always little-endian. */
24
+ /* Overriding of an existing definition must be explicitly requested. */
67
+ if (size == 0) {
25
+ if (!(r->type & ARM_CP_OVERRIDE)) {
68
+ endian = MO_LE;
26
+ const ARMCPRegInfo *oldreg = get_arm_cp_reginfo(cpu->cp_regs, key);
69
+ }
27
+ if (oldreg) {
70
+
28
+ assert(oldreg->type & ARM_CP_OVERRIDE);
71
+ /* Consecutive little-endian elements from a single register
72
+ * can be promoted to a larger little-endian operation.
73
+ */
74
+ if (selem == 1 && endian == MO_LE) {
75
+ size = 3;
76
+ }
77
+ ebytes = 1 << size;
78
+ elements = (is_q ? 16 : 8) / ebytes;
79
+
80
tcg_rn = cpu_reg_sp(s, rn);
81
tcg_addr = tcg_temp_new_i64();
82
tcg_gen_mov_i64(tcg_addr, tcg_rn);
83
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
84
for (r = 0; r < rpt; r++) {
85
int e;
86
for (e = 0; e < elements; e++) {
87
- int tt = (rt + r) % 32;
88
int xs;
89
for (xs = 0; xs < selem; xs++) {
90
+ int tt = (rt + r + xs) % 32;
91
if (is_store) {
92
- do_vec_st(s, tt, e, tcg_addr, size);
93
+ do_vec_st(s, tt, e, tcg_addr, size, endian);
94
} else {
95
- do_vec_ld(s, tt, e, tcg_addr, size);
96
-
97
- /* For non-quad operations, setting a slice of the low
98
- * 64 bits of the register clears the high 64 bits (in
99
- * the ARM ARM pseudocode this is implicit in the fact
100
- * that 'rval' is a 64 bit wide variable).
101
- * For quad operations, we might still need to zero the
102
- * high bits of SVE. We optimize by noticing that we only
103
- * need to do this the first time we touch a register.
104
- */
105
- if (e == 0 && (r == 0 || xs == selem - 1)) {
106
- clear_vec_high(s, is_q, tt);
107
- }
108
+ do_vec_ld(s, tt, e, tcg_addr, size, endian);
109
}
110
tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
111
- tt = (tt + 1) % 32;
112
}
113
}
114
}
115
116
+ if (!is_store) {
117
+ /* For non-quad operations, setting a slice of the low
118
+ * 64 bits of the register clears the high 64 bits (in
119
+ * the ARM ARM pseudocode this is implicit in the fact
120
+ * that 'rval' is a 64 bit wide variable).
121
+ * For quad operations, we might still need to zero the
122
+ * high bits of SVE.
123
+ */
124
+ for (r = 0; r < rpt * selem; r++) {
125
+ int tt = (rt + r) % 32;
126
+ clear_vec_high(s, is_q, tt);
127
+ }
29
+ }
128
+ }
30
+ }
129
+
31
+
130
if (is_postidx) {
32
/* Combine cpreg and name into one allocation. */
131
int rm = extract32(insn, 16, 5);
33
name_len = strlen(name) + 1;
132
if (rm == 31) {
34
r2 = g_malloc(sizeof(*r2) + name_len);
133
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
35
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
134
} else {
36
assert(!raw_accessors_invalid(r2));
135
/* Load/store one element per register */
37
}
136
if (is_load) {
38
137
- do_vec_ld(s, rt, index, tcg_addr, scale);
39
- /* Overriding of an existing definition must be explicitly
138
+ do_vec_ld(s, rt, index, tcg_addr, scale, s->be_data);
40
- * requested.
139
} else {
41
- */
140
- do_vec_st(s, rt, index, tcg_addr, scale);
42
- if (!(r->type & ARM_CP_OVERRIDE)) {
141
+ do_vec_st(s, rt, index, tcg_addr, scale, s->be_data);
43
- const ARMCPRegInfo *oldreg = get_arm_cp_reginfo(cpu->cp_regs, key);
142
}
44
- if (oldreg && !(oldreg->type & ARM_CP_OVERRIDE)) {
143
}
45
- fprintf(stderr, "Register redefined: cp=%d %d bit "
144
tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
46
- "crn=%d crm=%d opc1=%d opc2=%d, "
47
- "was %s, now %s\n", r2->cp, 32 + 32 * is64,
48
- r2->crn, r2->crm, r2->opc1, r2->opc2,
49
- oldreg->name, r2->name);
50
- g_assert_not_reached();
51
- }
52
- }
53
g_hash_table_insert(cpu->cp_regs, (gpointer)(uintptr_t)key, r2);
54
}
55
145
--
56
--
146
2.19.1
57
2.25.1
147
148
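The promotion described above is easy to model: with one register per structure (selem == 1) and little-endian data, byte elements can be widened to 8-byte accesses. A sketch, with the function name invented for the example:

    #include <stdbool.h>

    static int mem_ops_for_vector(bool is_q, bool little_endian,
                                  int selem, int size)
    {
        /*
         * Consecutive little-endian elements from a single register
         * can be promoted to one larger little-endian operation.
         */
        if (selem == 1 && little_endian) {
            size = 3;                        /* use 8-byte accesses */
        }
        int ebytes = 1 << size;              /* bytes per element */
        return (is_q ? 16 : 8) / ebytes;     /* loads/stores required */
    }

For a 128-bit vector of byte elements this drops from 16 accesses to 2, the factor-of-8 reduction the commit message cites.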
From: Richard Henderson <richard.henderson@linaro.org>

Also introduces neon_element_offset to find the env offset
of a specific element within a neon register.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 63 ++++++++++++++++++++++++------------------
 1 file changed, 36 insertions(+), 27 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Put the block comments into the current coding style.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)
13
12
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate.c
15
--- a/target/arm/helper.c
17
+++ b/target/arm/translate.c
16
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ neon_reg_offset (int reg, int n)
17
@@ -XXX,XX +XXX,XX @@ CpuDefinitionInfoList *qmp_query_cpu_definitions(Error **errp)
19
return vfp_reg_offset(0, sreg);
18
return cpu_list;
20
}
19
}
21
20
22
+/* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
21
+/*
23
+ * where 0 is the least significant end of the register.
22
+ * Private utility function for define_one_arm_cp_reg_with_opaque():
23
+ * add a single reginfo struct to the hash table.
24
+ */
24
+ */
25
+static inline long
25
static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
26
+neon_element_offset(int reg, int element, TCGMemOp size)
26
void *opaque, CPState state,
27
+{
27
CPSecureState secstate,
28
+ int element_size = 1 << size;
28
int crm, int opc1, int opc2,
29
+ int ofs = element * element_size;
29
const char *name)
30
+#ifdef HOST_WORDS_BIGENDIAN
31
+ /* Calculate the offset assuming fully little-endian,
32
+ * then XOR to account for the order of the 8-byte units.
33
+ */
34
+ if (element_size < 8) {
35
+ ofs ^= 8 - element_size;
36
+ }
37
+#endif
38
+ return neon_reg_offset(reg, 0) + ofs;
39
+}
40
+
41
static TCGv_i32 neon_load_reg(int reg, int pass)
42
{
30
{
43
TCGv_i32 tmp = tcg_temp_new_i32();
31
- /* Private utility function for define_one_arm_cp_reg_with_opaque():
44
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
32
- * add a single reginfo struct to the hash table.
45
tmp = load_reg(s, rd);
33
- */
46
if (insn & (1 << 23)) {
34
uint32_t key;
47
/* VDUP */
35
ARMCPRegInfo *r2;
48
- if (size == 0) {
36
bool is64 = r->type & ARM_CP_64BIT;
49
- gen_neon_dup_u8(tmp, 0);
37
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
50
- } else if (size == 1) {
38
51
- gen_neon_dup_low16(tmp);
39
isbanked = r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1];
52
- }
40
if (isbanked) {
53
- for (n = 0; n <= pass * 2; n++) {
41
- /* Register is banked (using both entries in array).
54
- tmp2 = tcg_temp_new_i32();
42
+ /*
55
- tcg_gen_mov_i32(tmp2, tmp);
43
+ * Register is banked (using both entries in array).
56
- neon_store_reg(rn, n, tmp2);
44
* Overwriting fieldoffset as the array is only used to define
57
- }
45
* banked registers but later only fieldoffset is used.
58
- neon_store_reg(rn, n, tmp);
46
*/
59
+ int vec_size = pass ? 16 : 8;
47
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
60
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rn, 0),
48
61
+ vec_size, vec_size, tmp);
49
if (state == ARM_CP_STATE_AA32) {
62
+ tcg_temp_free_i32(tmp);
50
if (isbanked) {
63
} else {
51
- /* If the register is banked then we don't need to migrate or
64
/* VMOV */
52
+ /*
65
switch (size) {
53
+ * If the register is banked then we don't need to migrate or
66
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
54
* reset the 32-bit instance in certain cases:
67
tcg_temp_free_i32(tmp);
55
*
68
} else if ((insn & 0x380) == 0) {
56
* 1) If the register has both 32-bit and 64-bit instances then we
69
/* VDUP */
57
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
70
+ int element;
58
r2->type |= ARM_CP_ALIAS;
71
+ TCGMemOp size;
72
+
73
if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
74
return 1;
75
}
76
- if (insn & (1 << 19)) {
77
- tmp = neon_load_reg(rm, 1);
78
- } else {
79
- tmp = neon_load_reg(rm, 0);
80
- }
81
if (insn & (1 << 16)) {
82
- gen_neon_dup_u8(tmp, ((insn >> 17) & 3) * 8);
83
+ size = MO_8;
84
+ element = (insn >> 17) & 7;
85
} else if (insn & (1 << 17)) {
86
- if ((insn >> 18) & 1)
87
- gen_neon_dup_high16(tmp);
88
- else
89
- gen_neon_dup_low16(tmp);
90
+ size = MO_16;
91
+ element = (insn >> 18) & 3;
92
+ } else {
93
+ size = MO_32;
94
+ element = (insn >> 19) & 1;
95
}
96
- for (pass = 0; pass < (q ? 4 : 2); pass++) {
97
- tmp2 = tcg_temp_new_i32();
98
- tcg_gen_mov_i32(tmp2, tmp);
99
- neon_store_reg(rd, pass, tmp2);
100
- }
101
- tcg_temp_free_i32(tmp);
102
+ tcg_gen_gvec_dup_mem(size, neon_reg_offset(rd, 0),
103
+ neon_element_offset(rm, element, size),
104
+ q ? 16 : 8, q ? 16 : 8);
105
} else {
106
return 1;
107
}
59
}
60
} else if ((secstate != r->secure) && !ns) {
61
- /* The register is not banked so we only want to allow migration of
62
- * the non-secure instance.
63
+ /*
64
+ * The register is not banked so we only want to allow migration
65
+ * of the non-secure instance.
66
*/
67
r2->type |= ARM_CP_ALIAS;
68
}
69
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
70
}
71
}
72
73
- /* By convention, for wildcarded registers only the first
74
+ /*
75
+ * By convention, for wildcarded registers only the first
76
* entry is used for migration; the others are marked as
77
* ALIAS so we don't try to transfer the register
78
* multiple times. Special registers (ie NOP/WFI) are
79
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
80
r2->type |= ARM_CP_ALIAS | ARM_CP_NO_GDB;
81
}
82
83
- /* Check that raw accesses are either forbidden or handled. Note that
84
+ /*
85
+ * Check that raw accesses are either forbidden or handled. Note that
86
* we can't assert this earlier because the setup of fieldoffset for
87
* banked registers has to be done first.
88
*/
108
--
89
--
109
2.19.1
90
2.25.1
110
111
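A stand-alone model of the neon_element_offset() calculation introduced above: compute the offset as if the register file were fully little-endian, then XOR within each 8-byte unit on a big-endian host. HOST_BE below stands in for the real configury macro:

    #define HOST_BE 0   /* set to 1 to model a big-endian host */

    static long element_offset_sketch(long reg_base, int element,
                                      int size_log2)
    {
        int element_size = 1 << size_log2;
        int ofs = element * element_size;
    #if HOST_BE
        /* Reorder sub-units within each 8-byte quantity. */
        if (element_size < 8) {
            ofs ^= 8 - element_size;
        }
    #endif
        return reg_base + ofs;
    }

The XOR works because a big-endian host stores each 64-bit unit with its sub-elements in reverse order, so flipping the low offset bits within the unit recovers the little-endian element index.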
1
The HCR.DC virtualization configuration register bit has the
1
From: Richard Henderson <richard.henderson@linaro.org>
2
following effects:
3
* SCTLR.M behaves as if it is 0 for all purposes except
4
direct reads of the bit
5
* HCR.VM behaves as if it is 1 for all purposes except
6
direct reads of the bit
7
* the memory type produced by the first stage of the EL1&EL0
8
translation regime is Normal Non-Shareable,
9
Inner Write-Back Read-Allocate Write-Allocate,
10
Outer Write-Back Read-Allocate Write-Allocate.
11
2
12
Implement this behaviour.
3
Since e03b56863d2bc, our host endian indicator is unconditionally
4
set, which means that we can use a normal C condition.
13
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20220501055028.646596-20-richard.henderson@linaro.org
9
[PMM: quote correct git hash in commit message]
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20181012144235.19646-5-peter.maydell@linaro.org
17
---
11
---
18
target/arm/helper.c | 23 +++++++++++++++++++++--
12
target/arm/helper.c | 9 +++------
19
1 file changed, 21 insertions(+), 2 deletions(-)
13
1 file changed, 3 insertions(+), 6 deletions(-)
20
14
21
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
22
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/helper.c
17
--- a/target/arm/helper.c
24
+++ b/target/arm/helper.c
18
+++ b/target/arm/helper.c
25
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
19
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
26
* * The Non-secure TTBCR.EAE bit is set to 1
20
r2->type |= ARM_CP_ALIAS;
27
* * The implementation includes EL2, and the value of HCR.VM is 1
21
}
28
*
22
29
+ * (Note that HCR.DC makes HCR.VM behave as if it is 1.)
23
- if (r->state == ARM_CP_STATE_BOTH) {
30
+ *
24
-#if HOST_BIG_ENDIAN
31
* ATS1Hx always uses the 64bit format (not supported yet).
25
- if (r2->fieldoffset) {
32
*/
26
- r2->fieldoffset += sizeof(uint32_t);
33
format64 = arm_s1_regime_using_lpae_format(env, mmu_idx);
27
- }
34
28
-#endif
35
if (arm_feature(env, ARM_FEATURE_EL2)) {
29
+ if (HOST_BIG_ENDIAN &&
36
if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
30
+ r->state == ARM_CP_STATE_BOTH && r2->fieldoffset) {
37
- format64 |= env->cp15.hcr_el2 & HCR_VM;
31
+ r2->fieldoffset += sizeof(uint32_t);
38
+ format64 |= env->cp15.hcr_el2 & (HCR_VM | HCR_DC);
39
} else {
40
format64 |= arm_current_el(env) == 2;
41
}
42
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
43
}
44
45
if (mmu_idx == ARMMMUIdx_S2NS) {
46
- return (env->cp15.hcr_el2 & HCR_VM) == 0;
47
+ /* HCR.DC means HCR.VM behaves as 1 */
48
+ return (env->cp15.hcr_el2 & (HCR_DC | HCR_VM)) == 0;
49
}
50
51
if (env->cp15.hcr_el2 & HCR_TGE) {
52
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
53
}
32
}
54
}
33
}
55
34
56
+ if ((env->cp15.hcr_el2 & HCR_DC) &&
57
+ (mmu_idx == ARMMMUIdx_S1NSE0 || mmu_idx == ARMMMUIdx_S1NSE1)) {
58
+ /* HCR.DC means SCTLR_EL1.M behaves as 0 */
59
+ return true;
60
+ }
61
+
62
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
63
}
64
65
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
66
67
/* Combine the S1 and S2 cache attributes, if needed */
68
if (!ret && cacheattrs != NULL) {
69
+ if (env->cp15.hcr_el2 & HCR_DC) {
70
+ /*
71
+ * HCR.DC forces the first stage attributes to
72
+ * Normal Non-Shareable,
73
+ * Inner Write-Back Read-Allocate Write-Allocate,
74
+ * Outer Write-Back Read-Allocate Write-Allocate.
75
+ */
76
+ cacheattrs->attrs = 0xff;
77
+ cacheattrs->shareability = 0;
78
+ }
79
*cacheattrs = combine_cacheattrs(*cacheattrs, cacheattrs2);
80
}
81
82
--
35
--
83
2.19.1
36
2.25.1
84
85
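Condensed, the two HCR.DC effects implemented above look like the following. Bit positions are placeholders, not the architectural encodings:

    #include <stdint.h>
    #include <stdbool.h>

    #define HCR_VM_SK (1ull << 0)    /* illustrative bit positions */
    #define HCR_DC_SK (1ull << 12)

    /* Stage-2 translation is disabled unless HCR.VM or HCR.DC is set. */
    static bool stage2_disabled(uint64_t hcr)
    {
        return (hcr & (HCR_DC_SK | HCR_VM_SK)) == 0;
    }

    /* Stage-1 at EL1&0 behaves as if SCTLR.M were 0 when HCR.DC is set. */
    static bool stage1_disabled(uint64_t hcr, bool sctlr_m)
    {
        return (hcr & HCR_DC_SK) || !sctlr_m;
    }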
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20181016223115.24100-9-richard.henderson@linaro.org
5
Message-id: 20220501055028.646596-24-richard.henderson@linaro.org
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
7
---
9
target/arm/cpu.h | 17 +++++++++++++++-
8
target/arm/cpu.h | 15 +++++++++++++++
10
linux-user/elfload.c | 6 +-----
9
1 file changed, 15 insertions(+)
11
target/arm/cpu64.c | 16 ++++++++-------
12
target/arm/helper.c | 2 +-
13
target/arm/translate-a64.c | 40 +++++++++++++++++++-------------------
14
target/arm/translate.c | 6 +++---
15
6 files changed, 50 insertions(+), 37 deletions(-)
16
10
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
11
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu.h
13
--- a/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
14
+++ b/target/arm/cpu.h
21
@@ -XXX,XX +XXX,XX @@ enum arm_features {
15
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_ssbs(const ARMISARegisters *id)
22
ARM_FEATURE_PMU, /* has PMU support */
16
return FIELD_EX32(id->id_pfr2, ID_PFR2, SSBS) != 0;
23
ARM_FEATURE_VBAR, /* has cp15 VBAR */
24
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
25
- ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
26
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
27
};
28
29
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
30
return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
31
}
17
}
32
18
33
+static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
19
+static inline bool isar_feature_aa32_debugv8p2(const ARMISARegisters *id)
34
+{
20
+{
35
+ /*
21
+ return FIELD_EX32(id->id_dfr0, ID_DFR0, COPDBG) >= 8;
36
+ * This is a placeholder for use by VCMA until the rest of
37
+ * the ARMv8.2-FP16 extension is implemented for aa32 mode.
38
+ * At which point we can properly set and check MVFR1.FPHP.
39
+ */
40
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
41
+}
22
+}
42
+
23
+
43
/*
24
/*
44
* 64-bit feature tests via id registers.
25
* 64-bit feature tests via id registers.
45
*/
26
*/
46
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
27
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_ssbs(const ARMISARegisters *id)
47
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
28
return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SSBS) != 0;
48
}
29
}
49
30
50
+static inline bool isar_feature_aa64_fp16(const ARMISARegisters *id)
31
+static inline bool isar_feature_aa64_debugv8p2(const ARMISARegisters *id)
51
+{
32
+{
52
+ /* We always set the AdvSIMD and FP fields identically wrt FP16. */
33
+ return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, DEBUGVER) >= 8;
53
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
54
+}
34
+}
55
+
35
+
56
static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
36
static inline bool isar_feature_aa64_sve2(const ARMISARegisters *id)
57
{
37
{
58
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
38
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SVEVER) != 0;
59
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
39
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_any_tts2uxn(const ARMISARegisters *id)
60
index XXXXXXX..XXXXXXX 100644
40
return isar_feature_aa64_tts2uxn(id) || isar_feature_aa32_tts2uxn(id);
61
--- a/linux-user/elfload.c
41
}
62
+++ b/linux-user/elfload.c
42
63
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
43
+static inline bool isar_feature_any_debugv8p2(const ARMISARegisters *id)
64
hwcaps |= ARM_HWCAP_A64_ASIMD;
44
+{
65
45
+ return isar_feature_aa64_debugv8p2(id) || isar_feature_aa32_debugv8p2(id);
66
/* probe for the extra features */
46
+}
67
-#define GET_FEATURE(feat, hwcap) \
68
- do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
69
#define GET_FEATURE_ID(feat, hwcap) \
70
do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
71
72
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
73
GET_FEATURE_ID(aa64_sha3, ARM_HWCAP_A64_SHA3);
74
GET_FEATURE_ID(aa64_sm3, ARM_HWCAP_A64_SM3);
75
GET_FEATURE_ID(aa64_sm4, ARM_HWCAP_A64_SM4);
76
- GET_FEATURE(ARM_FEATURE_V8_FP16,
77
- ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
78
+ GET_FEATURE_ID(aa64_fp16, ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
79
GET_FEATURE_ID(aa64_atomics, ARM_HWCAP_A64_ATOMICS);
80
GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
81
GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
82
GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
83
GET_FEATURE_ID(aa64_sve, ARM_HWCAP_A64_SVE);
84
85
-#undef GET_FEATURE
86
#undef GET_FEATURE_ID
87
88
return hwcaps;
89
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/cpu64.c
92
+++ b/target/arm/cpu64.c
93
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
94
95
t = cpu->isar.id_aa64pfr0;
96
t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
97
+ t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
98
+ t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
99
cpu->isar.id_aa64pfr0 = t;
100
101
/* Replicate the same data to the 32-bit id registers. */
102
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
103
u = FIELD_DP32(u, ID_ISAR6, DP, 1);
104
cpu->isar.id_isar6 = u;
105
106
-#ifdef CONFIG_USER_ONLY
107
- /* We don't set these in system emulation mode for the moment,
108
- * since we don't correctly set the ID registers to advertise them,
109
- * and in some cases they're only available in AArch64 and not AArch32,
110
- * whereas the architecture requires them to be present in both if
111
- * present in either.
112
+ /*
113
+ * FIXME: We do not yet support ARMv8.2-fp16 for AArch32,
114
+ * so do not set MVFR1.FPHP. Strictly speaking this is not legal,
115
+ * but it is also not legal to enable SVE without support for FP16,
116
+ * and enabling SVE in system mode is more useful in the short term.
117
*/
118
- set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
119
+
47
+
120
+#ifdef CONFIG_USER_ONLY
48
/*
121
/* For usermode -cpu max we can use a larger and more efficient DCZ
49
* Forward to the above feature tests given an ARMCPU pointer.
122
* blocksize since we don't have to follow what the hardware does.
50
*/
123
*/
124
diff --git a/target/arm/helper.c b/target/arm/helper.c
125
index XXXXXXX..XXXXXXX 100644
126
--- a/target/arm/helper.c
127
+++ b/target/arm/helper.c
128
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
129
uint32_t changed;
130
131
/* When ARMv8.2-FP16 is not supported, FZ16 is RES0. */
132
- if (!arm_feature(env, ARM_FEATURE_V8_FP16)) {
133
+ if (!cpu_isar_feature(aa64_fp16, arm_env_get_cpu(env))) {
134
val &= ~FPCR_FZ16;
135
}
136
137
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
138
index XXXXXXX..XXXXXXX 100644
139
--- a/target/arm/translate-a64.c
140
+++ b/target/arm/translate-a64.c
141
@@ -XXX,XX +XXX,XX @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
142
break;
143
case 3:
144
size = MO_16;
145
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
146
+ if (dc_isar_feature(aa64_fp16, s)) {
147
break;
148
}
149
/* fallthru */
150
@@ -XXX,XX +XXX,XX @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
151
break;
152
case 3:
153
size = MO_16;
154
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
155
+ if (dc_isar_feature(aa64_fp16, s)) {
156
break;
157
}
158
/* fallthru */
159
@@ -XXX,XX +XXX,XX @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
160
break;
161
case 3:
162
sz = MO_16;
163
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
164
+ if (dc_isar_feature(aa64_fp16, s)) {
165
break;
166
}
167
/* fallthru */
168
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
169
handle_fp_1src_double(s, opcode, rd, rn);
170
break;
171
case 3:
172
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
173
+ if (!dc_isar_feature(aa64_fp16, s)) {
174
unallocated_encoding(s);
175
return;
176
}
177
@@ -XXX,XX +XXX,XX @@ static void disas_fp_2src(DisasContext *s, uint32_t insn)
178
handle_fp_2src_double(s, opcode, rd, rn, rm);
179
break;
180
case 3:
181
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
182
+ if (!dc_isar_feature(aa64_fp16, s)) {
183
unallocated_encoding(s);
return;
}
@@ -XXX,XX +XXX,XX @@ static void disas_fp_3src(DisasContext *s, uint32_t insn)
handle_fp_3src_double(s, o0, o1, rd, rn, rm, ra);
break;
case 3:
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+ if (!dc_isar_feature(aa64_fp16, s)) {
unallocated_encoding(s);
return;
}
@@ -XXX,XX +XXX,XX @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
break;
case 3:
sz = MO_16;
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+ if (dc_isar_feature(aa64_fp16, s)) {
break;
}
/* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_fixed_conv(DisasContext *s, uint32_t insn)
case 1: /* float64 */
break;
case 3: /* float16 */
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+ if (dc_isar_feature(aa64_fp16, s)) {
break;
}
/* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
break;
case 0x6: /* 16-bit float, 32-bit int */
case 0xe: /* 16-bit float, 64-bit int */
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+ if (dc_isar_feature(aa64_fp16, s)) {
break;
}
/* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
case 1: /* float64 */
break;
case 3: /* float16 */
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+ if (dc_isar_feature(aa64_fp16, s)) {
break;
}
/* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
*/
is_min = extract32(size, 1, 1);
is_fp = true;
- if (!is_u && arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+ if (!is_u && dc_isar_feature(aa64_fp16, s)) {
size = 1;
} else if (!is_u || !is_q || extract32(size, 0, 1)) {
unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)

if (o2 != 0 || ((cmode == 0xf) && is_neg && !is_q)) {
/* Check for FMOV (vector, immediate) - half-precision */
- if (!(arm_dc_feature(s, ARM_FEATURE_V8_FP16) && o2 && cmode == 0xf)) {
+ if (!(dc_isar_feature(aa64_fp16, s) && o2 && cmode == 0xf)) {
unallocated_encoding(s);
return;
}
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
case 0x2f: /* FMINP */
/* FP op, size[0] is 32 or 64 bit*/
if (!u) {
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+ if (!dc_isar_feature(aa64_fp16, s)) {
unallocated_encoding(s);
return;
} else {
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
size = MO_32;
} else if (immh & 2) {
size = MO_16;
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+ if (!dc_isar_feature(aa64_fp16, s)) {
unallocated_encoding(s);
return;
}
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
size = MO_32;
} else if (immh & 0x2) {
size = MO_16;
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+ if (!dc_isar_feature(aa64_fp16, s)) {
unallocated_encoding(s);
return;
}
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
return;
}

- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+ if (!dc_isar_feature(aa64_fp16, s)) {
unallocated_encoding(s);
}

@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
TCGv_ptr fpst;
bool pairwise = false;

- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+ if (!dc_isar_feature(aa64_fp16, s)) {
unallocated_encoding(s);
return;
}
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
case 0x1c: /* FCADD, #90 */
case 0x1e: /* FCADD, #270 */
if (size == 0
- || (size == 1 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))
+ || (size == 1 && !dc_isar_feature(aa64_fp16, s))
|| (size == 3 && !is_q)) {
unallocated_encoding(s);
return;
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
bool need_fpst = true;
int rmode;

- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+ if (!dc_isar_feature(aa64_fp16, s)) {
unallocated_encoding(s);
return;
}
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
break;
}
- if (is_fp16 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+ if (is_fp16 && !dc_isar_feature(aa64_fp16, s)) {
unallocated_encoding(s);
return;
}
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
int size = extract32(insn, 20, 1);
data = extract32(insn, 23, 2); /* rot */
if (!dc_isar_feature(aa32_vcma, s)
- || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
+ || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
return 1;
}
fn_gvec_ptr = size ? gen_helper_gvec_fcmlas : gen_helper_gvec_fcmlah;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
int size = extract32(insn, 20, 1);
data = extract32(insn, 24, 1); /* rot */
if (!dc_isar_feature(aa32_vcma, s)
- || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
+ || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
return 1;
}
fn_gvec_ptr = size ? gen_helper_gvec_fcadds : gen_helper_gvec_fcaddh;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
return 1;
}
if (size == 0) {
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+ if (!dc_isar_feature(aa32_fp16_arith, s)) {
return 1;
}
/* For fp16, rm is just Vm, and index is M. */
--
2.19.1
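As an aside, here is an illustrative sketch (not part of the patch) of the shape of this conversion: instead of consulting a parallel ARM_FEATURE_* flag, the half-precision test is derived from the ID register field itself. The field position and encoding used below (ID_AA64PFR0.FP16 at bits [19:16], value 1 meaning fp16 is implemented) are assumptions for illustration, standing in for QEMU's FIELD_EX64 machinery.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Assumed field position/encoding, for illustration only. */
static bool isar_feature_aa64_fp16(uint64_t id_aa64pfr0)
{
    return ((id_aa64pfr0 >> 16) & 0xf) == 1;
}

int main(void)
{
    uint64_t id = 1ull << 16;                    /* FP16 field == 1 */
    printf("%d\n", isar_feature_aa64_fp16(id));  /* prints 1 */
    return 0;
}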
From: Richard Henderson <richard.henderson@linaro.org>

Having V6 alone imply jazelle was wrong for cortex-m0.
Change to an assertion for V6 & !M.

This was harmless, because the only place we tested ARM_FEATURE_JAZELLE
was for 'bxj' in disas_arm(), which is unreachable for M-profile cores.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 6 +++++-
target/arm/cpu.c | 17 ++++++++++++++---
target/arm/translate.c | 2 +-
3 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ enum arm_features {
ARM_FEATURE_PMU, /* has PMU support */
ARM_FEATURE_VBAR, /* has cp15 VBAR */
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
- ARM_FEATURE_JAZELLE, /* has (trivial) Jazelle implementation */
ARM_FEATURE_SVE, /* has Scalable Vector Extension */
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_arm_div(const ARMISARegisters *id)
return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
}

+static inline bool isar_feature_jazelle(const ARMISARegisters *id)
+{
+ return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
+}
+
static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
{
return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
}
if (arm_feature(env, ARM_FEATURE_V6)) {
set_feature(env, ARM_FEATURE_V5);
- set_feature(env, ARM_FEATURE_JAZELLE);
if (!arm_feature(env, ARM_FEATURE_M)) {
+ assert(cpu_isar_feature(jazelle, cpu));
set_feature(env, ARM_FEATURE_AUXCR);
}
}
@@ -XXX,XX +XXX,XX @@ static void arm926_initfn(Object *obj)
set_feature(&cpu->env, ARM_FEATURE_VFP);
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
- set_feature(&cpu->env, ARM_FEATURE_JAZELLE);
cpu->midr = 0x41069265;
cpu->reset_fpsid = 0x41011090;
cpu->ctr = 0x1dd20d2;
cpu->reset_sctlr = 0x00090078;
+
+ /*
+ * ARMv5 does not have the ID_ISAR registers, but we can still
+ * set the field to indicate Jazelle support within QEMU.
+ */
+ cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
}

static void arm946_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void arm1026_initfn(Object *obj)
set_feature(&cpu->env, ARM_FEATURE_AUXCR);
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
- set_feature(&cpu->env, ARM_FEATURE_JAZELLE);
cpu->midr = 0x4106a262;
cpu->reset_fpsid = 0x410110a0;
cpu->ctr = 0x1dd20d2;
cpu->reset_sctlr = 0x00090078;
cpu->reset_auxcr = 1;
+
+ /*
+ * ARMv5 does not have the ID_ISAR registers, but we can still
+ * set the field to indicate Jazelle support within QEMU.
+ */
+ cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
+
{
/* The 1026 had an IFAR at c6,c0,0,1 rather than the ARMv6 c6,c0,0,2 */
ARMCPRegInfo ifar = {
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@
#define ENABLE_ARCH_5 arm_dc_feature(s, ARM_FEATURE_V5)
/* currently all emulated v5 cores are also v5TE, so don't bother */
#define ENABLE_ARCH_5TE arm_dc_feature(s, ARM_FEATURE_V5)
-#define ENABLE_ARCH_5J arm_dc_feature(s, ARM_FEATURE_JAZELLE)
+#define ENABLE_ARCH_5J dc_isar_feature(jazelle, s)
#define ENABLE_ARCH_6 arm_dc_feature(s, ARM_FEATURE_V6)
#define ENABLE_ARCH_6K arm_dc_feature(s, ARM_FEATURE_V6K)
#define ENABLE_ARCH_6T2 arm_dc_feature(s, ARM_FEATURE_THUMB2)
--
2.19.1

From: Richard Henderson <richard.henderson@linaro.org>

Add the aa64 predicate for detecting RAS support from id registers.
We already have the aa32 version from the M-profile work.
Add the 'any' predicate for testing both aa64 and aa32.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-34-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_aa32_el1(const ARMISARegisters *id)
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, EL1) >= 2;
}

+static inline bool isar_feature_aa64_ras(const ARMISARegisters *id)
+{
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RAS) != 0;
+}
+
static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
{
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_any_debugv8p2(const ARMISARegisters *id)
return isar_feature_aa64_debugv8p2(id) || isar_feature_aa32_debugv8p2(id);
}

+static inline bool isar_feature_any_ras(const ARMISARegisters *id)
+{
+ return isar_feature_aa64_ras(id) || isar_feature_aa32_ras(id);
+}
+
/*
 * Forward to the above feature tests given an ARMCPU pointer.
 */
--
2.25.1
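A minimal standalone sketch (not part of either patch) of how such an ID-register predicate works, using plain C bit operations in place of QEMU's FIELD_EX32/FIELD_DP32 macros. The field position used here (ID_ISAR1.Jazelle at bits [31:28]) is an assumption for illustration.

#include <stdint.h>
#include <stdio.h>

#define JAZELLE_SHIFT 28
#define JAZELLE_LEN   4

/* Deposit a value into the assumed Jazelle field, as arm926_initfn does. */
static uint32_t deposit_jazelle(uint32_t reg, uint32_t val)
{
    uint32_t mask = ((1u << JAZELLE_LEN) - 1) << JAZELLE_SHIFT;
    return (reg & ~mask) | ((val << JAZELLE_SHIFT) & mask);
}

/* Extract and test the field, as isar_feature_jazelle does. */
static int has_jazelle(uint32_t id_isar1)
{
    return ((id_isar1 >> JAZELLE_SHIFT) & ((1u << JAZELLE_LEN) - 1)) != 0;
}

int main(void)
{
    uint32_t id_isar1 = deposit_jazelle(0, 1);
    printf("jazelle: %d\n", has_jazelle(id_isar1));  /* prints 1 */
    return 0;
}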
Deleted patch

For AArch32, exception return happens through certain kinds
of CPSR write. We don't currently have any CPU_LOG_INT logging
of these events (unlike AArch64, where we log in the ERET
instruction). Add some suitable logging.

This will log exception returns like this:
Exception return from AArch32 hyp to usr PC 0x80100374

paralleling the existing logging in the exception_return
helper for AArch64 exception returns:
Exception return from AArch64 EL2 to AArch64 EL0 PC 0x8003045c
Exception return from AArch64 EL2 to AArch32 EL0 PC 0x8003045c

(Note that an AArch32 exception return can only be
AArch32->AArch32, never to AArch64.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-2-peter.maydell@linaro.org
---
target/arm/internals.h | 18 ++++++++++++++++++
target/arm/helper.c | 10 ++++++++++
target/arm/translate.c | 7 +------
3 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t v7m_sp_limit(CPUARMState *env)
}
}

+/**
+ * aarch32_mode_name(): Return name of the AArch32 CPU mode
+ * @psr: Program Status Register indicating CPU mode
+ *
+ * Returns, for debug logging purposes, a printable representation
+ * of the AArch32 CPU mode ("svc", "usr", etc) as indicated by
+ * the low bits of the specified PSR.
+ */
+static inline const char *aarch32_mode_name(uint32_t psr)
+{
+ static const char cpu_mode_names[16][4] = {
+ "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
+ "???", "???", "hyp", "und", "???", "???", "???", "sys"
+ };
+
+ return cpu_mode_names[psr & 0xf];
+}
+
#endif
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
mask |= CPSR_IL;
val |= CPSR_IL;
}
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "Illegal AArch32 mode switch attempt from %s to %s\n",
+ aarch32_mode_name(env->uncached_cpsr),
+ aarch32_mode_name(val));
} else {
+ qemu_log_mask(CPU_LOG_INT, "%s %s to %s PC 0x%" PRIx32 "\n",
+ write_type == CPSRWriteExceptionReturn ?
+ "Exception return from AArch32" :
+ "AArch32 mode switch from",
+ aarch32_mode_name(env->uncached_cpsr),
+ aarch32_mode_name(val), env->regs[15]);
switch_mode(env, val & CPSR_M);
}
}
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb)
translator_loop(ops, &dc.base, cpu, tb);
}

-static const char *cpu_mode_names[16] = {
- "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
- "???", "???", "hyp", "und", "???", "???", "???", "sys"
-};
-
void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
int flags)
{
@@ -XXX,XX +XXX,XX @@ void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
psr & CPSR_V ? 'V' : '-',
psr & CPSR_T ? 'T' : 'A',
ns_status,
- cpu_mode_names[psr & 0xf], (psr & 0x10) ? 32 : 26);
+ aarch32_mode_name(psr), (psr & 0x10) ? 32 : 26);
}

if (flags & CPU_DUMP_FPU) {
--
2.19.1
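A standalone usage sketch (not part of the patch) showing what the logging helper resolves to for a few PSR values; the lookup table is copied from the patch above.

#include <stdint.h>
#include <stdio.h>

static const char *aarch32_mode_name(uint32_t psr)
{
    static const char cpu_mode_names[16][4] = {
        "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
        "???", "???", "hyp", "und", "???", "???", "???", "sys"
    };
    return cpu_mode_names[psr & 0xf];
}

int main(void)
{
    printf("%s\n", aarch32_mode_name(0x10));  /* usr */
    printf("%s\n", aarch32_mode_name(0x13));  /* svc */
    printf("%s\n", aarch32_mode_name(0x1a));  /* hyp */
    return 0;
}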
Deleted patch

The switch_mode() function is defined in target/arm/helper.c and used
only in that file and nowhere else, so we can make it file-local
rather than global.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-3-peter.maydell@linaro.org
---
target/arm/internals.h | 1 -
target/arm/helper.c | 6 ++++--
2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline int bank_number(int mode)
g_assert_not_reached();
}

-void switch_mode(CPUARMState *, int);
void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu);
void arm_translate_init(void);

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
V8M_SAttributes *sattrs);
#endif

+static void switch_mode(CPUARMState *env, int mode);
+
static int vfp_gdb_get_reg(CPUARMState *env, uint8_t *buf, int reg)
{
int nregs;
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
return 0;
}

-void switch_mode(CPUARMState *env, int mode)
+static void switch_mode(CPUARMState *env, int mode)
{
ARMCPU *cpu = arm_env_get_cpu(env);

@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)

#else

-void switch_mode(CPUARMState *env, int mode)
+static void switch_mode(CPUARMState *env, int mode)
{
int old_mode;
int i;
--
2.19.1
1
The A/I/F bits in ISR_EL1 should track the virtual interrupt
1
From: Alex Zuepke <alex.zuepke@tum.de>
2
status, not the physical interrupt status, if the associated
3
HCR_EL2.AMO/IMO/FMO bit is set. Implement this, rather than
4
always showing the physical interrupt status.
5
2
6
We don't currently implement anything to do with external
3
The ARMv8 manual defines that PMUSERENR_EL0.ER enables read-access
7
aborts, so this applies only to the I and F bits (though it
4
to both PMXEVCNTR_EL0 and PMEVCNTR<n>_EL0 registers, however,
8
ought to be possible for the outer guest to present a virtual
5
we only use it for PMXEVCNTR_EL0. Extend to PMEVCNTR<n>_EL0 as well.
9
external abort to the inner guest, even if QEMU doesn't
10
emulate physical external aborts, so there is missing
11
functionality in this area).
12
6
7
Signed-off-by: Alex Zuepke <alex.zuepke@tum.de>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220428132717.84190-1-alex.zuepke@tum.de
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20181012144235.19646-6-peter.maydell@linaro.org
16
---
11
---
17
target/arm/helper.c | 22 ++++++++++++++++++----
12
target/arm/helper.c | 4 ++--
18
1 file changed, 18 insertions(+), 4 deletions(-)
13
1 file changed, 2 insertions(+), 2 deletions(-)
19
14
20
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
21
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.c
17
--- a/target/arm/helper.c
23
+++ b/target/arm/helper.c
18
+++ b/target/arm/helper.c
24
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
19
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
25
CPUState *cs = ENV_GET_CPU(env);
20
.crm = 8 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7,
26
uint64_t ret = 0;
21
.access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS,
27
22
.readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
28
- if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
23
- .accessfn = pmreg_access },
29
- ret |= CPSR_I;
24
+ .accessfn = pmreg_access_xevcntr },
30
+ if (arm_hcr_el2_imo(env)) {
25
{ .name = pmevcntr_el0_name, .state = ARM_CP_STATE_AA64,
31
+ if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
26
.opc0 = 3, .opc1 = 3, .crn = 14, .crm = 8 | (3 & (i >> 3)),
32
+ ret |= CPSR_I;
27
- .opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access,
33
+ }
28
+ .opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access_xevcntr,
34
+ } else {
29
.type = ARM_CP_IO,
35
+ if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
30
.readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
36
+ ret |= CPSR_I;
31
.raw_readfn = pmevcntr_rawread,
37
+ }
38
}
39
- if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
40
- ret |= CPSR_F;
41
+
42
+ if (arm_hcr_el2_fmo(env)) {
43
+ if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
44
+ ret |= CPSR_F;
45
+ }
46
+ } else {
47
+ if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
48
+ ret |= CPSR_F;
49
+ }
50
}
51
+
52
/* External aborts are not possible in QEMU so A bit is always clear */
53
return ret;
54
}
55
--
32
--
56
2.19.1
33
2.25.1
57
58
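A minimal sketch (not part of either patch) of the ISR_EL1 I-bit selection rule the first patch implements: report the virtual IRQ line when HCR_EL2.IMO routes IRQs to EL2, else report the physical one. The booleans stand in for QEMU's cs->interrupt_request tests.

#include <stdbool.h>
#include <stdio.h>

static bool isr_i_bit(bool hcr_imo, bool virq_pending, bool irq_pending)
{
    return hcr_imo ? virq_pending : irq_pending;
}

int main(void)
{
    /* IMO set: only the virtual IRQ shows up in ISR_EL1.I */
    printf("%d\n", isr_i_bit(true, true, false));   /* 1 */
    printf("%d\n", isr_i_bit(true, false, true));   /* 0 */
    /* IMO clear: the physical IRQ shows up */
    printf("%d\n", isr_i_bit(false, false, true));  /* 1 */
    return 0;
}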
Deleted patch

The HCR_EL2 VI and VF bits are supposed to track whether there is
a pending virtual IRQ or virtual FIQ. For QEMU we store the
pending VIRQ/VFIQ status in cs->interrupt_request, so this means:
* if the register is read we must get these bit values from
cs->interrupt_request
* if the register is written then we must write the bit
values back into cs->interrupt_request

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-7-peter.maydell@linaro.org
---
target/arm/helper.c | 47 +++++++++++++++++++++++++++++++++++++++++----
1 file changed, 43 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
{
ARMCPU *cpu = arm_env_get_cpu(env);
+ CPUState *cs = ENV_GET_CPU(env);
uint64_t valid_mask = HCR_MASK;

if (arm_feature(env, ARM_FEATURE_EL3)) {
@@ -XXX,XX +XXX,XX @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
/* Clear RES0 bits. */
value &= valid_mask;

+ /*
+ * VI and VF are kept in cs->interrupt_request. Modifying that
+ * requires that we have the iothread lock, which is done by
+ * marking the reginfo structs as ARM_CP_IO.
+ * Note that if a write to HCR pends a VIRQ or VFIQ it is never
+ * possible for it to be taken immediately, because VIRQ and
+ * VFIQ are masked unless running at EL0 or EL1, and HCR
+ * can only be written at EL2.
+ */
+ g_assert(qemu_mutex_iothread_locked());
+ if (value & HCR_VI) {
+ cs->interrupt_request |= CPU_INTERRUPT_VIRQ;
+ } else {
+ cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
+ }
+ if (value & HCR_VF) {
+ cs->interrupt_request |= CPU_INTERRUPT_VFIQ;
+ } else {
+ cs->interrupt_request &= ~CPU_INTERRUPT_VFIQ;
+ }
+ value &= ~(HCR_VI | HCR_VF);
+
/* These bits change the MMU setup:
* HCR_VM enables stage 2 translation
* HCR_PTW forbids certain page-table setups
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
hcr_write(env, NULL, value);
}

+static uint64_t hcr_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+ /* The VI and VF bits live in cs->interrupt_request */
+ uint64_t ret = env->cp15.hcr_el2 & ~(HCR_VI | HCR_VF);
+ CPUState *cs = ENV_GET_CPU(env);
+
+ if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
+ ret |= HCR_VI;
+ }
+ if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
+ ret |= HCR_VF;
+ }
+ return ret;
+}
+
static const ARMCPRegInfo el2_cp_reginfo[] = {
{ .name = "HCR_EL2", .state = ARM_CP_STATE_AA64,
+ .type = ARM_CP_IO,
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
- .writefn = hcr_write },
+ .writefn = hcr_write, .readfn = hcr_read },
{ .name = "HCR", .state = ARM_CP_STATE_AA32,
- .type = ARM_CP_ALIAS,
+ .type = ARM_CP_ALIAS | ARM_CP_IO,
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
- .writefn = hcr_writelow },
+ .writefn = hcr_writelow, .readfn = hcr_read },
{ .name = "ELR_EL2", .state = ARM_CP_STATE_AA64,
.type = ARM_CP_ALIAS,
.opc0 = 3, .opc1 = 4, .crn = 4, .crm = 0, .opc2 = 1,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {

static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
{ .name = "HCR2", .state = ARM_CP_STATE_AA32,
- .type = ARM_CP_ALIAS,
+ .type = ARM_CP_ALIAS | ARM_CP_IO,
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
.access = PL2_RW,
.fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2),
--
2.19.1
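A sketch (not part of the patch) of the round-trip the patch implements: VI is not stored in cp15.hcr_el2 but recreated from a pending-interrupt flag on every read. The HCR_VI bit position below is an assumption for illustration, and global variables stand in for the CPU state structures.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HCR_VI (1ull << 7)   /* assumed bit position, illustration only */

static bool virq_pending;    /* stands in for CPU_INTERRUPT_VIRQ */
static uint64_t hcr_el2;

static void hcr_write(uint64_t value)
{
    virq_pending = value & HCR_VI;
    hcr_el2 = value & ~HCR_VI;   /* VI lives only in the pending flag */
}

static uint64_t hcr_read(void)
{
    return hcr_el2 | (virq_pending ? HCR_VI : 0);
}

int main(void)
{
    hcr_write(HCR_VI);
    printf("%d\n", (int)!!(hcr_read() & HCR_VI));  /* prints 1 */
    return 0;
}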
Deleted patch

If the HCR_EL2 PTW virtualization configuration register bit
is set, a stage 2 Permission fault must be generated if a
stage 1 translation table access is made to an address that
is mapped as Device memory in stage 2. Implement this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-8-peter.maydell@linaro.org
---
target/arm/helper.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
hwaddr s2pa;
int s2prot;
int ret;
+ ARMCacheAttrs cacheattrs = {};
+ ARMCacheAttrs *pcacheattrs = NULL;
+
+ if (env->cp15.hcr_el2 & HCR_PTW) {
+ /*
+ * PTW means we must fault if this S1 walk touches S2 Device
+ * memory; otherwise we don't care about the attributes and can
+ * save the S2 translation the effort of computing them.
+ */
+ pcacheattrs = &cacheattrs;
+ }

ret = get_phys_addr_lpae(env, addr, 0, ARMMMUIdx_S2NS, &s2pa,
- &txattrs, &s2prot, &s2size, fi, NULL);
+ &txattrs, &s2prot, &s2size, fi, pcacheattrs);
if (ret) {
assert(fi->type != ARMFault_None);
fi->s2addr = addr;
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
fi->s1ptw = true;
return ~0;
}
+ if (pcacheattrs && (pcacheattrs->attrs & 0xf0) == 0) {
+ /* Access was to Device memory: generate Permission fault */
+ fi->type = ARMFault_Permission;
+ fi->s2addr = addr;
+ fi->stage2 = true;
+ fi->s1ptw = true;
+ return ~0;
+ }
addr = s2pa;
}
return addr;
--
2.19.1
Deleted patch

Create and use a utility function to extract the EC field
from a syndrome, rather than open-coding the shift.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-9-peter.maydell@linaro.org
---
target/arm/internals.h | 5 +++++
target/arm/helper.c | 4 ++--
target/arm/kvm64.c | 2 +-
target/arm/op_helper.c | 2 +-
4 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
#define ARM_EL_IL (1 << ARM_EL_IL_SHIFT)
#define ARM_EL_ISV (1 << ARM_EL_ISV_SHIFT)

+static inline uint32_t syn_get_ec(uint32_t syn)
+{
+ return syn >> ARM_EL_EC_SHIFT;
+}
+
/* Utility functions for constructing various kinds of syndrome value.
 * Note that in general we follow the AArch64 syndrome values; in a
 * few cases the value in HSR for exceptions taken to AArch32 Hyp
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
uint32_t moe;

/* If this is a debug exception we must update the DBGDSCR.MOE bits */
- switch (env->exception.syndrome >> ARM_EL_EC_SHIFT) {
+ switch (syn_get_ec(env->exception.syndrome)) {
case EC_BREAKPOINT:
case EC_BREAKPOINT_SAME_EL:
moe = 1;
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
if (qemu_loglevel_mask(CPU_LOG_INT)
&& !excp_is_internal(cs->exception_index)) {
qemu_log_mask(CPU_LOG_INT, "...with ESR 0x%x/0x%" PRIx32 "\n",
- env->exception.syndrome >> ARM_EL_EC_SHIFT,
+ syn_get_ec(env->exception.syndrome),
env->exception.syndrome);
}

diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_remove_sw_breakpoint(CPUState *cs, struct kvm_sw_breakpoint *bp)

bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit)
{
- int hsr_ec = debug_exit->hsr >> ARM_EL_EC_SHIFT;
+ int hsr_ec = syn_get_ec(debug_exit->hsr);
ARMCPU *cpu = ARM_CPU(cs);
CPUClass *cc = CPU_GET_CLASS(cs);
CPUARMState *env = &cpu->env;
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void raise_exception(CPUARMState *env, uint32_t excp,
 * (see DDI0478C.a D1.10.4)
 */
target_el = 2;
- if (syndrome >> ARM_EL_EC_SHIFT == EC_ADVSIMDFPACCESSTRAP) {
+ if (syn_get_ec(syndrome) == EC_ADVSIMDFPACCESSTRAP) {
syndrome = syn_uncategorized();
}
}
--
2.19.1
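A worked example (not part of the patch) of what the helper computes. The shift value of 26 matches QEMU's ARM_EL_EC_SHIFT, since ESR_ELx.EC occupies bits [31:26]; the EC value 0x07 (FP/SIMD access trap) is used for illustration.

#include <stdint.h>
#include <stdio.h>

#define ARM_EL_EC_SHIFT 26

static uint32_t syn_get_ec(uint32_t syn)
{
    return syn >> ARM_EL_EC_SHIFT;
}

int main(void)
{
    /* EC 0x07 with some low ISS bits set */
    uint32_t syn = (0x07u << ARM_EL_EC_SHIFT) | 0xa;
    printf("0x%x\n", syn_get_ec(syn));  /* prints 0x7 */
    return 0;
}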
Deleted patch

For the v7 version of the Arm architecture, the IL bit in
syndrome register values where the field is not valid was
defined to be UNK/SBZP. In v8 this is RES1, which is what
QEMU currently implements. Handle the desired v7 behaviour
by squashing the IL bit for the affected cases:
* EC == EC_UNCATEGORIZED
* prefetch aborts
* data aborts where ISV is 0

(The fourth case listed in the v8 Arm ARM DDI 0487C.a in
section G7.2.70, "illegal state exception", can't happen
on a v7 CPU.)

This deals with a corner case noted in a comment.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-10-peter.maydell@linaro.org
---
target/arm/internals.h | 7 ++-----
target/arm/helper.c | 13 +++++++++++++
2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_get_ec(uint32_t syn)
/* Utility functions for constructing various kinds of syndrome value.
 * Note that in general we follow the AArch64 syndrome values; in a
 * few cases the value in HSR for exceptions taken to AArch32 Hyp
- * mode differs slightly, so if we ever implemented Hyp mode then the
- * syndrome value would need some massaging on exception entry.
- * (One example of this is that AArch64 defaults to IL bit set for
- * exceptions which don't specifically indicate information about the
- * trapping instruction, whereas AArch32 defaults to IL bit clear.)
+ * mode differs slightly, and we fix this up when populating HSR in
+ * arm_cpu_do_interrupt_aarch32_hyp().
 */
static inline uint32_t syn_uncategorized(void)
{
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs)
}

if (cs->exception_index != EXCP_IRQ && cs->exception_index != EXCP_FIQ) {
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
+ /*
+ * QEMU syndrome values are v8-style. v7 has the IL bit
+ * UNK/SBZP for "field not valid" cases, where v8 uses RES1.
+ * If this is a v7 CPU, squash the IL bit in those cases.
+ */
+ if (cs->exception_index == EXCP_PREFETCH_ABORT ||
+ (cs->exception_index == EXCP_DATA_ABORT &&
+ !(env->exception.syndrome & ARM_EL_ISV)) ||
+ syn_get_ec(env->exception.syndrome) == EC_UNCATEGORIZED) {
+ env->exception.syndrome &= ~ARM_EL_IL;
+ }
+ }
env->cp15.esr_el[2] = env->exception.syndrome;
}

--
2.19.1
Deleted patch

For traps of FP/SIMD instructions to AArch32 Hyp mode, the syndrome
provided in HSR has more information than is reported to AArch64.
Specifically, there are extra fields TA and coproc which indicate
whether the trapped instruction was FP or SIMD. Add this extra
information to the syndromes we construct, and mask it out when
taking the exception to AArch64.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-11-peter.maydell@linaro.org
---
target/arm/internals.h | 14 +++++++++++++-
target/arm/helper.c | 9 +++++++++
target/arm/translate.c | 8 ++++----
3 files changed, 26 insertions(+), 5 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_get_ec(uint32_t syn)
 * few cases the value in HSR for exceptions taken to AArch32 Hyp
 * mode differs slightly, and we fix this up when populating HSR in
 * arm_cpu_do_interrupt_aarch32_hyp().
+ * The exception is FP/SIMD access traps -- these report extra information
+ * when taking an exception to AArch32. For those we include the extra coproc
+ * and TA fields, and mask them out when taking the exception to AArch64.
 */
static inline uint32_t syn_uncategorized(void)
{
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_cp15_rrt_trap(int cv, int cond, int opc1, int crm,

static inline uint32_t syn_fp_access_trap(int cv, int cond, bool is_16bit)
{
+ /* AArch32 FP trap or any AArch64 FP/SIMD trap: TA == 0 coproc == 0xa */
return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
| (is_16bit ? 0 : ARM_EL_IL)
- | (cv << 24) | (cond << 20);
+ | (cv << 24) | (cond << 20) | 0xa;
+}
+
+static inline uint32_t syn_simd_access_trap(int cv, int cond, bool is_16bit)
+{
+ /* AArch32 SIMD trap: TA == 1 coproc == 0 */
+ return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
+ | (is_16bit ? 0 : ARM_EL_IL)
+ | (cv << 24) | (cond << 20) | (1 << 5);
}

static inline uint32_t syn_sve_access_trap(void)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
case EXCP_HVC:
case EXCP_HYP_TRAP:
case EXCP_SMC:
+ if (syn_get_ec(env->exception.syndrome) == EC_ADVSIMDFPACCESSTRAP) {
+ /*
+ * QEMU internal FP/SIMD syndromes from AArch32 include the
+ * TA and coproc fields which are only exposed if the exception
+ * is taken to AArch32 Hyp mode. Mask them out to get a valid
+ * AArch64 format syndrome.
+ */
+ env->exception.syndrome &= ~MAKE_64BIT_MASK(0, 20);
+ }
env->cp15.esr_el[new_el] = env->exception.syndrome;
break;
case EXCP_IRQ:
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
 */
if (s->fp_excp_el) {
gen_exception_insn(s, 4, EXCP_UDEF,
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
return 0;
}

@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 */
if (s->fp_excp_el) {
gen_exception_insn(s, 4, EXCP_UDEF,
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
return 0;
}

@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)

if (s->fp_excp_el) {
gen_exception_insn(s, 4, EXCP_UDEF,
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
return 0;
}
if (!s->vfp_enabled) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)

if (s->fp_excp_el) {
gen_exception_insn(s, 4, EXCP_UDEF,
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
return 0;
}
if (!s->vfp_enabled) {
--
2.19.1
Deleted patch

From: Stewart Hildebrand <Stewart.Hildebrand@dornerworks.com>

"The Image must be placed text_offset bytes from a 2MB aligned base
address anywhere in usable system RAM and called there."

For the virt board, we write our startup bootloader at the very
bottom of RAM, so that bit can't be used for the image. To avoid
overlap in case the image requests to be loaded at an offset
smaller than our bootloader, we increment the load offset to the
next 2MB.

This fixes a boot failure for Xen AArch64.

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
Tested-by: Andre Przywara <andre.przywara@arm.com>
Message-id: b8a89518794b4436af0c151ed10de4fa@dornerworks.com
[PMM: Rephrased a comment a bit]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/boot.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/config-file.h"
#include "qemu/option.h"
#include "exec/address-spaces.h"
+#include "qemu/units.h"

/* Kernel boot protocol is specified in the kernel docs
 * Documentation/arm/Booting and Documentation/arm64/booting.txt
@@ -XXX,XX +XXX,XX @@
#define ARM64_TEXT_OFFSET_OFFSET 8
#define ARM64_MAGIC_OFFSET 56

+#define BOOTLOADER_MAX_SIZE (4 * KiB)
+
AddressSpace *arm_boot_address_space(ARMCPU *cpu,
const struct arm_boot_info *info)
{
@@ -XXX,XX +XXX,XX @@ static void write_bootloader(const char *name, hwaddr addr,
code[i] = tswap32(insn);
}

+ assert((len * sizeof(uint32_t)) < BOOTLOADER_MAX_SIZE);
+
rom_add_blob_fixed_as(name, code, len * sizeof(uint32_t), addr, as);

g_free(code);
@@ -XXX,XX +XXX,XX @@ static uint64_t load_aarch64_image(const char *filename, hwaddr mem_base,
memcpy(&hdrvals, buffer + ARM64_TEXT_OFFSET_OFFSET, sizeof(hdrvals));
if (hdrvals[1] != 0) {
kernel_load_offset = le64_to_cpu(hdrvals[0]);
+
+ /*
+ * We write our startup "bootloader" at the very bottom of RAM,
+ * so that bit can't be used for the image. Luckily the Image
+ * format specification is that the image requests only an offset
+ * from a 2MB boundary, not an absolute load address. So if the
+ * image requests an offset that might mean it overlaps with the
+ * bootloader, we can just load it starting at 2MB+offset rather
+ * than 0MB + offset.
+ */
+ if (kernel_load_offset < BOOTLOADER_MAX_SIZE) {
+ kernel_load_offset += 2 * MiB;
+ }
}
}

--
2.19.1
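A worked example (not part of the patch) of the offset bump: with the 4 KiB bootloader budget from the patch, an Image asking for text_offset 0x1000 would overlap the bootloader, so it is loaded at 2 MiB + 0x1000 instead.

#include <stdint.h>
#include <stdio.h>

#define KiB 1024
#define MiB (1024 * 1024)
#define BOOTLOADER_MAX_SIZE (4 * KiB)

int main(void)
{
    uint64_t kernel_load_offset = 0x1000;   /* from the Image header */

    /* Same check as load_aarch64_image() in the patch above */
    if (kernel_load_offset < BOOTLOADER_MAX_SIZE) {
        kernel_load_offset += 2 * MiB;
    }
    printf("0x%llx\n", (unsigned long long)kernel_load_offset); /* 0x201000 */
    return 0;
}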
Deleted patch

From: Richard Henderson <rth@twiddle.net>

This can reduce the number of opcodes required for certain
complex forms of load-multiple (e.g. ld4.16b).

Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 20181011205206.3552-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
bool is_store = !extract32(insn, 22, 1);
bool is_postidx = extract32(insn, 23, 1);
bool is_q = extract32(insn, 30, 1);
- TCGv_i64 tcg_addr, tcg_rn;
+ TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;

int ebytes = 1 << size;
int elements = (is_q ? 128 : 64) / (8 << size);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
tcg_rn = cpu_reg_sp(s, rn);
tcg_addr = tcg_temp_new_i64();
tcg_gen_mov_i64(tcg_addr, tcg_rn);
+ tcg_ebytes = tcg_const_i64(ebytes);

for (r = 0; r < rpt; r++) {
int e;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
clear_vec_high(s, is_q, tt);
}
}
- tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
+ tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
tt = (tt + 1) % 32;
}
}
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
}
}
+ tcg_temp_free_i64(tcg_ebytes);
tcg_temp_free_i64(tcg_addr);
}

@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
bool replicate = false;
int index = is_q << 3 | S << 2 | size;
int ebytes, xs;
- TCGv_i64 tcg_addr, tcg_rn;
+ TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;

switch (scale) {
case 3:
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
tcg_rn = cpu_reg_sp(s, rn);
tcg_addr = tcg_temp_new_i64();
tcg_gen_mov_i64(tcg_addr, tcg_rn);
+ tcg_ebytes = tcg_const_i64(ebytes);

for (xs = 0; xs < selem; xs++) {
if (replicate) {
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
do_vec_st(s, rt, index, tcg_addr, scale);
}
}
- tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
+ tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
rt = (rt + 1) % 32;
}

@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
}
}
+ tcg_temp_free_i64(tcg_ebytes);
tcg_temp_free_i64(tcg_addr);
}

--
2.19.1
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

This is done generically in translator_loop.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181011205206.3552-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 1 -
target/arm/translate.c | 1 -
2 files changed, 2 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,

static void aarch64_tr_tb_start(DisasContextBase *db, CPUState *cpu)
{
- tcg_clear_temp_count();
}

static void aarch64_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_start(DisasContextBase *dcbase, CPUState *cpu)
tcg_gen_movi_i32(tmp, 0);
store_cpu_field(tmp, condexec_bits);
}
- tcg_clear_temp_count();
}

static void arm_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
--
2.19.1
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 28 +++-------------------------
1 file changed, 3 insertions(+), 25 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
for (xs = 0; xs < selem; xs++) {
if (replicate) {
/* Load and replicate to all elements */
- uint64_t mulconst;
TCGv_i64 tcg_tmp = tcg_temp_new_i64();

tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr,
get_mem_index(s), s->be_data + scale);
- switch (scale) {
- case 0:
- mulconst = 0x0101010101010101ULL;
- break;
- case 1:
- mulconst = 0x0001000100010001ULL;
- break;
- case 2:
- mulconst = 0x0000000100000001ULL;
- break;
- case 3:
- mulconst = 0;
- break;
- default:
- g_assert_not_reached();
- }
- if (mulconst) {
- tcg_gen_muli_i64(tcg_tmp, tcg_tmp, mulconst);
- }
- write_vec_element(s, tcg_tmp, rt, 0, MO_64);
- if (is_q) {
- write_vec_element(s, tcg_tmp, rt, 1, MO_64);
- }
+ tcg_gen_gvec_dup_i64(scale, vec_full_reg_offset(s, rt),
+ (is_q + 1) * 8, vec_full_reg_size(s),
+ tcg_tmp);
tcg_temp_free_i64(tcg_tmp);
- clear_vec_high(s, is_q, rt);
} else {
/* Load/store one element per register */
if (is_load) {
--
2.19.1
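A side note on the deleted mulconst trick, with a worked example (not part of the patch): multiplying a single byte by 0x0101010101010101 replicates it across all eight byte lanes, which is the replication that tcg_gen_gvec_dup_i64 now performs internally.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t b = 0xab;  /* one loaded byte */
    printf("0x%016llx\n",
           (unsigned long long)(b * 0x0101010101010101ULL));
    /* prints 0xabababababababab */
    return 0;
}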
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181011205206.3552-6-richard.henderson@linaro.org
[PMM: drop change to now-deleted cpu_mode_names array]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 cpu_F0d, cpu_F1d;

#include "exec/gen-icount.h"

-static const char *regnames[] =
+static const char * const regnames[] =
{ "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7",
"r8", "r9", "r10", "r11", "r12", "r13", "r14", "pc" };

@@ -XXX,XX +XXX,XX @@ static struct {
int nregs;
int interleave;
int spacing;
-} neon_ls_element_type[11] = {
+} const neon_ls_element_type[11] = {
{4, 4, 1},
{4, 4, 2},
{4, 1, 1},
--
2.19.1
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 67 ++++++++++++++++++++++++------------------
1 file changed, 39 insertions(+), 28 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
return 1;
}
} else { /* (insn & 0x00380080) == 0 */
- int invert;
+ int invert, reg_ofs, vec_size;
+
if (q && (rd & 1)) {
return 1;
}
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
break;
case 14:
imm |= (imm << 8) | (imm << 16) | (imm << 24);
- if (invert)
+ if (invert) {
imm = ~imm;
+ }
break;
case 15:
if (invert) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
| ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
break;
}
- if (invert)
+ if (invert) {
imm = ~imm;
+ }

- for (pass = 0; pass < (q ? 4 : 2); pass++) {
- if (op & 1 && op < 12) {
- tmp = neon_load_reg(rd, pass);
- if (invert) {
- /* The immediate value has already been inverted, so
- BIC becomes AND. */
- tcg_gen_andi_i32(tmp, tmp, imm);
- } else {
- tcg_gen_ori_i32(tmp, tmp, imm);
- }
+ reg_ofs = neon_reg_offset(rd, 0);
+ vec_size = q ? 16 : 8;
+
+ if (op & 1 && op < 12) {
+ if (invert) {
+ /* The immediate value has already been inverted,
+ * so BIC becomes AND.
+ */
+ tcg_gen_gvec_andi(MO_32, reg_ofs, reg_ofs, imm,
+ vec_size, vec_size);
} else {
- /* VMOV, VMVN. */
- tmp = tcg_temp_new_i32();
- if (op == 14 && invert) {
- int n;
- uint32_t val;
- val = 0;
- for (n = 0; n < 4; n++) {
- if (imm & (1 << (n + (pass & 1) * 4)))
- val |= 0xff << (n * 8);
- }
- tcg_gen_movi_i32(tmp, val);
- } else {
- tcg_gen_movi_i32(tmp, imm);
- }
+ tcg_gen_gvec_ori(MO_32, reg_ofs, reg_ofs, imm,
+ vec_size, vec_size);
+ }
+ } else {
+ /* VMOV, VMVN. */
+ if (op == 14 && invert) {
+ TCGv_i64 t64 = tcg_temp_new_i64();
+
+ for (pass = 0; pass <= q; ++pass) {
+ uint64_t val = 0;
+ int n;
+
+ for (n = 0; n < 8; n++) {
+ if (imm & (1 << (n + pass * 8))) {
+ val |= 0xffull << (n * 8);
+ }
+ }
+ tcg_gen_movi_i64(t64, val);
+ neon_store_reg64(t64, rd + pass);
+ }
+ tcg_temp_free_i64(t64);
+ } else {
+ tcg_gen_gvec_dup32i(reg_ofs, vec_size, vec_size, imm);
}
- neon_store_reg(rd, pass, tmp);
}
}
} else { /* (insn & 0x00800010 == 0x00800000) */
--
2.19.1
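A tiny sketch (not part of the patch) of the identity behind the "BIC becomes AND" comment: x BIC imm equals x AND ~imm, so by inverting the immediate once up front, both the BIC and ORR forms can be emitted as a single AND or OR over the whole vector.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t x = 0xf0f0f0f0, imm = 0x0000ffff;
    uint32_t bic   = x & ~imm;  /* direct BIC */
    uint32_t inv   = ~imm;      /* invert once, up front */
    uint32_t anded = x & inv;   /* then BIC is just AND */
    printf("%d\n", bic == anded);  /* prints 1 */
    return 0;
}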
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 29 ++++++++++-------------------
1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
break;
}
return 0;
+
+ case NEON_3R_VADD_VSUB:
+ if (u) {
+ tcg_gen_gvec_sub(size, rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size);
+ } else {
+ tcg_gen_gvec_add(size, rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size);
+ }
+ return 0;
}
if (size == 3) {
/* 64-bit element instructions. */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
cpu_V1, cpu_V0);
}
break;
- case NEON_3R_VADD_VSUB:
- if (u) {
- tcg_gen_sub_i64(CPU_V001);
- } else {
- tcg_gen_add_i64(CPU_V001);
- }
- break;
default:
abort();
}
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
tmp2 = neon_load_reg(rd, pass);
gen_neon_add(size, tmp, tmp2);
break;
- case NEON_3R_VADD_VSUB:
- if (!u) { /* VADD */
- gen_neon_add(size, tmp, tmp2);
- } else { /* VSUB */
- switch (size) {
- case 0: gen_helper_neon_sub_u8(tmp, tmp, tmp2); break;
- case 1: gen_helper_neon_sub_u16(tmp, tmp, tmp2); break;
- case 2: tcg_gen_sub_i32(tmp, tmp, tmp2); break;
- default: abort();
- }
- }
- break;
case NEON_3R_VTST_VCEQ:
if (!u) { /* VTST */
switch (size) {
--
2.19.1
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
tcg_temp_free_ptr(ptr1);
tcg_temp_free_ptr(ptr2);
break;
+
+ case NEON_2RM_VMVN:
+ tcg_gen_gvec_not(0, rd_ofs, rm_ofs, vec_size, vec_size);
+ break;
+ case NEON_2RM_VNEG:
+ tcg_gen_gvec_neg(size, rd_ofs, rm_ofs, vec_size, vec_size);
+ break;
+
default:
elementwise:
for (pass = 0; pass < (q ? 4 : 2); pass++) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
case NEON_2RM_VCNT:
gen_helper_neon_cnt_u8(tmp, tmp);
break;
- case NEON_2RM_VMVN:
- tcg_gen_not_i32(tmp, tmp);
- break;
case NEON_2RM_VQABS:
switch (size) {
case 0:
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
default: abort();
}
break;
- case NEON_2RM_VNEG:
- tmp2 = tcg_const_i32(0);
- gen_neon_rsb(size, tmp, tmp2);
- tcg_temp_free_i32(tmp2);
- break;
case NEON_2RM_VCGT0_F:
{
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
--
2.19.1
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 31 +++++++++++++++----------------
1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
vec_size, vec_size);
}
return 0;
+
+ case NEON_3R_VMUL: /* VMUL */
+ if (u) {
+ /* Polynomial case allows only P8 and is handled below. */
+ if (size != 0) {
+ return 1;
+ }
+ } else {
+ tcg_gen_gvec_mul(size, rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size);
+ return 0;
+ }
+ break;
}
if (size == 3) {
/* 64-bit element instructions. */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
return 1;
}
break;
- case NEON_3R_VMUL:
- if (u && (size != 0)) {
- /* UNDEF on invalid size for polynomial subcase */
- return 1;
- }
- break;
case NEON_3R_VFM_VQRDMLSH:
if (!arm_dc_feature(s, ARM_FEATURE_VFP4)) {
return 1;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
}
break;
case NEON_3R_VMUL:
- if (u) { /* polynomial */
- gen_helper_neon_mul_p8(tmp, tmp, tmp2);
- } else { /* Integer */
- switch (size) {
- case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
- case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
- case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
- default: abort();
- }
- }
+ /* VMUL.P8; other cases already eliminated. */
+ gen_helper_neon_mul_p8(tmp, tmp, tmp2);
break;
case NEON_3R_VPMAX:
GEN_NEON_INTEGER_OP(pmax);
--
2.19.1
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-18-richard.henderson@linaro.org
[PMM: added parens in ?: expression]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 81 ++++++++++++++----------------------------
1 file changed, 26 insertions(+), 55 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_vfp_msr(TCGv_i32 tmp)
tcg_temp_free_i32(tmp);
}

-static void gen_neon_dup_u8(TCGv_i32 var, int shift)
-{
- TCGv_i32 tmp = tcg_temp_new_i32();
- if (shift)
- tcg_gen_shri_i32(var, var, shift);
- tcg_gen_ext8u_i32(var, var);
- tcg_gen_shli_i32(tmp, var, 8);
- tcg_gen_or_i32(var, var, tmp);
- tcg_gen_shli_i32(tmp, var, 16);
- tcg_gen_or_i32(var, var, tmp);
- tcg_temp_free_i32(tmp);
-}
-
static void gen_neon_dup_low16(TCGv_i32 var)
{
TCGv_i32 tmp = tcg_temp_new_i32();
@@ -XXX,XX +XXX,XX @@ static void gen_neon_dup_high16(TCGv_i32 var)
tcg_temp_free_i32(tmp);
}

-static TCGv_i32 gen_load_and_replicate(DisasContext *s, TCGv_i32 addr, int size)
-{
- /* Load a single Neon element and replicate into a 32 bit TCG reg */
- TCGv_i32 tmp = tcg_temp_new_i32();
- switch (size) {
- case 0:
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
- gen_neon_dup_u8(tmp, 0);
- break;
- case 1:
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
- gen_neon_dup_low16(tmp);
- break;
- case 2:
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
- break;
- default: /* Avoid compiler warnings. */
- abort();
- }
- return tmp;
-}
-
static int handle_vsel(uint32_t insn, uint32_t rd, uint32_t rn, uint32_t rm,
uint32_t dp)
{
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
int load;
int shift;
int n;
+ int vec_size;
TCGv_i32 addr;
TCGv_i32 tmp;
TCGv_i32 tmp2;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
}
addr = tcg_temp_new_i32();
load_reg_var(s, addr, rn);
- if (nregs == 1) {
- /* VLD1 to all lanes: bit 5 indicates how many Dregs to write */
- tmp = gen_load_and_replicate(s, addr, size);
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 0));
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 1));
- if (insn & (1 << 5)) {
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd + 1, 0));
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd + 1, 1));
- }
- tcg_temp_free_i32(tmp);
- } else {
- /* VLD2/3/4 to all lanes: bit 5 indicates register stride */
- stride = (insn & (1 << 5)) ? 2 : 1;
- for (reg = 0; reg < nregs; reg++) {
- tmp = gen_load_and_replicate(s, addr, size);
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 0));
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 1));
- tcg_temp_free_i32(tmp);
- tcg_gen_addi_i32(addr, addr, 1 << size);
- rd += stride;
+
+ /* VLD1 to all lanes: bit 5 indicates how many Dregs to write.
+ * VLD2/3/4 to all lanes: bit 5 indicates register stride.
+ */
+ stride = (insn & (1 << 5)) ? 2 : 1;
+ vec_size = nregs == 1 ? stride * 8 : 8;
+
+ tmp = tcg_temp_new_i32();
+ for (reg = 0; reg < nregs; reg++) {
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
+ s->be_data | size);
+ if ((rd & 1) && vec_size == 16) {
+ /* We cannot write 16 bytes at once because the
+ * destination is unaligned.
+ */
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
+ 8, 8, tmp);
+ tcg_gen_gvec_mov(0, neon_reg_offset(rd + 1, 0),
+ neon_reg_offset(rd, 0), 8, 8);
+ } else {
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
+ vec_size, vec_size, tmp);
}
+ tcg_gen_addi_i32(addr, addr, 1 << size);
+ rd += stride;
}
+ tcg_temp_free_i32(tmp);
tcg_temp_free_i32(addr);
stride = (1 << size) * nregs;
} else {
--
2.19.1
Deleted patch

From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Announce the availability of the various priority queues.
This fixes an issue where guest kernels would fail to
configure secondary queues due to improper feature bits.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20181017213932.19973-2-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/net/cadence_gem.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
int i;
CadenceGEMState *s = CADENCE_GEM(d);
const uint8_t *a;
+ uint32_t queues_mask = 0;

DB_PRINT("\n");

@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
s->regs[GEM_DESCONF] = 0x02500111;
s->regs[GEM_DESCONF2] = 0x2ab13fff;
s->regs[GEM_DESCONF5] = 0x002f2045;
- s->regs[GEM_DESCONF6] = 0x00000200;
+ s->regs[GEM_DESCONF6] = 0x0;
+
+ if (s->num_priority_queues > 1) {
+ queues_mask = MAKE_64BIT_MASK(1, s->num_priority_queues - 1);
+ s->regs[GEM_DESCONF6] |= queues_mask;
+ }

/* Set MAC address */
a = &s->conf.macaddr.a[0];
--
2.19.1
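A worked example (not part of the patch) of the queue mask written to DESCONF6; a local stand-in for QEMU's MAKE_64BIT_MASK(shift, length) macro is used for illustration.

#include <stdint.h>
#include <stdio.h>

/* Same semantics as QEMU's MAKE_64BIT_MASK: 'length' bits from 'shift'. */
static uint64_t make_64bit_mask(unsigned shift, unsigned length)
{
    return ((~0ULL) >> (64 - length)) << shift;
}

int main(void)
{
    unsigned num_priority_queues = 4;
    /* Queue 0 is always present, so only queues 1..3 are flagged. */
    uint64_t queues_mask = make_64bit_mask(1, num_priority_queues - 1);
    printf("0x%llx\n", (unsigned long long)queues_mask);  /* prints 0xe */
    return 0;
}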
Deleted patch

From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Announce 64bit addressing support.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20181017213932.19973-3-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/net/cadence_gem.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -XXX,XX +XXX,XX @@
#define GEM_DESCONF4 (0x0000028C/4)
#define GEM_DESCONF5 (0x00000290/4)
#define GEM_DESCONF6 (0x00000294/4)
+#define GEM_DESCONF6_64B_MASK (1U << 23)
#define GEM_DESCONF7 (0x00000298/4)

#define GEM_INT_Q1_STATUS (0x00000400 / 4)
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
s->regs[GEM_DESCONF] = 0x02500111;
s->regs[GEM_DESCONF2] = 0x2ab13fff;
s->regs[GEM_DESCONF5] = 0x002f2045;
- s->regs[GEM_DESCONF6] = 0x0;
+ s->regs[GEM_DESCONF6] = GEM_DESCONF6_64B_MASK;

if (s->num_priority_queues > 1) {
queues_mask = MAKE_64BIT_MASK(1, s->num_priority_queues - 1);
--
2.19.1