target-arm queue: two bug fixes, plus the KVM/SVE patchset,
which is a new feature but one which was in my pre-softfreeze
pullreq (it just had to be dropped due to an unexpected test failure.)

thanks
-- PMM

The following changes since commit b7c9a7f353c0e260519bf735ff0d4aa01e72784b:

  Merge remote-tracking branch 'remotes/jnsnow/tags/ide-pull-request' into staging (2019-10-31 15:57:30 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20191101-1

for you to fetch changes up to d9ae7624b659362cb2bb2b04fee53bf50829ca56:

  target/arm: Allow reading flags from FPSCR for M-profile (2019-11-01 08:49:10 +0000)

----------------------------------------------------------------
target-arm queue:
 * Support SVE in KVM guests
 * Don't UNDEF on M-profile 'vmrs apsr_nzcv, fpscr'
 * Update hflags after boot.c modifies CPU state

----------------------------------------------------------------
Andrew Jones (9):
      target/arm/monitor: Introduce qmp_query_cpu_model_expansion
      tests: arm: Introduce cpu feature tests
      target/arm: Allow SVE to be disabled via a CPU property
      target/arm/cpu64: max cpu: Introduce sve<N> properties
      target/arm/kvm64: Add kvm_arch_get/put_sve
      target/arm/kvm64: max cpu: Enable SVE when available
      target/arm/kvm: scratch vcpu: Preserve input kvm_vcpu_init features
      target/arm/cpu64: max cpu: Support sve properties with KVM
      target/arm/kvm: host cpu: Add support for sve<N> properties

Christophe Lyon (1):
      target/arm: Allow reading flags from FPSCR for M-profile

Edgar E. Iglesias (1):
      hw/arm/boot: Rebuild hflags when modifying CPUState at boot

 tests/Makefile.include         |   5 +-
 qapi/machine-target.json       |   6 +-
 include/qemu/bitops.h          |   1 +
 target/arm/cpu.h               |  21 ++
 target/arm/kvm_arm.h           |  39 +++
 hw/arm/boot.c                  |   1 +
 target/arm/cpu.c               |  25 +-
 target/arm/cpu64.c             | 364 +++++++++++++++++++++++++--
 target/arm/helper.c            |  10 +-
 target/arm/kvm.c               |  25 +-
 target/arm/kvm32.c             |   6 +-
 target/arm/kvm64.c             | 325 +++++++++++++++++++++---
 target/arm/monitor.c           | 158 ++++++++++++
 target/arm/translate-vfp.inc.c |   5 +-
 tests/arm-cpu-features.c       | 551 +++++++++++++++++++++++++++++++++++++++++
 docs/arm-cpu-features.rst      | 317 ++++++++++++++++++++++++
 16 files changed, 1795 insertions(+), 64 deletions(-)
 create mode 100644 tests/arm-cpu-features.c
 create mode 100644 docs/arm-cpu-features.rst

From: Andrew Jones <drjones@redhat.com>

Add support for the query-cpu-model-expansion QMP command to Arm. We
do this selectively, only exposing CPU properties which represent
optional CPU features which the user may want to enable/disable.
Additionally we restrict the list of queryable cpu models to 'max',
'host', or the current type when KVM is in use. And, finally, we only
implement expansion type 'full', as Arm does not yet have a "base"
CPU type. More details and example queries are described in a new
document (docs/arm-cpu-features.rst).

Note, certainly more features may be added to the list of advertised
features, e.g. 'vfp' and 'neon'. The only requirement is that we can
detect invalid configurations and emit failures at QMP query time.
For 'vfp' and 'neon' this will require some refactoring to share a
validation function between the QMP query and the CPU realize
functions.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Beata Michalska <beata.michalska@linaro.org>
Message-id: 20191031142734.8590-2-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 qapi/machine-target.json  |   6 +-
 target/arm/monitor.c      | 146 ++++++++++++++++++++++++++++++++++++++
 docs/arm-cpu-features.rst | 137 +++++++++++++++++++++++++++++++++++
 3 files changed, 286 insertions(+), 3 deletions(-)
 create mode 100644 docs/arm-cpu-features.rst

diff --git a/qapi/machine-target.json b/qapi/machine-target.json
49
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/translate.c
34
--- a/qapi/machine-target.json
51
+++ b/target/arm/translate.c
35
+++ b/qapi/machine-target.json
52
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx(DisasContext *s, TCGv_i32 var)
36
@@ -XXX,XX +XXX,XX @@
53
store_cpu_field(var, thumb);
37
##
38
{ 'struct': 'CpuModelExpansionInfo',
39
'data': { 'model': 'CpuModelInfo' },
40
- 'if': 'defined(TARGET_S390X) || defined(TARGET_I386)' }
41
+ 'if': 'defined(TARGET_S390X) || defined(TARGET_I386) || defined(TARGET_ARM)' }
42
43
##
44
# @query-cpu-model-expansion:
45
@@ -XXX,XX +XXX,XX @@
46
# query-cpu-model-expansion while using these is not advised.
47
#
48
# Some architectures may not support all expansion types. s390x supports
49
-# "full" and "static".
50
+# "full" and "static". Arm only supports "full".
51
#
52
# Returns: a CpuModelExpansionInfo. Returns an error if expanding CPU models is
53
# not supported, if the model cannot be expanded, if the model contains
54
@@ -XXX,XX +XXX,XX @@
55
'data': { 'type': 'CpuModelExpansionType',
56
'model': 'CpuModelInfo' },
57
'returns': 'CpuModelExpansionInfo',
58
- 'if': 'defined(TARGET_S390X) || defined(TARGET_I386)' }
59
+ 'if': 'defined(TARGET_S390X) || defined(TARGET_I386) || defined(TARGET_ARM)' }
60
61
##
62
# @CpuDefinitionInfo:
63
diff --git a/target/arm/monitor.c b/target/arm/monitor.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/target/arm/monitor.c
66
+++ b/target/arm/monitor.c
67
@@ -XXX,XX +XXX,XX @@
68
*/
69
70
#include "qemu/osdep.h"
71
+#include "hw/boards.h"
72
#include "kvm_arm.h"
73
+#include "qapi/error.h"
74
+#include "qapi/visitor.h"
75
+#include "qapi/qobject-input-visitor.h"
76
+#include "qapi/qapi-commands-machine-target.h"
77
#include "qapi/qapi-commands-misc-target.h"
78
+#include "qapi/qmp/qerror.h"
79
+#include "qapi/qmp/qdict.h"
80
+#include "qom/qom-qobject.h"
81
82
static GICCapability *gic_cap_new(int version)
83
{
84
@@ -XXX,XX +XXX,XX @@ GICCapabilityList *qmp_query_gic_capabilities(Error **errp)
85
86
return head;
54
}
87
}
55
88
+
56
-/* Set PC and Thumb state from var. var is marked as dead.
57
+/*
89
+/*
58
+ * Set PC and Thumb state from var. var is marked as dead.
90
+ * These are cpu model features we want to advertise. The order here
59
* For M-profile CPUs, include logic to detect exception-return
91
+ * matters as this is the order in which qmp_query_cpu_model_expansion
60
* branches and handle them. This is needed for Thumb POP/LDM to PC, LDR to PC,
92
+ * will attempt to set them. If there are dependencies between features,
61
* and BX reg, and no others, and happens only for code in Handler mode.
93
+ * then the order that considers those dependencies must be used.
62
+ * The Security Extension also requires us to check for the FNC_RETURN
94
+ */
63
+ * which signals a function return from non-secure state; this can happen
95
+static const char *cpu_model_advertised_features[] = {
64
+ * in both Handler and Thread mode.
96
+ "aarch64", "pmu",
65
+ * To avoid having to do multiple comparisons in inline generated code,
97
+ NULL
66
+ * we make the check we do here loose, so it will match for EXC_RETURN
98
+};
67
+ * in Thread mode. For system emulation do_v7m_exception_exit() checks
99
+
68
+ * for these spurious cases and returns without doing anything (giving
100
+CpuModelExpansionInfo *qmp_query_cpu_model_expansion(CpuModelExpansionType type,
69
+ * the same behaviour as for a branch to a non-magic address).
101
+ CpuModelInfo *model,
70
+ *
102
+ Error **errp)
71
+ * In linux-user mode it is unclear what the right behaviour for an
103
+{
72
+ * attempted FNC_RETURN should be, because in real hardware this will go
104
+ CpuModelExpansionInfo *expansion_info;
73
+ * directly to Secure code (ie not the Linux kernel) which will then treat
105
+ const QDict *qdict_in = NULL;
74
+ * the error in any way it chooses. For QEMU we opt to make the FNC_RETURN
106
+ QDict *qdict_out;
75
+ * attempt behave the way it would on a CPU without the security extension,
107
+ ObjectClass *oc;
76
+ * which is to say "like a normal branch". That means we can simply treat
108
+ Object *obj;
77
+ * all branches as normal with no magic address behaviour.
109
+ const char *name;
78
*/
110
+ int i;
79
static inline void gen_bx_excret(DisasContext *s, TCGv_i32 var)
111
+
80
{
112
+ if (type != CPU_MODEL_EXPANSION_TYPE_FULL) {
81
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx_excret(DisasContext *s, TCGv_i32 var)
113
+ error_setg(errp, "The requested expansion type is not supported");
82
* s->base.is_jmp that we need to do the rest of the work later.
114
+ return NULL;
83
*/
115
+ }
84
gen_bx(s, var);
116
+
85
+#ifndef CONFIG_USER_ONLY
117
+ if (!kvm_enabled() && !strcmp(model->name, "host")) {
86
if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY) ||
118
+ error_setg(errp, "The CPU type '%s' requires KVM", model->name);
87
(s->v7m_handler_mode && arm_dc_feature(s, ARM_FEATURE_M))) {
119
+ return NULL;
88
s->base.is_jmp = DISAS_BX_EXCRET;
120
+ }
89
}
121
+
90
+#endif
122
+ oc = cpu_class_by_name(TYPE_ARM_CPU, model->name);
91
}
123
+ if (!oc) {
92
124
+ error_setg(errp, "The CPU type '%s' is not a recognized ARM CPU type",
93
static inline void gen_bx_excret_final_code(DisasContext *s)
125
+ model->name);
126
+ return NULL;
127
+ }
128
+
129
+ if (kvm_enabled()) {
130
+ const char *cpu_type = current_machine->cpu_type;
131
+ int len = strlen(cpu_type) - strlen(ARM_CPU_TYPE_SUFFIX);
132
+ bool supported = false;
133
+
134
+ if (!strcmp(model->name, "host") || !strcmp(model->name, "max")) {
135
+ /* These are kvmarm's recommended cpu types */
136
+ supported = true;
137
+ } else if (strlen(model->name) == len &&
138
+ !strncmp(model->name, cpu_type, len)) {
139
+ /* KVM is enabled and we're using this type, so it works. */
140
+ supported = true;
141
+ }
142
+ if (!supported) {
143
+ error_setg(errp, "We cannot guarantee the CPU type '%s' works "
144
+ "with KVM on this host", model->name);
145
+ return NULL;
146
+ }
147
+ }
148
+
149
+ if (model->props) {
150
+ qdict_in = qobject_to(QDict, model->props);
151
+ if (!qdict_in) {
152
+ error_setg(errp, QERR_INVALID_PARAMETER_TYPE, "props", "dict");
153
+ return NULL;
154
+ }
155
+ }
156
+
157
+ obj = object_new(object_class_get_name(oc));
158
+
159
+ if (qdict_in) {
160
+ Visitor *visitor;
161
+ Error *err = NULL;
162
+
163
+ visitor = qobject_input_visitor_new(model->props);
164
+ visit_start_struct(visitor, NULL, NULL, 0, &err);
165
+ if (err) {
166
+ visit_free(visitor);
167
+ object_unref(obj);
168
+ error_propagate(errp, err);
169
+ return NULL;
170
+ }
171
+
172
+ i = 0;
173
+ while ((name = cpu_model_advertised_features[i++]) != NULL) {
174
+ if (qdict_get(qdict_in, name)) {
175
+ object_property_set(obj, visitor, name, &err);
176
+ if (err) {
177
+ break;
178
+ }
179
+ }
180
+ }
181
+
182
+ if (!err) {
183
+ visit_check_struct(visitor, &err);
184
+ }
185
+ visit_end_struct(visitor, NULL);
186
+ visit_free(visitor);
187
+ if (err) {
188
+ object_unref(obj);
189
+ error_propagate(errp, err);
190
+ return NULL;
191
+ }
192
+ }
193
+
194
+ expansion_info = g_new0(CpuModelExpansionInfo, 1);
195
+ expansion_info->model = g_malloc0(sizeof(*expansion_info->model));
196
+ expansion_info->model->name = g_strdup(model->name);
197
+
198
+ qdict_out = qdict_new();
199
+
200
+ i = 0;
201
+ while ((name = cpu_model_advertised_features[i++]) != NULL) {
202
+ ObjectProperty *prop = object_property_find(obj, name, NULL);
203
+ if (prop) {
204
+ Error *err = NULL;
205
+ QObject *value;
206
+
207
+ assert(prop->get);
208
+ value = object_property_get_qobject(obj, name, &err);
209
+ assert(!err);
210
+
211
+ qdict_put_obj(qdict_out, name, value);
212
+ }
213
+ }
214
+
215
+ if (!qdict_size(qdict_out)) {
216
+ qobject_unref(qdict_out);
217
+ } else {
218
+ expansion_info->model->props = QOBJECT(qdict_out);
219
+ expansion_info->model->has_props = true;
220
+ }
221
+
222
+ object_unref(obj);
223
+
224
+ return expansion_info;
225
+}
226
diff --git a/docs/arm-cpu-features.rst b/docs/arm-cpu-features.rst
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/docs/arm-cpu-features.rst
@@ -XXX,XX +XXX,XX @@
+================
+ARM CPU Features
+================
+
+Examples of probing and using ARM CPU features
+
+Introduction
+============
+
+CPU features are optional features that a CPU of supporting type may
+choose to implement or not. In QEMU, optional CPU features have
+corresponding boolean CPU properties that, when enabled, indicate
+that the feature is implemented, and, conversely, when disabled,
+indicate that it is not implemented. An example of an ARM CPU feature
+is the Performance Monitoring Unit (PMU). CPU types such as the
+Cortex-A15 and the Cortex-A57, which respectively implement ARM
+architecture reference manuals ARMv7-A and ARMv8-A, may both optionally
+implement PMUs. For example, if a user wants to use a Cortex-A15 without
+a PMU, then the `-cpu` parameter should contain `pmu=off` on the QEMU
+command line, i.e. `-cpu cortex-a15,pmu=off`.
+
+As not all CPU types support all optional CPU features, then whether or
+not a CPU property exists depends on the CPU type. For example, CPUs
+that implement the ARMv8-A architecture reference manual may optionally
+support the AArch32 CPU feature, which may be enabled by disabling the
+`aarch64` CPU property. A CPU type such as the Cortex-A15, which does
+not implement ARMv8-A, will not have the `aarch64` CPU property.
+
+QEMU's support may be limited for some CPU features, only partially
+supporting the feature or only supporting the feature under certain
+configurations. For example, the `aarch64` CPU feature, which, when
+disabled, enables the optional AArch32 CPU feature, is only supported
+when using the KVM accelerator and when running on a host CPU type that
+supports the feature.
+
+CPU Feature Probing
+===================
+
+Determining which CPU features are available and functional for a given
+CPU type is possible with the `query-cpu-model-expansion` QMP command.
+Below are some examples where `scripts/qmp/qmp-shell` (see the top comment
+block in the script for usage) is used to issue the QMP commands.
+
+(1) Determine which CPU features are available for the `max` CPU type
+    (Note, we started QEMU with qemu-system-aarch64, so `max` is
+    implementing the ARMv8-A reference manual in this case)::
+
+      (QEMU) query-cpu-model-expansion type=full model={"name":"max"}
+      { "return": {
+        "model": { "name": "max", "props": {
+          "pmu": true, "aarch64": true
+        }}}}
+
+We see that the `max` CPU type has the `pmu` and `aarch64` CPU features.
+We also see that the CPU features are enabled, as they are all `true`.
+
+(2) Let's try to disable the PMU::
+
+      (QEMU) query-cpu-model-expansion type=full model={"name":"max","props":{"pmu":false}}
+      { "return": {
+        "model": { "name": "max", "props": {
+          "pmu": false, "aarch64": true
+        }}}}
+
+We see it worked, as `pmu` is now `false`.
+
+(3) Let's try to disable `aarch64`, which enables the AArch32 CPU feature::
+
+      (QEMU) query-cpu-model-expansion type=full model={"name":"max","props":{"aarch64":false}}
+      {"error": {
+       "class": "GenericError", "desc":
+       "'aarch64' feature cannot be disabled unless KVM is enabled and 32-bit EL1 is supported"
+      }}
+
+It looks like this feature is limited to a configuration we do not
+currently have.
+
+(4) Let's try probing CPU features for the Cortex-A15 CPU type::
+
+      (QEMU) query-cpu-model-expansion type=full model={"name":"cortex-a15"}
+      {"return": {"model": {"name": "cortex-a15", "props": {"pmu": true}}}}
+
+Only the `pmu` CPU feature is available.
+
+A note about CPU feature dependencies
+-------------------------------------
+
+It's possible for features to have dependencies on other features. I.e.
+it may be possible to change one feature at a time without error, but
+when attempting to change all features at once an error could occur
+depending on the order they are processed. It's also possible changing
+all at once doesn't generate an error, because a feature's dependencies
+are satisfied with other features, but the same feature cannot be changed
+independently without error. For these reasons callers should always
+attempt to make their desired changes all at once in order to ensure the
+collection is valid.
+
+A note about CPU models and KVM
+-------------------------------
+
+Named CPU models generally do not work with KVM. There are a few cases
+that do work, e.g. using the named CPU model `cortex-a57` with KVM on a
+seattle host, but mostly if KVM is enabled the `host` CPU type must be
+used. This means the guest is provided all the same CPU features as the
+host CPU type has. And, for this reason, the `host` CPU type should
+enable all CPU features that the host has by default. Indeed it's even
+a bit strange to allow disabling CPU features that the host has when using
+the `host` CPU type, but in the absence of CPU models it's the best we can
+do if we want to launch guests without all the host's CPU features enabled.
+
+Enabling KVM also affects the `query-cpu-model-expansion` QMP command. The
+effect is not only limited to specific features, as pointed out in example
+(3) of "CPU Feature Probing", but also to which CPU types may be expanded.
+When KVM is enabled, only the `max`, `host`, and current CPU type may be
+expanded. This restriction is necessary as it's not possible to know all
+CPU types that may work with KVM, but it does impose a small risk of users
+experiencing unexpected errors. For example on a seattle, as mentioned
+above, the `cortex-a57` CPU type is also valid when KVM is enabled.
+Therefore a user could use the `host` CPU type for the current type, but
+then attempt to query `cortex-a57`, however that query will fail with our
+restrictions. This shouldn't be an issue though as management layers and
+users have been preferring the `host` CPU type for use with KVM for quite
+some time. Additionally, if the KVM-enabled QEMU instance running on a
+seattle host is using the `cortex-a57` CPU type, then querying `cortex-a57`
+will work.
+
+Using CPU Features
+==================
+
+After determining which CPU features are available and supported for a
+given CPU type, then they may be selectively enabled or disabled on the
+QEMU command line with that CPU type::
+
+  $ qemu-system-aarch64 -M virt -cpu max,pmu=off
+
+The example above disables the PMU for the `max` CPU type.
--
2.20.1

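For anyone who wants to try the new command by hand, the probing flow described in the docs/arm-cpu-features.rst file added above can be driven from scripts/qmp/qmp-shell. A minimal session sketch, following the document's own examples (the /tmp/qmp.sock path is arbitrary, and the reported properties depend on the build and accelerator):

  $ qemu-system-aarch64 -M virt -cpu max -S -qmp unix:/tmp/qmp.sock,server,nowait &
  $ scripts/qmp/qmp-shell /tmp/qmp.sock
  (QEMU) query-cpu-model-expansion type=full model={"name":"max"}
  {"return": {"model": {"name": "max", "props": {"aarch64": true, "pmu": true}}}}
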
From: Andrew Jones <drjones@redhat.com>

Now that Arm CPUs have advertised features, let's add tests to ensure
we maintain their expected availability with and without KVM.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20191031142734.8590-3-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/Makefile.include   |   5 +-
 tests/arm-cpu-features.c | 253 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 257 insertions(+), 1 deletion(-)
 create mode 100644 tests/arm-cpu-features.c

diff --git a/tests/Makefile.include b/tests/Makefile.include
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
15
--- a/include/exec/cpu-defs.h
18
--- a/tests/Makefile.include
16
+++ b/include/exec/cpu-defs.h
19
+++ b/tests/Makefile.include
17
@@ -XXX,XX +XXX,XX @@ typedef struct CPUTLB { } CPUTLB;
20
@@ -XXX,XX +XXX,XX @@ check-qtest-sparc64-$(CONFIG_ISA_TESTDEV) = tests/endianness-test$(EXESUF)
18
#endif /* !CONFIG_USER_ONLY && CONFIG_TCG */
21
check-qtest-sparc64-y += tests/prom-env-test$(EXESUF)
19
22
check-qtest-sparc64-y += tests/boot-serial-test$(EXESUF)
20
/*
23
21
- * This structure must be placed in ArchCPU immedately
24
+check-qtest-arm-y += tests/arm-cpu-features$(EXESUF)
22
+ * This structure must be placed in ArchCPU immediately
25
check-qtest-arm-y += tests/microbit-test$(EXESUF)
23
* before CPUArchState, as a field named "neg".
26
check-qtest-arm-y += tests/m25p80-test$(EXESUF)
24
*/
27
check-qtest-arm-y += tests/test-arm-mptimer$(EXESUF)
25
typedef struct CPUNegativeOffsetState {
28
@@ -XXX,XX +XXX,XX @@ check-qtest-arm-y += tests/boot-serial-test$(EXESUF)
29
check-qtest-arm-y += tests/hexloader-test$(EXESUF)
30
check-qtest-arm-$(CONFIG_PFLASH_CFI02) += tests/pflash-cfi02-test$(EXESUF)
31
32
-check-qtest-aarch64-y = tests/numa-test$(EXESUF)
33
+check-qtest-aarch64-y += tests/arm-cpu-features$(EXESUF)
34
+check-qtest-aarch64-y += tests/numa-test$(EXESUF)
35
check-qtest-aarch64-y += tests/boot-serial-test$(EXESUF)
36
check-qtest-aarch64-y += tests/migration-test$(EXESUF)
37
# TODO: once aarch64 TCG is fixed on ARM 32 bit host, make test unconditional
38
@@ -XXX,XX +XXX,XX @@ tests/test-qapi-util$(EXESUF): tests/test-qapi-util.o $(test-util-obj-y)
39
tests/numa-test$(EXESUF): tests/numa-test.o
40
tests/vmgenid-test$(EXESUF): tests/vmgenid-test.o tests/boot-sector.o tests/acpi-utils.o
41
tests/cdrom-test$(EXESUF): tests/cdrom-test.o tests/boot-sector.o $(libqos-obj-y)
42
+tests/arm-cpu-features$(EXESUF): tests/arm-cpu-features.o
43
44
tests/migration/stress$(EXESUF): tests/migration/stress.o
45
    $(call quiet-command, $(LINKPROG) -static -O3 $(PTHREAD_LIB) -o $@ $< ,"LINK","$(TARGET_DIR)$@")
46
diff --git a/tests/arm-cpu-features.c b/tests/arm-cpu-features.c
47
new file mode 100644
48
index XXXXXXX..XXXXXXX
49
--- /dev/null
50
+++ b/tests/arm-cpu-features.c
51
@@ -XXX,XX +XXX,XX @@
52
+/*
53
+ * Arm CPU feature test cases
54
+ *
55
+ * Copyright (c) 2019 Red Hat Inc.
56
+ * Authors:
57
+ * Andrew Jones <drjones@redhat.com>
58
+ *
59
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
60
+ * See the COPYING file in the top-level directory.
61
+ */
62
+#include "qemu/osdep.h"
63
+#include "libqtest.h"
64
+#include "qapi/qmp/qdict.h"
65
+#include "qapi/qmp/qjson.h"
66
+
67
+#define MACHINE "-machine virt,gic-version=max,accel=tcg "
68
+#define MACHINE_KVM "-machine virt,gic-version=max,accel=kvm:tcg "
69
+#define QUERY_HEAD "{ 'execute': 'query-cpu-model-expansion', " \
70
+ " 'arguments': { 'type': 'full', "
71
+#define QUERY_TAIL "}}"
72
+
73
+static bool kvm_enabled(QTestState *qts)
74
+{
75
+ QDict *resp, *qdict;
76
+ bool enabled;
77
+
78
+ resp = qtest_qmp(qts, "{ 'execute': 'query-kvm' }");
79
+ g_assert(qdict_haskey(resp, "return"));
80
+ qdict = qdict_get_qdict(resp, "return");
81
+ g_assert(qdict_haskey(qdict, "enabled"));
82
+ enabled = qdict_get_bool(qdict, "enabled");
83
+ qobject_unref(resp);
84
+
85
+ return enabled;
86
+}
87
+
88
+static QDict *do_query_no_props(QTestState *qts, const char *cpu_type)
89
+{
90
+ return qtest_qmp(qts, QUERY_HEAD "'model': { 'name': %s }"
91
+ QUERY_TAIL, cpu_type);
92
+}
93
+
94
+static QDict *do_query(QTestState *qts, const char *cpu_type,
95
+ const char *fmt, ...)
96
+{
97
+ QDict *resp;
98
+
99
+ if (fmt) {
100
+ QDict *args;
101
+ va_list ap;
102
+
103
+ va_start(ap, fmt);
104
+ args = qdict_from_vjsonf_nofail(fmt, ap);
105
+ va_end(ap);
106
+
107
+ resp = qtest_qmp(qts, QUERY_HEAD "'model': { 'name': %s, "
108
+ "'props': %p }"
109
+ QUERY_TAIL, cpu_type, args);
110
+ } else {
111
+ resp = do_query_no_props(qts, cpu_type);
112
+ }
113
+
114
+ return resp;
115
+}
116
+
117
+static const char *resp_get_error(QDict *resp)
118
+{
119
+ QDict *qdict;
120
+
121
+ g_assert(resp);
122
+
123
+ qdict = qdict_get_qdict(resp, "error");
124
+ if (qdict) {
125
+ return qdict_get_str(qdict, "desc");
126
+ }
127
+
128
+ return NULL;
129
+}
130
+
131
+#define assert_error(qts, cpu_type, expected_error, fmt, ...) \
132
+({ \
133
+ QDict *_resp; \
134
+ const char *_error; \
135
+ \
136
+ _resp = do_query(qts, cpu_type, fmt, ##__VA_ARGS__); \
137
+ g_assert(_resp); \
138
+ _error = resp_get_error(_resp); \
139
+ g_assert(_error); \
140
+ g_assert(g_str_equal(_error, expected_error)); \
141
+ qobject_unref(_resp); \
142
+})
143
+
144
+static bool resp_has_props(QDict *resp)
145
+{
146
+ QDict *qdict;
147
+
148
+ g_assert(resp);
149
+
150
+ if (!qdict_haskey(resp, "return")) {
151
+ return false;
152
+ }
153
+ qdict = qdict_get_qdict(resp, "return");
154
+
155
+ if (!qdict_haskey(qdict, "model")) {
156
+ return false;
157
+ }
158
+ qdict = qdict_get_qdict(qdict, "model");
159
+
160
+ return qdict_haskey(qdict, "props");
161
+}
162
+
163
+static QDict *resp_get_props(QDict *resp)
164
+{
165
+ QDict *qdict;
166
+
167
+ g_assert(resp);
168
+ g_assert(resp_has_props(resp));
169
+
170
+ qdict = qdict_get_qdict(resp, "return");
171
+ qdict = qdict_get_qdict(qdict, "model");
172
+ qdict = qdict_get_qdict(qdict, "props");
173
+
174
+ return qdict;
175
+}
176
+
177
+#define assert_has_feature(qts, cpu_type, feature) \
178
+({ \
179
+ QDict *_resp = do_query_no_props(qts, cpu_type); \
180
+ g_assert(_resp); \
181
+ g_assert(resp_has_props(_resp)); \
182
+ g_assert(qdict_get(resp_get_props(_resp), feature)); \
183
+ qobject_unref(_resp); \
184
+})
185
+
186
+#define assert_has_not_feature(qts, cpu_type, feature) \
187
+({ \
188
+ QDict *_resp = do_query_no_props(qts, cpu_type); \
189
+ g_assert(_resp); \
190
+ g_assert(!resp_has_props(_resp) || \
191
+ !qdict_get(resp_get_props(_resp), feature)); \
192
+ qobject_unref(_resp); \
193
+})
194
+
195
+static void assert_type_full(QTestState *qts)
196
+{
197
+ const char *error;
198
+ QDict *resp;
199
+
200
+ resp = qtest_qmp(qts, "{ 'execute': 'query-cpu-model-expansion', "
201
+ "'arguments': { 'type': 'static', "
202
+ "'model': { 'name': 'foo' }}}");
203
+ g_assert(resp);
204
+ error = resp_get_error(resp);
205
+ g_assert(error);
206
+ g_assert(g_str_equal(error,
207
+ "The requested expansion type is not supported"));
208
+ qobject_unref(resp);
209
+}
210
+
211
+static void assert_bad_props(QTestState *qts, const char *cpu_type)
212
+{
213
+ const char *error;
214
+ QDict *resp;
215
+
216
+ resp = qtest_qmp(qts, "{ 'execute': 'query-cpu-model-expansion', "
217
+ "'arguments': { 'type': 'full', "
218
+ "'model': { 'name': %s, "
219
+ "'props': false }}}",
220
+ cpu_type);
221
+ g_assert(resp);
222
+ error = resp_get_error(resp);
223
+ g_assert(error);
224
+ g_assert(g_str_equal(error,
225
+ "Invalid parameter type for 'props', expected: dict"));
226
+ qobject_unref(resp);
227
+}
228
+
229
+static void test_query_cpu_model_expansion(const void *data)
230
+{
231
+ QTestState *qts;
232
+
233
+ qts = qtest_init(MACHINE "-cpu max");
234
+
235
+ /* Test common query-cpu-model-expansion input validation */
236
+ assert_type_full(qts);
237
+ assert_bad_props(qts, "max");
238
+ assert_error(qts, "foo", "The CPU type 'foo' is not a recognized "
239
+ "ARM CPU type", NULL);
240
+ assert_error(qts, "max", "Parameter 'not-a-prop' is unexpected",
241
+ "{ 'not-a-prop': false }");
242
+ assert_error(qts, "host", "The CPU type 'host' requires KVM", NULL);
243
+
244
+ /* Test expected feature presence/absence for some cpu types */
245
+ assert_has_feature(qts, "max", "pmu");
246
+ assert_has_feature(qts, "cortex-a15", "pmu");
247
+ assert_has_not_feature(qts, "cortex-a15", "aarch64");
248
+
249
+ if (g_str_equal(qtest_get_arch(), "aarch64")) {
250
+ assert_has_feature(qts, "max", "aarch64");
251
+ assert_has_feature(qts, "cortex-a57", "pmu");
252
+ assert_has_feature(qts, "cortex-a57", "aarch64");
253
+
254
+ /* Test that features that depend on KVM generate errors without. */
255
+ assert_error(qts, "max",
256
+ "'aarch64' feature cannot be disabled "
257
+ "unless KVM is enabled and 32-bit EL1 "
258
+ "is supported",
259
+ "{ 'aarch64': false }");
260
+ }
261
+
262
+ qtest_quit(qts);
263
+}
264
+
265
+static void test_query_cpu_model_expansion_kvm(const void *data)
266
+{
267
+ QTestState *qts;
268
+
269
+ qts = qtest_init(MACHINE_KVM "-cpu max");
270
+
271
+ /*
272
+ * These tests target the 'host' CPU type, so KVM must be enabled.
273
+ */
274
+ if (!kvm_enabled(qts)) {
275
+ qtest_quit(qts);
276
+ return;
277
+ }
278
+
279
+ if (g_str_equal(qtest_get_arch(), "aarch64")) {
280
+ assert_has_feature(qts, "host", "aarch64");
281
+ assert_has_feature(qts, "host", "pmu");
282
+
283
+ assert_error(qts, "cortex-a15",
284
+ "We cannot guarantee the CPU type 'cortex-a15' works "
285
+ "with KVM on this host", NULL);
286
+ } else {
287
+ assert_has_not_feature(qts, "host", "aarch64");
288
+ assert_has_not_feature(qts, "host", "pmu");
289
+ }
290
+
291
+ qtest_quit(qts);
292
+}
293
+
294
+int main(int argc, char **argv)
295
+{
296
+ g_test_init(&argc, &argv, NULL);
297
+
298
+ qtest_add_data_func("/arm/query-cpu-model-expansion",
299
+ NULL, test_query_cpu_model_expansion);
300
+ qtest_add_data_func("/arm/kvm/query-cpu-model-expansion",
301
+ NULL, test_query_cpu_model_expansion_kvm);
302
+
303
+ return g_test_run();
304
+}
26
--
305
--
27
2.20.1
306
2.20.1
28
307
29
308
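A usage note for the new test (the exact paths are an assumption for an in-tree build; adjust for an out-of-tree build directory): the tests/Makefile.include hunk above wires tests/arm-cpu-features into the existing qtest groups, so it runs as part of the normal check targets, and it can also be run on its own with the usual QTEST_QEMU_BINARY environment variable:

  $ make check-qtest-aarch64
  $ QTEST_QEMU_BINARY=aarch64-softmmu/qemu-system-aarch64 tests/arm-cpu-features
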
From: Andrew Jones <drjones@redhat.com>

Since 97a28b0eeac14 ("target/arm: Allow VFP and Neon to be disabled via
a CPU property") we can disable the 'max' cpu model's VFP and neon
features, but there's no way to disable SVE. Add the 'sve=on|off'
property to give it that flexibility. We also rename
cpu_max_get/set_sve_vq to cpu_max_get/set_sve_max_vq in order for them
to follow the typical *_get/set_<property-name> pattern.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Reviewed-by: Beata Michalska <beata.michalska@linaro.org>
Message-id: 20191031142734.8590-4-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c         |  3 ++-
 target/arm/cpu64.c       | 52 ++++++++++++++++++++++++++++++++++------
 target/arm/monitor.c     |  2 +-
 tests/arm-cpu-features.c |  1 +
 4 files changed, 49 insertions(+), 9 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
18
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
19
--- a/tcg/README
26
--- a/target/arm/cpu.c
20
+++ b/tcg/README
27
+++ b/target/arm/cpu.c
21
@@ -XXX,XX +XXX,XX @@ This can be overridden using the following function modifiers:
28
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
22
canonical locations before calling the helper.
29
env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 16, 2, 3);
23
- TCG_CALL_NO_WRITE_GLOBALS means that the helper does not modify any globals.
30
env->cp15.cptr_el[3] |= CPTR_EZ;
24
They will only be saved to their canonical location before calling helpers,
31
/* with maximum vector length */
25
- but they won't be reloaded afterwise.
32
- env->vfp.zcr_el[1] = cpu->sve_max_vq - 1;
26
+ but they won't be reloaded afterwards.
33
+ env->vfp.zcr_el[1] = cpu_isar_feature(aa64_sve, cpu) ?
27
- TCG_CALL_NO_SIDE_EFFECTS means that the call to the function is removed if
34
+ cpu->sve_max_vq - 1 : 0;
28
the return value is not used.
35
env->vfp.zcr_el[2] = env->vfp.zcr_el[1];
36
env->vfp.zcr_el[3] = env->vfp.zcr_el[1];
37
/*
38
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/cpu64.c
41
+++ b/target/arm/cpu64.c
42
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
43
define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
44
}
45
46
-static void cpu_max_get_sve_vq(Object *obj, Visitor *v, const char *name,
47
- void *opaque, Error **errp)
48
+static void cpu_max_get_sve_max_vq(Object *obj, Visitor *v, const char *name,
49
+ void *opaque, Error **errp)
50
{
51
ARMCPU *cpu = ARM_CPU(obj);
52
- visit_type_uint32(v, name, &cpu->sve_max_vq, errp);
53
+ uint32_t value;
54
+
55
+ /* All vector lengths are disabled when SVE is off. */
56
+ if (!cpu_isar_feature(aa64_sve, cpu)) {
57
+ value = 0;
58
+ } else {
59
+ value = cpu->sve_max_vq;
60
+ }
61
+ visit_type_uint32(v, name, &value, errp);
62
}
63
64
-static void cpu_max_set_sve_vq(Object *obj, Visitor *v, const char *name,
65
- void *opaque, Error **errp)
66
+static void cpu_max_set_sve_max_vq(Object *obj, Visitor *v, const char *name,
67
+ void *opaque, Error **errp)
68
{
69
ARMCPU *cpu = ARM_CPU(obj);
70
Error *err = NULL;
71
@@ -XXX,XX +XXX,XX @@ static void cpu_max_set_sve_vq(Object *obj, Visitor *v, const char *name,
72
error_propagate(errp, err);
73
}
74
75
+static void cpu_arm_get_sve(Object *obj, Visitor *v, const char *name,
76
+ void *opaque, Error **errp)
77
+{
78
+ ARMCPU *cpu = ARM_CPU(obj);
79
+ bool value = cpu_isar_feature(aa64_sve, cpu);
80
+
81
+ visit_type_bool(v, name, &value, errp);
82
+}
83
+
84
+static void cpu_arm_set_sve(Object *obj, Visitor *v, const char *name,
85
+ void *opaque, Error **errp)
86
+{
87
+ ARMCPU *cpu = ARM_CPU(obj);
88
+ Error *err = NULL;
89
+ bool value;
90
+ uint64_t t;
91
+
92
+ visit_type_bool(v, name, &value, &err);
93
+ if (err) {
94
+ error_propagate(errp, err);
95
+ return;
96
+ }
97
+
98
+ t = cpu->isar.id_aa64pfr0;
99
+ t = FIELD_DP64(t, ID_AA64PFR0, SVE, value);
100
+ cpu->isar.id_aa64pfr0 = t;
101
+}
102
+
103
/* -cpu max: if KVM is enabled, like -cpu host (best possible with this host);
104
* otherwise, a CPU with as many features enabled as our emulation supports.
105
* The version of '-cpu max' for qemu-system-arm is defined in cpu.c;
106
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
107
#endif
108
109
cpu->sve_max_vq = ARM_MAX_VQ;
110
- object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_vq,
111
- cpu_max_set_sve_vq, NULL, NULL, &error_fatal);
112
+ object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq,
113
+ cpu_max_set_sve_max_vq, NULL, NULL, &error_fatal);
114
+ object_property_add(obj, "sve", "bool", cpu_arm_get_sve,
115
+ cpu_arm_set_sve, NULL, NULL, &error_fatal);
116
}
117
}
118
119
diff --git a/target/arm/monitor.c b/target/arm/monitor.c
120
index XXXXXXX..XXXXXXX 100644
121
--- a/target/arm/monitor.c
122
+++ b/target/arm/monitor.c
123
@@ -XXX,XX +XXX,XX @@ GICCapabilityList *qmp_query_gic_capabilities(Error **errp)
124
* then the order that considers those dependencies must be used.
125
*/
126
static const char *cpu_model_advertised_features[] = {
127
- "aarch64", "pmu",
128
+ "aarch64", "pmu", "sve",
129
NULL
130
};
131
132
diff --git a/tests/arm-cpu-features.c b/tests/arm-cpu-features.c
133
index XXXXXXX..XXXXXXX 100644
134
--- a/tests/arm-cpu-features.c
135
+++ b/tests/arm-cpu-features.c
136
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion(const void *data)
137
138
if (g_str_equal(qtest_get_arch(), "aarch64")) {
139
assert_has_feature(qts, "max", "aarch64");
140
+ assert_has_feature(qts, "max", "sve");
141
assert_has_feature(qts, "cortex-a57", "pmu");
142
assert_has_feature(qts, "cortex-a57", "aarch64");
29
143
30
--
144
--
31
2.20.1
145
2.20.1
32
146
33
147
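To show what the new property looks like from the user's side, here are two independent one-liners that mirror the pmu examples in docs/arm-cpu-features.rst: the first disables SVE for the 'max' CPU on the command line, and the second probes a default 'max' CPU to see that 'sve' is now advertised alongside 'aarch64' and 'pmu'. This is only a sketch; the reported values depend on the configuration:

  $ qemu-system-aarch64 -M virt -cpu max,sve=off
  (QEMU) query-cpu-model-expansion type=full model={"name":"max"}
  {"return": {"model": {"name": "max", "props": {"aarch64": true, "pmu": true, "sve": true}}}}
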
From: Andrew Jones <drjones@redhat.com>

Introduce cpu properties to give fine control over SVE vector lengths.
We introduce a property for each valid length up to the current
maximum supported, which is 2048-bits. The properties are named, e.g.
sve128, sve256, sve384, sve512, ..., where the number is the number of
bits. See the updates to docs/arm-cpu-features.rst for a description
of the semantics and for example uses.

Note, as sve-max-vq is still present and we'd like to be able to
support qmp_query_cpu_model_expansion with guests launched with e.g.
-cpu max,sve-max-vq=8 on their command lines, then we do allow
sve-max-vq and sve<N> properties to be provided at the same time, but
this is not recommended, and is why sve-max-vq is not mentioned in the
document. If sve-max-vq is provided then it enables all lengths smaller
than and including the max and disables all lengths larger. It also has
the side-effect that no larger lengths may be enabled and that the max
itself cannot be disabled. Smaller non-power-of-two lengths may,
however, be disabled, e.g. -cpu max,sve-max-vq=4,sve384=off provides a
guest the vector lengths 128, 256, and 512 bits.

This patch has been co-authored with Richard Henderson, who reworked
the target/arm/cpu64.c changes in order to push all the validation and
auto-enabling/disabling steps into the finalizer, resulting in a nice
LOC reduction.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Reviewed-by: Beata Michalska <beata.michalska@linaro.org>
Message-id: 20191031142734.8590-5-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/qemu/bitops.h     |   1 +
 target/arm/cpu.h          |  19 ++++
 target/arm/cpu.c          |  19 ++++
 target/arm/cpu64.c        | 192 ++++++++++++++++++++++++++++++++++++-
 target/arm/helper.c       |  10 +-
 target/arm/monitor.c      |  12 +++
 tests/arm-cpu-features.c  | 194 ++++++++++++++++++++++++++++++++++++++
 docs/arm-cpu-features.rst | 168 +++++++++++++++++++++++++++++++--
 8 files changed, 606 insertions(+), 9 deletions(-)

diff --git a/include/qemu/bitops.h b/include/qemu/bitops.h
46
index XXXXXXX..XXXXXXX 100644
47
--- a/include/qemu/bitops.h
48
+++ b/include/qemu/bitops.h
49
@@ -XXX,XX +XXX,XX @@
50
#define BITS_PER_LONG (sizeof (unsigned long) * BITS_PER_BYTE)
51
52
#define BIT(nr) (1UL << (nr))
53
+#define BIT_ULL(nr) (1ULL << (nr))
54
#define BIT_MASK(nr) (1UL << ((nr) % BITS_PER_LONG))
55
#define BIT_WORD(nr) ((nr) / BITS_PER_LONG)
56
#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
37
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
57
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
38
index XXXXXXX..XXXXXXX 100644
58
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/cpu.h
59
--- a/target/arm/cpu.h
40
+++ b/target/arm/cpu.h
60
+++ b/target/arm/cpu.h
41
@@ -XXX,XX +XXX,XX @@ static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
61
@@ -XXX,XX +XXX,XX @@ typedef struct {
42
* IO indicates that this register does I/O and therefore its accesses
62
43
* need to be surrounded by gen_io_start()/gen_io_end(). In particular,
63
#ifdef TARGET_AARCH64
44
* registers which implement clocks or timers require this.
64
# define ARM_MAX_VQ 16
45
+ * RAISES_EXC is for when the read or write hook might raise an exception;
65
+void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp);
46
+ * the generated code will synchronize the CPU state before calling the hook
66
+uint32_t arm_cpu_vq_map_next_smaller(ARMCPU *cpu, uint32_t vq);
47
+ * so that it is safe for the hook to call raise_exception().
67
#else
68
# define ARM_MAX_VQ 1
69
+static inline void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp) { }
70
+static inline uint32_t arm_cpu_vq_map_next_smaller(ARMCPU *cpu, uint32_t vq)
71
+{ return 0; }
72
#endif
73
74
typedef struct ARMVectorReg {
75
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
76
77
/* Used to set the maximum vector length the cpu will support. */
78
uint32_t sve_max_vq;
79
+
80
+ /*
81
+ * In sve_vq_map each set bit is a supported vector length of
82
+ * (bit-number + 1) * 16 bytes, i.e. each bit number + 1 is the vector
83
+ * length in quadwords.
84
+ *
85
+ * While processing properties during initialization, corresponding
86
+ * sve_vq_init bits are set for bits in sve_vq_map that have been
87
+ * set by properties.
88
+ */
89
+ DECLARE_BITMAP(sve_vq_map, ARM_MAX_VQ);
90
+ DECLARE_BITMAP(sve_vq_init, ARM_MAX_VQ);
91
};
92
93
void arm_cpu_post_init(Object *obj);
94
@@ -XXX,XX +XXX,XX @@ static inline int arm_feature(CPUARMState *env, int feature)
95
return (env->features & (1ULL << feature)) != 0;
96
}
97
98
+void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp);
99
+
100
#if !defined(CONFIG_USER_ONLY)
101
/* Return true if exception levels below EL3 are in secure state,
102
* or would be following an exception return to that level.
103
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
104
index XXXXXXX..XXXXXXX 100644
105
--- a/target/arm/cpu.c
106
+++ b/target/arm/cpu.c
107
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_finalizefn(Object *obj)
108
#endif
109
}
110
111
+void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
112
+{
113
+ Error *local_err = NULL;
114
+
115
+ if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
116
+ arm_cpu_sve_finalize(cpu, &local_err);
117
+ if (local_err != NULL) {
118
+ error_propagate(errp, local_err);
119
+ return;
120
+ }
121
+ }
122
+}
123
+
124
static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
125
{
126
CPUState *cs = CPU(dev);
127
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
128
return;
129
}
130
131
+ arm_cpu_finalize_features(cpu, &local_err);
132
+ if (local_err != NULL) {
133
+ error_propagate(errp, local_err);
134
+ return;
135
+ }
136
+
137
if (arm_feature(env, ARM_FEATURE_AARCH64) &&
138
cpu->has_vfp != cpu->has_neon) {
139
/*
140
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
141
index XXXXXXX..XXXXXXX 100644
142
--- a/target/arm/cpu64.c
143
+++ b/target/arm/cpu64.c
144
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
145
define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
146
}
147
148
+void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
149
+{
150
+ /*
151
+ * If any vector lengths are explicitly enabled with sve<N> properties,
152
+ * then all other lengths are implicitly disabled. If sve-max-vq is
153
+ * specified then it is the same as explicitly enabling all lengths
154
+ * up to and including the specified maximum, which means all larger
155
+ * lengths will be implicitly disabled. If no sve<N> properties
156
+ * are enabled and sve-max-vq is not specified, then all lengths not
157
+ * explicitly disabled will be enabled. Additionally, all power-of-two
158
+ * vector lengths less than the maximum enabled length will be
159
+ * automatically enabled and all vector lengths larger than the largest
160
+ * disabled power-of-two vector length will be automatically disabled.
161
+ * Errors are generated if the user provided input that interferes with
162
+ * any of the above. Finally, if SVE is not disabled, then at least one
163
+ * vector length must be enabled.
164
+ */
165
+ DECLARE_BITMAP(tmp, ARM_MAX_VQ);
166
+ uint32_t vq, max_vq = 0;
167
+
168
+ /*
169
+ * Process explicit sve<N> properties.
170
+ * From the properties, sve_vq_map<N> implies sve_vq_init<N>.
171
+ * Check first for any sve<N> enabled.
172
+ */
173
+ if (!bitmap_empty(cpu->sve_vq_map, ARM_MAX_VQ)) {
174
+ max_vq = find_last_bit(cpu->sve_vq_map, ARM_MAX_VQ) + 1;
175
+
176
+ if (cpu->sve_max_vq && max_vq > cpu->sve_max_vq) {
177
+ error_setg(errp, "cannot enable sve%d", max_vq * 128);
178
+ error_append_hint(errp, "sve%d is larger than the maximum vector "
179
+ "length, sve-max-vq=%d (%d bits)\n",
180
+ max_vq * 128, cpu->sve_max_vq,
181
+ cpu->sve_max_vq * 128);
182
+ return;
183
+ }
184
+
185
+ /* Propagate enabled bits down through required powers-of-two. */
186
+ for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
187
+ if (!test_bit(vq - 1, cpu->sve_vq_init)) {
188
+ set_bit(vq - 1, cpu->sve_vq_map);
189
+ }
190
+ }
191
+ } else if (cpu->sve_max_vq == 0) {
192
+ /*
193
+ * No explicit bits enabled, and no implicit bits from sve-max-vq.
194
+ */
195
+ if (!cpu_isar_feature(aa64_sve, cpu)) {
196
+ /* SVE is disabled and so are all vector lengths. Good. */
197
+ return;
198
+ }
199
+
200
+ /* Disabling a power-of-two disables all larger lengths. */
201
+ if (test_bit(0, cpu->sve_vq_init)) {
202
+ error_setg(errp, "cannot disable sve128");
203
+ error_append_hint(errp, "Disabling sve128 results in all vector "
204
+ "lengths being disabled.\n");
205
+ error_append_hint(errp, "With SVE enabled, at least one vector "
206
+ "length must be enabled.\n");
207
+ return;
208
+ }
209
+ for (vq = 2; vq <= ARM_MAX_VQ; vq <<= 1) {
210
+ if (test_bit(vq - 1, cpu->sve_vq_init)) {
211
+ break;
212
+ }
213
+ }
214
+ max_vq = vq <= ARM_MAX_VQ ? vq - 1 : ARM_MAX_VQ;
215
+
216
+ bitmap_complement(cpu->sve_vq_map, cpu->sve_vq_init, max_vq);
217
+ max_vq = find_last_bit(cpu->sve_vq_map, max_vq) + 1;
218
+ }
219
+
220
+ /*
221
+ * Process the sve-max-vq property.
222
+ * Note that we know from the above that no bit above
223
+ * sve-max-vq is currently set.
224
+ */
225
+ if (cpu->sve_max_vq != 0) {
226
+ max_vq = cpu->sve_max_vq;
227
+
228
+ if (!test_bit(max_vq - 1, cpu->sve_vq_map) &&
229
+ test_bit(max_vq - 1, cpu->sve_vq_init)) {
230
+ error_setg(errp, "cannot disable sve%d", max_vq * 128);
231
+ error_append_hint(errp, "The maximum vector length must be "
232
+ "enabled, sve-max-vq=%d (%d bits)\n",
233
+ max_vq, max_vq * 128);
234
+ return;
235
+ }
236
+
237
+ /* Set all bits not explicitly set within sve-max-vq. */
238
+ bitmap_complement(tmp, cpu->sve_vq_init, max_vq);
239
+ bitmap_or(cpu->sve_vq_map, cpu->sve_vq_map, tmp, max_vq);
240
+ }
241
+
242
+ /*
243
+ * We should know what max-vq is now. Also, as we're done
244
+ * manipulating sve-vq-map, we ensure any bits above max-vq
245
+ * are clear, just in case anybody looks.
246
+ */
247
+ assert(max_vq != 0);
248
+ bitmap_clear(cpu->sve_vq_map, max_vq, ARM_MAX_VQ - max_vq);
249
+
250
+ /* Ensure all required powers-of-two are enabled. */
251
+ for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
252
+ if (!test_bit(vq - 1, cpu->sve_vq_map)) {
253
+ error_setg(errp, "cannot disable sve%d", vq * 128);
254
+ error_append_hint(errp, "sve%d is required as it "
255
+ "is a power-of-two length smaller than "
256
+ "the maximum, sve%d\n",
257
+ vq * 128, max_vq * 128);
258
+ return;
259
+ }
260
+ }
261
+
262
+ /*
263
+ * Now that we validated all our vector lengths, the only question
264
+ * left to answer is if we even want SVE at all.
265
+ */
266
+ if (!cpu_isar_feature(aa64_sve, cpu)) {
267
+ error_setg(errp, "cannot enable sve%d", max_vq * 128);
268
+ error_append_hint(errp, "SVE must be enabled to enable vector "
269
+ "lengths.\n");
270
+ error_append_hint(errp, "Add sve=on to the CPU property list.\n");
271
+ return;
272
+ }
273
+
274
+ /* From now on sve_max_vq is the actual maximum supported length. */
275
+ cpu->sve_max_vq = max_vq;
276
+}
277
+
278
+uint32_t arm_cpu_vq_map_next_smaller(ARMCPU *cpu, uint32_t vq)
279
+{
280
+ uint32_t bitnum;
281
+
282
+ /*
283
+ * We allow vq == ARM_MAX_VQ + 1 to be input because the caller may want
284
+ * to find the maximum vq enabled, which may be ARM_MAX_VQ, but this
285
+ * function always returns the next enabled vq smaller than the input.
286
+ */
287
+ assert(vq && vq <= ARM_MAX_VQ + 1);
288
+
289
+ bitnum = find_last_bit(cpu->sve_vq_map, vq - 1);
290
+ return bitnum == vq - 1 ? 0 : bitnum + 1;
291
+}
292
+
293
static void cpu_max_get_sve_max_vq(Object *obj, Visitor *v, const char *name,
294
void *opaque, Error **errp)
295
{
296
@@ -XXX,XX +XXX,XX @@ static void cpu_max_set_sve_max_vq(Object *obj, Visitor *v, const char *name,
297
error_propagate(errp, err);
298
}
299
300
+static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name,
301
+ void *opaque, Error **errp)
302
+{
303
+ ARMCPU *cpu = ARM_CPU(obj);
304
+ uint32_t vq = atoi(&name[3]) / 128;
305
+ bool value;
306
+
307
+ /* All vector lengths are disabled when SVE is off. */
308
+ if (!cpu_isar_feature(aa64_sve, cpu)) {
309
+ value = false;
310
+ } else {
311
+ value = test_bit(vq - 1, cpu->sve_vq_map);
312
+ }
313
+ visit_type_bool(v, name, &value, errp);
314
+}
315
+
316
+static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
317
+ void *opaque, Error **errp)
318
+{
319
+ ARMCPU *cpu = ARM_CPU(obj);
320
+ uint32_t vq = atoi(&name[3]) / 128;
321
+ Error *err = NULL;
322
+ bool value;
323
+
324
+ visit_type_bool(v, name, &value, &err);
325
+ if (err) {
326
+ error_propagate(errp, err);
327
+ return;
328
+ }
329
+
330
+ if (value) {
331
+ set_bit(vq - 1, cpu->sve_vq_map);
332
+ } else {
333
+ clear_bit(vq - 1, cpu->sve_vq_map);
334
+ }
335
+ set_bit(vq - 1, cpu->sve_vq_init);
336
+}
337
+
338
static void cpu_arm_get_sve(Object *obj, Visitor *v, const char *name,
339
void *opaque, Error **errp)
340
{
341
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, Visitor *v, const char *name,
342
static void aarch64_max_initfn(Object *obj)
343
{
344
ARMCPU *cpu = ARM_CPU(obj);
345
+ uint32_t vq;
346
347
if (kvm_enabled()) {
348
kvm_arm_set_cpu_features_from_host(cpu);
349
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
350
cpu->dcz_blocksize = 7; /* 512 bytes */
351
#endif
352
353
- cpu->sve_max_vq = ARM_MAX_VQ;
354
object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq,
355
cpu_max_set_sve_max_vq, NULL, NULL, &error_fatal);
356
object_property_add(obj, "sve", "bool", cpu_arm_get_sve,
357
cpu_arm_set_sve, NULL, NULL, &error_fatal);
358
+
359
+ for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
360
+ char name[8];
361
+ sprintf(name, "sve%d", vq * 128);
362
+ object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
363
+ cpu_arm_set_sve_vq, NULL, NULL, &error_fatal);
364
+ }
365
}
366
}
367
368
diff --git a/target/arm/helper.c b/target/arm/helper.c
369
index XXXXXXX..XXXXXXX 100644
370
--- a/target/arm/helper.c
371
+++ b/target/arm/helper.c
372
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
373
return 0;
374
}
375
376
+static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
377
+{
378
+ uint32_t start_vq = (start_len & 0xf) + 1;
379
+
380
+ return arm_cpu_vq_map_next_smaller(cpu, start_vq + 1) - 1;
381
+}
382
+
383
/*
384
* Given that SVE is enabled, return the vector length for EL.
48
*/
385
*/
49
#define ARM_CP_SPECIAL 0x0001
386
@@ -XXX,XX +XXX,XX @@ uint32_t sve_zcr_len_for_el(CPUARMState *env, int el)
50
#define ARM_CP_CONST 0x0002
387
if (arm_feature(env, ARM_FEATURE_EL3)) {
51
@@ -XXX,XX +XXX,XX @@ static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
388
zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
52
#define ARM_CP_FPU 0x1000
389
}
53
#define ARM_CP_SVE 0x2000
390
- return zcr_len;
54
#define ARM_CP_NO_GDB 0x4000
391
+
55
+#define ARM_CP_RAISES_EXC 0x8000
392
+ return sve_zcr_get_valid_len(cpu, zcr_len);
56
/* Used only as a terminator for ARMCPRegInfo lists */
393
}
57
#define ARM_CP_SENTINEL 0xffff
394
58
/* Mask of only the flag bits in a type field */
395
static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
59
-#define ARM_CP_FLAG_MASK 0x70ff
396
diff --git a/target/arm/monitor.c b/target/arm/monitor.c
60
+#define ARM_CP_FLAG_MASK 0xf0ff
61
62
/* Valid values for ARMCPRegInfo state field, indicating which of
63
* the AArch32 and AArch64 execution states this register is visible in.
64
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
65
index XXXXXXX..XXXXXXX 100644
397
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/translate-a64.c
398
--- a/target/arm/monitor.c
67
+++ b/target/arm/translate-a64.c
399
+++ b/target/arm/monitor.c
68
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
400
@@ -XXX,XX +XXX,XX @@ GICCapabilityList *qmp_query_gic_capabilities(Error **errp)
69
tcg_temp_free_ptr(tmpptr);
401
return head;
70
tcg_temp_free_i32(tcg_syn);
402
}
71
tcg_temp_free_i32(tcg_isread);
403
72
+ } else if (ri->type & ARM_CP_RAISES_EXC) {
404
+QEMU_BUILD_BUG_ON(ARM_MAX_VQ > 16);
73
+ /*
405
+
74
+ * The readfn or writefn might raise an exception;
406
/*
75
+ * synchronize the CPU state in case it does.
407
* These are cpu model features we want to advertise. The order here
76
+ */
408
* matters as this is the order in which qmp_query_cpu_model_expansion
77
+ gen_a64_set_pc_im(s->pc_curr);
409
@@ -XXX,XX +XXX,XX @@ GICCapabilityList *qmp_query_gic_capabilities(Error **errp)
410
*/
411
static const char *cpu_model_advertised_features[] = {
412
"aarch64", "pmu", "sve",
413
+ "sve128", "sve256", "sve384", "sve512",
414
+ "sve640", "sve768", "sve896", "sve1024", "sve1152", "sve1280",
415
+ "sve1408", "sve1536", "sve1664", "sve1792", "sve1920", "sve2048",
416
NULL
417
};
418
419
@@ -XXX,XX +XXX,XX @@ CpuModelExpansionInfo *qmp_query_cpu_model_expansion(CpuModelExpansionType type,
420
if (!err) {
421
visit_check_struct(visitor, &err);
422
}
423
+ if (!err) {
424
+ arm_cpu_finalize_features(ARM_CPU(obj), &err);
425
+ }
426
visit_end_struct(visitor, NULL);
427
visit_free(visitor);
428
if (err) {
429
@@ -XXX,XX +XXX,XX @@ CpuModelExpansionInfo *qmp_query_cpu_model_expansion(CpuModelExpansionType type,
430
error_propagate(errp, err);
431
return NULL;
432
}
433
+ } else {
434
+ Error *err = NULL;
435
+ arm_cpu_finalize_features(ARM_CPU(obj), &err);
436
+ assert(err == NULL);
78
}
437
}
79
438
80
/* Handle special cases first */
439
expansion_info = g_new0(CpuModelExpansionInfo, 1);
81
diff --git a/target/arm/translate.c b/target/arm/translate.c
440
diff --git a/tests/arm-cpu-features.c b/tests/arm-cpu-features.c
82
index XXXXXXX..XXXXXXX 100644
441
index XXXXXXX..XXXXXXX 100644
83
--- a/target/arm/translate.c
442
--- a/tests/arm-cpu-features.c
84
+++ b/target/arm/translate.c
443
+++ b/tests/arm-cpu-features.c
85
@@ -XXX,XX +XXX,XX @@ static int disas_coproc_insn(DisasContext *s, uint32_t insn)
444
@@ -XXX,XX +XXX,XX @@
86
tcg_temp_free_ptr(tmpptr);
445
* See the COPYING file in the top-level directory.
87
tcg_temp_free_i32(tcg_syn);
446
*/
88
tcg_temp_free_i32(tcg_isread);
447
#include "qemu/osdep.h"
89
+ } else if (ri->type & ARM_CP_RAISES_EXC) {
448
+#include "qemu/bitops.h"
90
+ /*
449
#include "libqtest.h"
91
+ * The readfn or writefn might raise an exception;
450
#include "qapi/qmp/qdict.h"
92
+ * synchronize the CPU state in case it does.
451
#include "qapi/qmp/qjson.h"
93
+ */
452
94
+ gen_set_condexec(s);
453
+/*
95
+ gen_set_pc_im(s, s->pc_curr);
454
+ * We expect the SVE max-vq to be 16. Also it must be <= 64
96
}
455
+ * for our test code, otherwise 'vls' can't just be a uint64_t.
97
456
+ */
98
/* Handle special cases first */
457
+#define SVE_MAX_VQ 16
458
+
459
#define MACHINE "-machine virt,gic-version=max,accel=tcg "
460
#define MACHINE_KVM "-machine virt,gic-version=max,accel=kvm:tcg "
461
#define QUERY_HEAD "{ 'execute': 'query-cpu-model-expansion', " \
462
@@ -XXX,XX +XXX,XX @@ static void assert_bad_props(QTestState *qts, const char *cpu_type)
463
qobject_unref(resp);
464
}
465
466
+static uint64_t resp_get_sve_vls(QDict *resp)
467
+{
468
+ QDict *props;
469
+ const QDictEntry *e;
470
+ uint64_t vls = 0;
471
+ int n = 0;
472
+
473
+ g_assert(resp);
474
+ g_assert(resp_has_props(resp));
475
+
476
+ props = resp_get_props(resp);
477
+
478
+ for (e = qdict_first(props); e; e = qdict_next(props, e)) {
479
+ if (strlen(e->key) > 3 && !strncmp(e->key, "sve", 3) &&
480
+ g_ascii_isdigit(e->key[3])) {
481
+ char *endptr;
482
+ int bits;
483
+
484
+ bits = g_ascii_strtoll(&e->key[3], &endptr, 10);
485
+ if (!bits || *endptr != '\0') {
486
+ continue;
487
+ }
488
+
489
+ if (qdict_get_bool(props, e->key)) {
490
+ vls |= BIT_ULL((bits / 128) - 1);
491
+ }
492
+ ++n;
493
+ }
494
+ }
495
+
496
+ g_assert(n == SVE_MAX_VQ);
497
+
498
+ return vls;
499
+}
500
+
501
+#define assert_sve_vls(qts, cpu_type, expected_vls, fmt, ...) \
502
+({ \
503
+ QDict *_resp = do_query(qts, cpu_type, fmt, ##__VA_ARGS__); \
504
+ g_assert(_resp); \
505
+ g_assert(resp_has_props(_resp)); \
506
+ g_assert(resp_get_sve_vls(_resp) == expected_vls); \
507
+ qobject_unref(_resp); \
508
+})
509
+
510
+static void sve_tests_default(QTestState *qts, const char *cpu_type)
511
+{
512
+ /*
513
+ * With no sve-max-vq or sve<N> properties on the command line
514
+ * the default is to have all vector lengths enabled. This also
515
+ * tests that 'sve' is 'on' by default.
516
+ */
517
+ assert_sve_vls(qts, cpu_type, BIT_ULL(SVE_MAX_VQ) - 1, NULL);
518
+
519
+ /* With SVE off, all vector lengths should also be off. */
520
+ assert_sve_vls(qts, cpu_type, 0, "{ 'sve': false }");
521
+
522
+ /* With SVE on, we must have at least one vector length enabled. */
523
+ assert_error(qts, cpu_type, "cannot disable sve128", "{ 'sve128': false }");
524
+
525
+ /* Basic enable/disable tests. */
526
+ assert_sve_vls(qts, cpu_type, 0x7, "{ 'sve384': true }");
527
+ assert_sve_vls(qts, cpu_type, ((BIT_ULL(SVE_MAX_VQ) - 1) & ~BIT_ULL(2)),
528
+ "{ 'sve384': false }");
529
+
530
+ /*
531
+ * ---------------------------------------------------------------------
532
+ * power-of-two(vq) all-power- can can
533
+ * of-two(< vq) enable disable
534
+ * ---------------------------------------------------------------------
535
+ * vq < max_vq no MUST* yes yes
536
+ * vq < max_vq yes MUST* yes no
537
+ * ---------------------------------------------------------------------
538
+ * vq == max_vq n/a MUST* yes** yes**
539
+ * ---------------------------------------------------------------------
540
+ * vq > max_vq n/a no no yes
541
+ * vq > max_vq n/a yes yes yes
542
+ * ---------------------------------------------------------------------
543
+ *
544
+ * [*] "MUST" means this requirement must already be satisfied,
545
+ * otherwise 'max_vq' couldn't itself be enabled.
546
+ *
547
+ * [**] Not testable with the QMP interface, only with the command line.
548
+ */
549
+
550
+ /* max_vq := 8 */
551
+ assert_sve_vls(qts, cpu_type, 0x8b, "{ 'sve1024': true }");
552
+
553
+ /* max_vq := 8, vq < max_vq, !power-of-two(vq) */
554
+ assert_sve_vls(qts, cpu_type, 0x8f,
555
+ "{ 'sve1024': true, 'sve384': true }");
556
+ assert_sve_vls(qts, cpu_type, 0x8b,
557
+ "{ 'sve1024': true, 'sve384': false }");
558
+
559
+ /* max_vq := 8, vq < max_vq, power-of-two(vq) */
560
+ assert_sve_vls(qts, cpu_type, 0x8b,
561
+ "{ 'sve1024': true, 'sve256': true }");
562
+ assert_error(qts, cpu_type, "cannot disable sve256",
563
+ "{ 'sve1024': true, 'sve256': false }");
564
+
565
+ /* max_vq := 3, vq > max_vq, !all-power-of-two(< vq) */
566
+ assert_error(qts, cpu_type, "cannot disable sve512",
567
+ "{ 'sve384': true, 'sve512': false, 'sve640': true }");
568
+
569
+ /*
570
+ * We can disable power-of-two vector lengths when all larger lengths
571
+ * are also disabled. We only need to disable the power-of-two length,
572
+ * as all non-enabled larger lengths will then be auto-disabled.
573
+ */
574
+ assert_sve_vls(qts, cpu_type, 0x7, "{ 'sve512': false }");
575
+
576
+ /* max_vq := 3, vq > max_vq, all-power-of-two(< vq) */
577
+ assert_sve_vls(qts, cpu_type, 0x1f,
578
+ "{ 'sve384': true, 'sve512': true, 'sve640': true }");
579
+ assert_sve_vls(qts, cpu_type, 0xf,
580
+ "{ 'sve384': true, 'sve512': true, 'sve640': false }");
581
+}
582
+
583
+static void sve_tests_sve_max_vq_8(const void *data)
584
+{
585
+ QTestState *qts;
586
+
587
+ qts = qtest_init(MACHINE "-cpu max,sve-max-vq=8");
588
+
589
+ assert_sve_vls(qts, "max", BIT_ULL(8) - 1, NULL);
590
+
591
+ /*
592
+ * Disabling the max-vq set by sve-max-vq is not allowed, but
593
+ * of course enabling it is OK.
594
+ */
595
+ assert_error(qts, "max", "cannot disable sve1024", "{ 'sve1024': false }");
596
+ assert_sve_vls(qts, "max", 0xff, "{ 'sve1024': true }");
597
+
598
+ /*
599
+ * Enabling anything larger than max-vq set by sve-max-vq is not
600
+ * allowed, but of course disabling everything larger is OK.
601
+ */
602
+ assert_error(qts, "max", "cannot enable sve1152", "{ 'sve1152': true }");
603
+ assert_sve_vls(qts, "max", 0xff, "{ 'sve1152': false }");
604
+
605
+ /*
606
+ * We can enable/disable non power-of-two lengths smaller than the
607
+ * max-vq set by sve-max-vq, but, while we can enable power-of-two
608
+ * lengths, we can't disable them.
609
+ */
610
+ assert_sve_vls(qts, "max", 0xff, "{ 'sve384': true }");
611
+ assert_sve_vls(qts, "max", 0xfb, "{ 'sve384': false }");
612
+ assert_sve_vls(qts, "max", 0xff, "{ 'sve256': true }");
613
+ assert_error(qts, "max", "cannot disable sve256", "{ 'sve256': false }");
614
+
615
+ qtest_quit(qts);
616
+}
617
+
618
+static void sve_tests_sve_off(const void *data)
619
+{
620
+ QTestState *qts;
621
+
622
+ qts = qtest_init(MACHINE "-cpu max,sve=off");
623
+
624
+ /* SVE is off, so the map should be empty. */
625
+ assert_sve_vls(qts, "max", 0, NULL);
626
+
627
+ /* The map stays empty even if we turn lengths off. */
628
+ assert_sve_vls(qts, "max", 0, "{ 'sve128': false }");
629
+
630
+ /* It's an error to enable lengths when SVE is off. */
631
+ assert_error(qts, "max", "cannot enable sve128", "{ 'sve128': true }");
632
+
633
+ /* With SVE re-enabled we should get all vector lengths enabled. */
634
+ assert_sve_vls(qts, "max", BIT_ULL(SVE_MAX_VQ) - 1, "{ 'sve': true }");
635
+
636
+ /* Or enable SVE with just specific vector lengths. */
637
+ assert_sve_vls(qts, "max", 0x3,
638
+ "{ 'sve': true, 'sve128': true, 'sve256': true }");
639
+
640
+ qtest_quit(qts);
641
+}
642
+
643
static void test_query_cpu_model_expansion(const void *data)
644
{
645
QTestState *qts;
646
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion(const void *data)
647
if (g_str_equal(qtest_get_arch(), "aarch64")) {
648
assert_has_feature(qts, "max", "aarch64");
649
assert_has_feature(qts, "max", "sve");
650
+ assert_has_feature(qts, "max", "sve128");
651
assert_has_feature(qts, "cortex-a57", "pmu");
652
assert_has_feature(qts, "cortex-a57", "aarch64");
653
654
+ sve_tests_default(qts, "max");
655
+
656
/* Test that features that depend on KVM generate errors without. */
657
assert_error(qts, "max",
658
"'aarch64' feature cannot be disabled "
659
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
660
qtest_add_data_func("/arm/kvm/query-cpu-model-expansion",
661
NULL, test_query_cpu_model_expansion_kvm);
662
663
+ if (g_str_equal(qtest_get_arch(), "aarch64")) {
664
+ qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-max-vq-8",
665
+ NULL, sve_tests_sve_max_vq_8);
666
+ qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-off",
667
+ NULL, sve_tests_sve_off);
668
+ }
669
+
670
return g_test_run();
671
}
672
diff --git a/docs/arm-cpu-features.rst b/docs/arm-cpu-features.rst
673
index XXXXXXX..XXXXXXX 100644
674
--- a/docs/arm-cpu-features.rst
675
+++ b/docs/arm-cpu-features.rst
676
@@ -XXX,XX +XXX,XX @@ block in the script for usage) is used to issue the QMP commands.
677
(QEMU) query-cpu-model-expansion type=full model={"name":"max"}
678
{ "return": {
679
"model": { "name": "max", "props": {
680
- "pmu": true, "aarch64": true
681
+ "sve1664": true, "pmu": true, "sve1792": true, "sve1920": true,
682
+ "sve128": true, "aarch64": true, "sve1024": true, "sve": true,
683
+ "sve640": true, "sve768": true, "sve1408": true, "sve256": true,
684
+ "sve1152": true, "sve512": true, "sve384": true, "sve1536": true,
685
+ "sve896": true, "sve1280": true, "sve2048": true
686
}}}}
687
688
-We see that the `max` CPU type has the `pmu` and `aarch64` CPU features.
689
-We also see that the CPU features are enabled, as they are all `true`.
690
+We see that the `max` CPU type has the `pmu`, `aarch64`, `sve`, and many
691
+`sve<N>` CPU features. We also see that all the CPU features are
692
+enabled, as they are all `true`. (The `sve<N>` CPU features are all
693
+optional SVE vector lengths (see "SVE CPU Properties"). While with TCG
694
+all SVE vector lengths can be supported, when KVM is in use it's more
695
+likely that only a few lengths will be supported, if SVE is supported at
696
+all.)
697
698
(2) Let's try to disable the PMU::
699
700
(QEMU) query-cpu-model-expansion type=full model={"name":"max","props":{"pmu":false}}
701
{ "return": {
702
"model": { "name": "max", "props": {
703
- "pmu": false, "aarch64": true
704
+ "sve1664": true, "pmu": false, "sve1792": true, "sve1920": true,
705
+ "sve128": true, "aarch64": true, "sve1024": true, "sve": true,
706
+ "sve640": true, "sve768": true, "sve1408": true, "sve256": true,
707
+ "sve1152": true, "sve512": true, "sve384": true, "sve1536": true,
708
+ "sve896": true, "sve1280": true, "sve2048": true
709
}}}}
710
711
We see it worked, as `pmu` is now `false`.
712
@@ -XXX,XX +XXX,XX @@ We see it worked, as `pmu` is now `false`.
713
It looks like this feature is limited to a configuration we do not
714
currently have.
715
716
-(4) Let's try probing CPU features for the Cortex-A15 CPU type::
717
+(4) Let's disable `sve` and see what happens to all the optional SVE
718
+ vector lengths::
719
+
720
+ (QEMU) query-cpu-model-expansion type=full model={"name":"max","props":{"sve":false}}
721
+ { "return": {
722
+ "model": { "name": "max", "props": {
723
+ "sve1664": false, "pmu": true, "sve1792": false, "sve1920": false,
724
+ "sve128": false, "aarch64": true, "sve1024": false, "sve": false,
725
+ "sve640": false, "sve768": false, "sve1408": false, "sve256": false,
726
+ "sve1152": false, "sve512": false, "sve384": false, "sve1536": false,
727
+ "sve896": false, "sve1280": false, "sve2048": false
728
+ }}}}
729
+
730
+As expected they are now all `false`.
731
+
732
+(5) Let's try probing CPU features for the Cortex-A15 CPU type::
733
734
(QEMU) query-cpu-model-expansion type=full model={"name":"cortex-a15"}
735
{"return": {"model": {"name": "cortex-a15", "props": {"pmu": true}}}}
736
@@ -XXX,XX +XXX,XX @@ After determining which CPU features are available and supported for a
737
given CPU type, then they may be selectively enabled or disabled on the
738
QEMU command line with that CPU type::
739
740
- $ qemu-system-aarch64 -M virt -cpu max,pmu=off
741
+ $ qemu-system-aarch64 -M virt -cpu max,pmu=off,sve=on,sve128=on,sve256=on
742
743
-The example above disables the PMU for the `max` CPU type.
744
+The example above disables the PMU and enables the first two SVE vector
745
+lengths for the `max` CPU type. Note, the `sve=on` isn't actually
746
+necessary, because, as we observed above with our probe of the `max` CPU
747
+type, `sve` is already on by default. Also, based on our probe of
748
+defaults, it would seem we need to disable many SVE vector lengths, rather
749
+than only enabling the two we want. This isn't the case, because, as
750
+disabling many SVE vector lengths would be quite verbose, the `sve<N>` CPU
751
+properties have special semantics (see "SVE CPU Property Parsing
752
+Semantics").
753
+
754
+SVE CPU Properties
755
+==================
756
+
757
+There are two types of SVE CPU properties: `sve` and `sve<N>`. The first
758
+is used to enable or disable the entire SVE feature, just as the `pmu`
759
+CPU property completely enables or disables the PMU. The second type
760
+is used to enable or disable specific vector lengths, where `N` is the
761
+number of bits of the length. The `sve<N>` CPU properties have special
762
+dependencies and constraints, see "SVE CPU Property Dependencies and
763
+Constraints" below. Additionally, as we want all supported vector lengths
764
+to be enabled by default, then, in order to avoid overly verbose command
765
+lines (command lines full of `sve<N>=off`, for all `N` not wanted), we
766
+provide the parsing semantics listed in "SVE CPU Property Parsing
767
+Semantics".
768
+
769
+SVE CPU Property Dependencies and Constraints
770
+---------------------------------------------
771
+
772
+ 1) At least one vector length must be enabled when `sve` is enabled.
773
+
774
+ 2) If a vector length `N` is enabled, then all power-of-two vector
775
+ lengths smaller than `N` must also be enabled. E.g. if `sve512`
776
+ is enabled, then the 128-bit and 256-bit vector lengths must also
777
+ be enabled.
778
+
779
+SVE CPU Property Parsing Semantics
780
+----------------------------------
781
+
782
+ 1) If SVE is disabled (`sve=off`), then which SVE vector lengths
783
+ are enabled or disabled is irrelevant to the guest, as the entire
784
+ SVE feature is disabled and that disables all vector lengths for
785
+ the guest. However QEMU will still track any `sve<N>` CPU
786
+ properties provided by the user. If later an `sve=on` is provided,
787
+ then the guest will get only the enabled lengths. If no `sve=on`
788
+ is provided and there are explicitly enabled vector lengths, then
789
+ an error is generated.
790
+
791
+ 2) If SVE is enabled (`sve=on`), but no `sve<N>` CPU properties are
792
+ provided, then all supported vector lengths are enabled, including
793
+ the non-power-of-two lengths.
794
+
795
+ 3) If SVE is enabled, then an error is generated when attempting to
796
+ disable the last enabled vector length (see constraint (1) of "SVE
797
+ CPU Property Dependencies and Constraints").
798
+
799
+ 4) If one or more vector lengths have been explicitly enabled and
800
+ at least one of the dependency lengths of the maximum enabled length
801
+ has been explicitly disabled, then an error is generated (see
802
+ constraint (2) of "SVE CPU Property Dependencies and Constraints").
803
+
804
+ 5) If one or more `sve<N>` CPU properties are set `off`, but no `sve<N>`
805
+ CPU properties are set `on`, then the specified vector lengths are
806
+ disabled but the default for any unspecified lengths remains enabled.
807
+ Disabling a power-of-two vector length also disables all vector
808
+ lengths larger than the power-of-two length (see constraint (2) of
809
+ "SVE CPU Property Dependencies and Constraints").
810
+
811
+ 6) If one or more `sve<N>` CPU properties are set to `on`, then they
812
+ are enabled and all unspecified lengths default to disabled, except
813
+ for the required lengths per constraint (2) of "SVE CPU Property
814
+ Dependencies and Constraints", which will even be auto-enabled if
815
+ they were not explicitly enabled.
816
+
817
+ 7) If SVE was disabled (`sve=off`), allowing all vector lengths to be
818
+ explicitly disabled (i.e. avoiding the error specified in (3) of
819
+ "SVE CPU Property Parsing Semantics"), then if later an `sve=on` is
820
+ provided an error will be generated. To avoid this error, one must
821
+ enable at least one vector length prior to enabling SVE.
822
+
823
+SVE CPU Property Examples
824
+-------------------------
825
+
826
+ 1) Disable SVE::
827
+
828
+ $ qemu-system-aarch64 -M virt -cpu max,sve=off
829
+
830
+ 2) Implicitly enable all vector lengths for the `max` CPU type::
831
+
832
+ $ qemu-system-aarch64 -M virt -cpu max
833
+
834
+ 3) Only enable the 128-bit vector length::
835
+
836
+ $ qemu-system-aarch64 -M virt -cpu max,sve128=on
837
+
838
+ 4) Disable the 512-bit vector length and all larger vector lengths,
839
+ since 512 is a power-of-two. This results in all the smaller,
840
+ uninitialized lengths (128, 256, and 384) defaulting to enabled::
841
+
842
+ $ qemu-system-aarch64 -M virt -cpu max,sve512=off
843
+
844
+ 5) Enable the 128-bit, 256-bit, and 512-bit vector lengths::
845
+
846
+ $ qemu-system-aarch64 -M virt -cpu max,sve128=on,sve256=on,sve512=on
847
+
848
+ 6) The same as (5), but since the 128-bit and 256-bit vector
849
+ lengths are required for the 512-bit vector length to be enabled,
850
+ allow them to be auto-enabled::
851
+
852
+ $ qemu-system-aarch64 -M virt -cpu max,sve512=on
853
+
854
+ 7) Do the same as (6), but by first disabling SVE and then re-enabling it::
855
+
856
+ $ qemu-system-aarch64 -M virt -cpu max,sve=off,sve512=on,sve=on
857
+
858
+ 8) Force errors regarding the last vector length::
859
+
860
+ $ qemu-system-aarch64 -M virt -cpu max,sve128=off
861
+ $ qemu-system-aarch64 -M virt -cpu max,sve=off,sve128=off,sve=on
862
+
863
+SVE CPU Property Recommendations
864
+--------------------------------
865
+
866
+The examples in "SVE CPU Property Examples" exhibit many ways to select
867
+vector lengths which developers may find useful in order to avoid overly
868
+verbose command lines. However, the recommended way to select vector
869
+lengths is to explicitly enable each desired length. Therefore only
870
+examples (1), (3), and (5) exhibit recommended uses of the properties.
871
99
--
872
--
100
2.20.1
873
2.20.1
101
874
102
875
diff view generated by jsdifflib
1
From: "Emilio G. Cota" <cota@braap.org>
1
From: Andrew Jones <drjones@redhat.com>
2
2
3
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
3
These are the SVE equivalents to kvm_arch_get/put_fpsimd. Note, the
4
Signed-off-by: Emilio G. Cota <cota@braap.org>
4
swabbing is different from what it is for fpsimd because the vector format
5
is a little-endian stream of words.
6
7
Signed-off-by: Andrew Jones <drjones@redhat.com>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
9
Reviewed-by: Eric Auger <eric.auger@redhat.com>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
8
Message-id: 20190828165307.18321-8-alex.bennee@linaro.org
11
Message-id: 20191031142734.8590-6-drjones@redhat.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
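
To make the byte-order handling described above concrete, here is a small
stand-alone sketch (not part of the patch). The swap64() helper, the zreg[]
contents and the VQ of 2 are invented for illustration; the patch itself
uses QEMU's bswap64() inside sve_bswap64(), as shown in the diff below.

    #include <stdint.h>
    #include <stdio.h>

    /* Reverse the bytes of one 64-bit word. */
    static uint64_t swap64(uint64_t v)
    {
        v = (v << 32) | (v >> 32);
        v = ((v & 0x0000ffff0000ffffULL) << 16) | ((v >> 16) & 0x0000ffff0000ffffULL);
        v = ((v & 0x00ff00ff00ff00ffULL) << 8)  | ((v >> 8)  & 0x00ff00ff00ff00ffULL);
        return v;
    }

    int main(void)
    {
        /* One Z register at VQ = 2: 256 bits stored as four uint64_t words. */
        uint64_t zreg[4] = { 0x0102030405060708ULL, 0x1112131415161718ULL,
                             0x2122232425262728ULL, 0x3132333435363738ULL };
        uint64_t kvm_image[4];

        /*
         * KVM's in-memory format puts register bits [8*i+7 : 8*i] in byte i,
         * i.e. a little-endian byte stream.  On a big-endian host each
         * 64-bit word therefore has to be swapped; on a little-endian host
         * the copy is already in the right order.
         */
        for (int i = 0; i < 4; i++) {
            kvm_image[i] = swap64(zreg[i]);   /* big-endian host case */
        }

        for (int i = 0; i < 4; i++) {
            printf("word %d: %016llx\n", i, (unsigned long long)kvm_image[i]);
        }
        return 0;
    }
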
10
---
13
---
11
accel/tcg/atomic_template.h | 2 +-
14
target/arm/kvm64.c | 185 ++++++++++++++++++++++++++++++++++++++-------
12
1 file changed, 1 insertion(+), 1 deletion(-)
15
1 file changed, 156 insertions(+), 29 deletions(-)
13
16
14
diff --git a/accel/tcg/atomic_template.h b/accel/tcg/atomic_template.h
17
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
15
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
16
--- a/accel/tcg/atomic_template.h
19
--- a/target/arm/kvm64.c
17
+++ b/accel/tcg/atomic_template.h
20
+++ b/target/arm/kvm64.c
18
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
21
@@ -XXX,XX +XXX,XX @@ int kvm_arch_destroy_vcpu(CPUState *cs)
19
22
bool kvm_arm_reg_syncs_via_cpreg_list(uint64_t regidx)
20
#define GEN_ATOMIC_HELPER(X) \
23
{
21
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
24
/* Return true if the regidx is a register we should synchronize
22
- ABI_TYPE val EXTRA_ARGS) \
25
- * via the cpreg_tuples array (ie is not a core reg we sync by
23
+ ABI_TYPE val EXTRA_ARGS) \
26
- * hand in kvm_arch_get/put_registers())
24
{ \
27
+ * via the cpreg_tuples array (ie is not a core or sve reg that
25
ATOMIC_MMU_DECLS; \
28
+ * we sync by hand in kvm_arch_get/put_registers())
26
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
29
*/
30
switch (regidx & KVM_REG_ARM_COPROC_MASK) {
31
case KVM_REG_ARM_CORE:
32
+ case KVM_REG_ARM64_SVE:
33
return false;
34
default:
35
return true;
36
@@ -XXX,XX +XXX,XX @@ int kvm_arm_cpreg_level(uint64_t regidx)
37
38
static int kvm_arch_put_fpsimd(CPUState *cs)
39
{
40
- ARMCPU *cpu = ARM_CPU(cs);
41
- CPUARMState *env = &cpu->env;
42
+ CPUARMState *env = &ARM_CPU(cs)->env;
43
struct kvm_one_reg reg;
44
- uint32_t fpr;
45
int i, ret;
46
47
for (i = 0; i < 32; i++) {
48
@@ -XXX,XX +XXX,XX @@ static int kvm_arch_put_fpsimd(CPUState *cs)
49
}
50
}
51
52
- reg.addr = (uintptr_t)(&fpr);
53
- fpr = vfp_get_fpsr(env);
54
- reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpsr);
55
- ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
56
- if (ret) {
57
- return ret;
58
+ return 0;
59
+}
60
+
61
+/*
62
+ * SVE registers are encoded in KVM's memory in an endianness-invariant format.
63
+ * The byte at offset i from the start of the in-memory representation contains
64
+ * the bits [(7 + 8 * i) : (8 * i)] of the register value. As this means the
65
+ * lowest offsets are stored in the lowest memory addresses, then that nearly
66
+ * matches QEMU's representation, which is to use an array of host-endian
67
+ * uint64_t's, where the lower offsets are at the lower indices. To complete
68
+ * the translation we just need to byte swap the uint64_t's on big-endian hosts.
69
+ */
70
+static uint64_t *sve_bswap64(uint64_t *dst, uint64_t *src, int nr)
71
+{
72
+#ifdef HOST_WORDS_BIGENDIAN
73
+ int i;
74
+
75
+ for (i = 0; i < nr; ++i) {
76
+ dst[i] = bswap64(src[i]);
77
}
78
79
- reg.addr = (uintptr_t)(&fpr);
80
- fpr = vfp_get_fpcr(env);
81
- reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpcr);
82
+ return dst;
83
+#else
84
+ return src;
85
+#endif
86
+}
87
+
88
+/*
89
+ * KVM SVE registers come in slices where ZREGs have a slice size of 2048 bits
90
+ * and PREGS and the FFR have a slice size of 256 bits. However we simply hard
91
+ * code the slice index to zero for now as it's unlikely we'll need more than
92
+ * one slice for quite some time.
93
+ */
94
+static int kvm_arch_put_sve(CPUState *cs)
95
+{
96
+ ARMCPU *cpu = ARM_CPU(cs);
97
+ CPUARMState *env = &cpu->env;
98
+ uint64_t tmp[ARM_MAX_VQ * 2];
99
+ uint64_t *r;
100
+ struct kvm_one_reg reg;
101
+ int n, ret;
102
+
103
+ for (n = 0; n < KVM_ARM64_SVE_NUM_ZREGS; ++n) {
104
+ r = sve_bswap64(tmp, &env->vfp.zregs[n].d[0], cpu->sve_max_vq * 2);
105
+ reg.addr = (uintptr_t)r;
106
+ reg.id = KVM_REG_ARM64_SVE_ZREG(n, 0);
107
+ ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
108
+ if (ret) {
109
+ return ret;
110
+ }
111
+ }
112
+
113
+ for (n = 0; n < KVM_ARM64_SVE_NUM_PREGS; ++n) {
114
+ r = sve_bswap64(tmp, r = &env->vfp.pregs[n].p[0],
115
+ DIV_ROUND_UP(cpu->sve_max_vq * 2, 8));
116
+ reg.addr = (uintptr_t)r;
117
+ reg.id = KVM_REG_ARM64_SVE_PREG(n, 0);
118
+ ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
119
+ if (ret) {
120
+ return ret;
121
+ }
122
+ }
123
+
124
+ r = sve_bswap64(tmp, &env->vfp.pregs[FFR_PRED_NUM].p[0],
125
+ DIV_ROUND_UP(cpu->sve_max_vq * 2, 8));
126
+ reg.addr = (uintptr_t)r;
127
+ reg.id = KVM_REG_ARM64_SVE_FFR(0);
128
ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
129
if (ret) {
130
return ret;
131
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
132
{
133
struct kvm_one_reg reg;
134
uint64_t val;
135
+ uint32_t fpr;
136
int i, ret;
137
unsigned int el;
138
139
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
140
}
141
}
142
143
- ret = kvm_arch_put_fpsimd(cs);
144
+ if (cpu_isar_feature(aa64_sve, cpu)) {
145
+ ret = kvm_arch_put_sve(cs);
146
+ } else {
147
+ ret = kvm_arch_put_fpsimd(cs);
148
+ }
149
+ if (ret) {
150
+ return ret;
151
+ }
152
+
153
+ reg.addr = (uintptr_t)(&fpr);
154
+ fpr = vfp_get_fpsr(env);
155
+ reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpsr);
156
+ ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
157
+ if (ret) {
158
+ return ret;
159
+ }
160
+
161
+ reg.addr = (uintptr_t)(&fpr);
162
+ fpr = vfp_get_fpcr(env);
163
+ reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpcr);
164
+ ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
165
if (ret) {
166
return ret;
167
}
168
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
169
170
static int kvm_arch_get_fpsimd(CPUState *cs)
171
{
172
- ARMCPU *cpu = ARM_CPU(cs);
173
- CPUARMState *env = &cpu->env;
174
+ CPUARMState *env = &ARM_CPU(cs)->env;
175
struct kvm_one_reg reg;
176
- uint32_t fpr;
177
int i, ret;
178
179
for (i = 0; i < 32; i++) {
180
@@ -XXX,XX +XXX,XX @@ static int kvm_arch_get_fpsimd(CPUState *cs)
181
}
182
}
183
184
- reg.addr = (uintptr_t)(&fpr);
185
- reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpsr);
186
- ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
187
- if (ret) {
188
- return ret;
189
- }
190
- vfp_set_fpsr(env, fpr);
191
+ return 0;
192
+}
193
194
- reg.addr = (uintptr_t)(&fpr);
195
- reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpcr);
196
+/*
197
+ * KVM SVE registers come in slices where ZREGs have a slice size of 2048 bits
198
+ * and PREGS and the FFR have a slice size of 256 bits. However we simply hard
199
+ * code the slice index to zero for now as it's unlikely we'll need more than
200
+ * one slice for quite some time.
201
+ */
202
+static int kvm_arch_get_sve(CPUState *cs)
203
+{
204
+ ARMCPU *cpu = ARM_CPU(cs);
205
+ CPUARMState *env = &cpu->env;
206
+ struct kvm_one_reg reg;
207
+ uint64_t *r;
208
+ int n, ret;
209
+
210
+ for (n = 0; n < KVM_ARM64_SVE_NUM_ZREGS; ++n) {
211
+ r = &env->vfp.zregs[n].d[0];
212
+ reg.addr = (uintptr_t)r;
213
+ reg.id = KVM_REG_ARM64_SVE_ZREG(n, 0);
214
+ ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
215
+ if (ret) {
216
+ return ret;
217
+ }
218
+ sve_bswap64(r, r, cpu->sve_max_vq * 2);
219
+ }
220
+
221
+ for (n = 0; n < KVM_ARM64_SVE_NUM_PREGS; ++n) {
222
+ r = &env->vfp.pregs[n].p[0];
223
+ reg.addr = (uintptr_t)r;
224
+ reg.id = KVM_REG_ARM64_SVE_PREG(n, 0);
225
+ ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
226
+ if (ret) {
227
+ return ret;
228
+ }
229
+ sve_bswap64(r, r, DIV_ROUND_UP(cpu->sve_max_vq * 2, 8));
230
+ }
231
+
232
+ r = &env->vfp.pregs[FFR_PRED_NUM].p[0];
233
+ reg.addr = (uintptr_t)r;
234
+ reg.id = KVM_REG_ARM64_SVE_FFR(0);
235
ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
236
if (ret) {
237
return ret;
238
}
239
- vfp_set_fpcr(env, fpr);
240
+ sve_bswap64(r, r, DIV_ROUND_UP(cpu->sve_max_vq * 2, 8));
241
242
return 0;
243
}
244
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
245
struct kvm_one_reg reg;
246
uint64_t val;
247
unsigned int el;
248
+ uint32_t fpr;
249
int i, ret;
250
251
ARMCPU *cpu = ARM_CPU(cs);
252
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
253
env->spsr = env->banked_spsr[i];
254
}
255
256
- ret = kvm_arch_get_fpsimd(cs);
257
+ if (cpu_isar_feature(aa64_sve, cpu)) {
258
+ ret = kvm_arch_get_sve(cs);
259
+ } else {
260
+ ret = kvm_arch_get_fpsimd(cs);
261
+ }
262
if (ret) {
263
return ret;
264
}
265
266
+ reg.addr = (uintptr_t)(&fpr);
267
+ reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpsr);
268
+ ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
269
+ if (ret) {
270
+ return ret;
271
+ }
272
+ vfp_set_fpsr(env, fpr);
273
+
274
+ reg.addr = (uintptr_t)(&fpr);
275
+ reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpcr);
276
+ ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
277
+ if (ret) {
278
+ return ret;
279
+ }
280
+ vfp_set_fpcr(env, fpr);
281
+
282
ret = kvm_get_vcpu_events(cpu);
283
if (ret) {
284
return ret;
27
--
285
--
28
2.20.1
286
2.20.1
29
287
30
288
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Andrew Jones <drjones@redhat.com>
2
2
3
Make this a static function private to translate.c.
3
Enable SVE in the KVM guest when the 'max' cpu type is configured
4
Thus we can use the same idiom between aarch64 and aarch32
4
and KVM supports it. KVM SVE requires use of the new finalize
5
without actually sharing function implementations.
5
vcpu ioctl, so we add that now too. For starters, SVE can only be
6
6
turned on or off, getting all vector lengths the host CPU supports
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
when on. We'll add the other SVE CPU properties in later patches.
8
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
8
9
Message-id: 20190826151536.6771-3-richard.henderson@linaro.org
9
Signed-off-by: Andrew Jones <drjones@redhat.com>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Eric Auger <eric.auger@redhat.com>
12
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
13
Reviewed-by: Beata Michalska <beata.michalska@linaro.org>
14
Message-id: 20191031142734.8590-7-drjones@redhat.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
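
As a rough illustration of the ordering described above (not the patch's
code, which goes through QEMU's kvm_arm_vcpu_init() and
kvm_arm_vcpu_finalize() wrappers), a caller using the raw ioctls on an
aarch64 host with SVE-aware kernel headers might do something like the
following sketch; vcpu_enable_sve() is an invented name.

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Sketch only: enable and finalize SVE on an already-created vcpu fd. */
    static int vcpu_enable_sve(int vcpu_fd, struct kvm_vcpu_init *init)
    {
        int feature = KVM_ARM_VCPU_SVE;
        int ret;

        /* 1. Request SVE as a vcpu feature before KVM_ARM_VCPU_INIT. */
        init->features[0] |= 1 << KVM_ARM_VCPU_SVE;
        ret = ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init);
        if (ret < 0) {
            return ret;
        }

        /*
         * 2. Finalize the SVE configuration.  Until this is done the SVE
         *    registers cannot be accessed and the vcpu cannot run.
         */
        return ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
    }
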
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
16
---
12
target/arm/translate-vfp.inc.c | 3 +--
17
target/arm/kvm_arm.h | 27 +++++++++++++++++++++++++++
13
target/arm/translate.c | 22 ++++++++++++----------
18
target/arm/cpu64.c | 17 ++++++++++++++---
14
2 files changed, 13 insertions(+), 12 deletions(-)
19
target/arm/kvm.c | 5 +++++
15
20
target/arm/kvm64.c | 20 +++++++++++++++++++-
16
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
21
tests/arm-cpu-features.c | 4 ++++
17
index XXXXXXX..XXXXXXX 100644
22
5 files changed, 69 insertions(+), 4 deletions(-)
18
--- a/target/arm/translate-vfp.inc.c
23
19
+++ b/target/arm/translate-vfp.inc.c
24
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
20
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
25
index XXXXXXX..XXXXXXX 100644
21
26
--- a/target/arm/kvm_arm.h
22
if (!s->vfp_enabled && !ignore_vfp_enabled) {
27
+++ b/target/arm/kvm_arm.h
23
assert(!arm_dc_feature(s, ARM_FEATURE_M));
28
@@ -XXX,XX +XXX,XX @@
24
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
29
*/
25
- default_exception_el(s));
30
int kvm_arm_vcpu_init(CPUState *cs);
26
+ unallocated_encoding(s);
31
27
return false;
32
+/**
28
}
33
+ * kvm_arm_vcpu_finalize
29
34
+ * @cs: CPUState
30
diff --git a/target/arm/translate.c b/target/arm/translate.c
35
+ * @feature: int
31
index XXXXXXX..XXXXXXX 100644
36
+ *
32
--- a/target/arm/translate.c
37
+ * Finalizes the configuration of the specified VCPU feature by
33
+++ b/target/arm/translate.c
38
+ * invoking the KVM_ARM_VCPU_FINALIZE ioctl. Features requiring
34
@@ -XXX,XX +XXX,XX @@ static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syn)
39
+ * this are documented in the "KVM_ARM_VCPU_FINALIZE" section of
35
s->base.is_jmp = DISAS_NORETURN;
40
+ * KVM's API documentation.
36
}
41
+ *
37
42
+ * Returns: 0 if success else < 0 error code
38
+static void unallocated_encoding(DisasContext *s)
43
+ */
44
+int kvm_arm_vcpu_finalize(CPUState *cs, int feature);
45
+
46
/**
47
* kvm_arm_register_device:
48
* @mr: memory region for this device
49
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_aarch32_supported(CPUState *cs);
50
*/
51
bool kvm_arm_pmu_supported(CPUState *cs);
52
53
+/**
54
+ * bool kvm_arm_sve_supported:
55
+ * @cs: CPUState
56
+ *
57
+ * Returns true if the KVM VCPU can enable SVE and false otherwise.
58
+ */
59
+bool kvm_arm_sve_supported(CPUState *cs);
60
+
61
/**
62
* kvm_arm_get_max_vm_ipa_size - Returns the number of bits in the
63
* IPA address space supported by KVM
64
@@ -XXX,XX +XXX,XX @@ static inline bool kvm_arm_pmu_supported(CPUState *cs)
65
return false;
66
}
67
68
+static inline bool kvm_arm_sve_supported(CPUState *cs)
39
+{
69
+{
40
+ /* Unallocated and reserved encodings are uncategorized */
70
+ return false;
41
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
42
+ default_exception_el(s));
43
+}
71
+}
44
+
72
+
45
/* Force a TB lookup after an instruction that changes the CPU state. */
73
static inline int kvm_arm_get_max_vm_ipa_size(MachineState *ms)
46
static inline void gen_lookup_tb(DisasContext *s)
47
{
74
{
48
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
75
return -ENOENT;
76
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
77
index XXXXXXX..XXXXXXX 100644
78
--- a/target/arm/cpu64.c
79
+++ b/target/arm/cpu64.c
80
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, Visitor *v, const char *name,
49
return;
81
return;
50
}
82
}
51
83
52
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
84
+ if (value && kvm_enabled() && !kvm_arm_sve_supported(CPU(cpu))) {
53
- default_exception_el(s));
85
+ error_setg(errp, "'sve' feature not supported by KVM on this host");
54
+ unallocated_encoding(s);
86
+ return;
55
}
87
+ }
56
88
+
57
static inline void gen_add_data_offset(DisasContext *s, unsigned int insn,
89
t = cpu->isar.id_aa64pfr0;
58
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
90
t = FIELD_DP64(t, ID_AA64PFR0, SVE, value);
59
}
91
cpu->isar.id_aa64pfr0 = t;
60
92
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
61
if (undef) {
93
{
62
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
94
ARMCPU *cpu = ARM_CPU(obj);
63
- default_exception_el(s));
95
uint32_t vq;
64
+ unallocated_encoding(s);
96
+ uint64_t t;
65
return;
97
66
}
98
if (kvm_enabled()) {
67
99
kvm_arm_set_cpu_features_from_host(cpu);
68
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
100
+ if (kvm_arm_sve_supported(CPU(cpu))) {
69
break;
101
+ t = cpu->isar.id_aa64pfr0;
70
default:
102
+ t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
71
illegal_op:
103
+ cpu->isar.id_aa64pfr0 = t;
72
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
104
+ }
73
- default_exception_el(s));
105
} else {
74
+ unallocated_encoding(s);
106
- uint64_t t;
75
break;
107
uint32_t u;
108
aarch64_a57_initfn(obj);
109
110
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
111
112
object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq,
113
cpu_max_set_sve_max_vq, NULL, NULL, &error_fatal);
114
- object_property_add(obj, "sve", "bool", cpu_arm_get_sve,
115
- cpu_arm_set_sve, NULL, NULL, &error_fatal);
116
117
for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
118
char name[8];
119
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
120
cpu_arm_set_sve_vq, NULL, NULL, &error_fatal);
76
}
121
}
77
}
122
}
78
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
123
+
79
}
124
+ object_property_add(obj, "sve", "bool", cpu_arm_get_sve,
80
return;
125
+ cpu_arm_set_sve, NULL, NULL, &error_fatal);
81
illegal_op:
126
}
82
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
127
83
- default_exception_el(s));
128
struct ARMCPUInfo {
84
+ unallocated_encoding(s);
129
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
85
}
130
index XXXXXXX..XXXXXXX 100644
86
131
--- a/target/arm/kvm.c
87
static void disas_thumb_insn(DisasContext *s, uint32_t insn)
132
+++ b/target/arm/kvm.c
88
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
133
@@ -XXX,XX +XXX,XX @@ int kvm_arm_vcpu_init(CPUState *cs)
89
return;
134
return kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
90
illegal_op:
135
}
91
undef:
136
92
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
137
+int kvm_arm_vcpu_finalize(CPUState *cs, int feature)
93
- default_exception_el(s));
138
+{
94
+ unallocated_encoding(s);
139
+ return kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_FINALIZE, &feature);
95
}
140
+}
96
141
+
97
static bool insn_crosses_page(CPUARMState *env, DisasContext *s)
142
void kvm_arm_init_serror_injection(CPUState *cs)
143
{
144
cap_has_inject_serror_esr = kvm_check_extension(cs->kvm_state,
145
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
146
index XXXXXXX..XXXXXXX 100644
147
--- a/target/arm/kvm64.c
148
+++ b/target/arm/kvm64.c
149
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_aarch32_supported(CPUState *cpu)
150
return kvm_check_extension(s, KVM_CAP_ARM_EL1_32BIT);
151
}
152
153
+bool kvm_arm_sve_supported(CPUState *cpu)
154
+{
155
+ KVMState *s = KVM_STATE(current_machine->accelerator);
156
+
157
+ return kvm_check_extension(s, KVM_CAP_ARM_SVE);
158
+}
159
+
160
#define ARM_CPU_ID_MPIDR 3, 0, 0, 0, 5
161
162
int kvm_arch_init_vcpu(CPUState *cs)
163
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
164
cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_EL1_32BIT;
165
}
166
if (!kvm_check_extension(cs->kvm_state, KVM_CAP_ARM_PMU_V3)) {
167
- cpu->has_pmu = false;
168
+ cpu->has_pmu = false;
169
}
170
if (cpu->has_pmu) {
171
cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_PMU_V3;
172
} else {
173
unset_feature(&env->features, ARM_FEATURE_PMU);
174
}
175
+ if (cpu_isar_feature(aa64_sve, cpu)) {
176
+ assert(kvm_arm_sve_supported(cs));
177
+ cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_SVE;
178
+ }
179
180
/* Do KVM_ARM_VCPU_INIT ioctl */
181
ret = kvm_arm_vcpu_init(cs);
182
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
183
return ret;
184
}
185
186
+ if (cpu_isar_feature(aa64_sve, cpu)) {
187
+ ret = kvm_arm_vcpu_finalize(cs, KVM_ARM_VCPU_SVE);
188
+ if (ret) {
189
+ return ret;
190
+ }
191
+ }
192
+
193
/*
194
* When KVM is in use, PSCI is emulated in-kernel and not by qemu.
195
* Currently KVM has its own idea about MPIDR assignment, so we
196
diff --git a/tests/arm-cpu-features.c b/tests/arm-cpu-features.c
197
index XXXXXXX..XXXXXXX 100644
198
--- a/tests/arm-cpu-features.c
199
+++ b/tests/arm-cpu-features.c
200
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
201
assert_has_feature(qts, "host", "aarch64");
202
assert_has_feature(qts, "host", "pmu");
203
204
+ assert_has_feature(qts, "max", "sve");
205
+
206
assert_error(qts, "cortex-a15",
207
"We cannot guarantee the CPU type 'cortex-a15' works "
208
"with KVM on this host", NULL);
209
} else {
210
assert_has_not_feature(qts, "host", "aarch64");
211
assert_has_not_feature(qts, "host", "pmu");
212
+
213
+ assert_has_not_feature(qts, "max", "sve");
214
}
215
216
qtest_quit(qts);
98
--
217
--
99
2.20.1
218
2.20.1
100
219
101
220
diff view generated by jsdifflib
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Andrew Jones <drjones@redhat.com>
2
2
3
Log a guest error when encountering an invalid STE.
3
kvm_arm_create_scratch_host_vcpu() takes a struct kvm_vcpu_init
4
parameter. Rather than just using it as an output parameter to
5
pass back the preferred target, use it also as an input parameter,
6
allowing a caller to pass a selected target if they wish and to
7
also pass cpu features. If the caller doesn't want to select a
8
target, they can pass -1 for the target, which indicates they want
9
to use the preferred target and have it passed back like before.
4
10
5
Signed-off-by: Eric Auger <eric.auger@redhat.com>
11
Signed-off-by: Andrew Jones <drjones@redhat.com>
6
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20190822172350.12008-5-eric.auger@redhat.com
13
Reviewed-by: Eric Auger <eric.auger@redhat.com>
14
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
15
Reviewed-by: Beata Michalska <beata.michalska@linaro.org>
16
Message-id: 20191031142734.8590-8-drjones@redhat.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
18
---
10
hw/arm/smmuv3.c | 1 +
19
target/arm/kvm.c | 20 +++++++++++++++-----
11
1 file changed, 1 insertion(+)
20
target/arm/kvm32.c | 6 +++++-
21
target/arm/kvm64.c | 6 +++++-
22
3 files changed, 25 insertions(+), 7 deletions(-)
12
23
13
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
24
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
14
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/arm/smmuv3.c
26
--- a/target/arm/kvm.c
16
+++ b/hw/arm/smmuv3.c
27
+++ b/target/arm/kvm.c
17
@@ -XXX,XX +XXX,XX @@ static int decode_ste(SMMUv3State *s, SMMUTransCfg *cfg,
28
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
18
uint32_t config;
29
int *fdarray,
19
30
struct kvm_vcpu_init *init)
20
if (!STE_VALID(ste)) {
31
{
21
+ qemu_log_mask(LOG_GUEST_ERROR, "invalid STE\n");
32
- int ret, kvmfd = -1, vmfd = -1, cpufd = -1;
22
goto bad_ste;
33
+ int ret = 0, kvmfd = -1, vmfd = -1, cpufd = -1;
34
35
kvmfd = qemu_open("/dev/kvm", O_RDWR);
36
if (kvmfd < 0) {
37
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
38
goto finish;
23
}
39
}
24
40
41
- ret = ioctl(vmfd, KVM_ARM_PREFERRED_TARGET, init);
42
+ if (init->target == -1) {
43
+ struct kvm_vcpu_init preferred;
44
+
45
+ ret = ioctl(vmfd, KVM_ARM_PREFERRED_TARGET, &preferred);
46
+ if (!ret) {
47
+ init->target = preferred.target;
48
+ }
49
+ }
50
if (ret >= 0) {
51
ret = ioctl(cpufd, KVM_ARM_VCPU_INIT, init);
52
if (ret < 0) {
53
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
54
* creating one kind of guest CPU which is its preferred
55
* CPU type.
56
*/
57
+ struct kvm_vcpu_init try;
58
+
59
while (*cpus_to_try != QEMU_KVM_ARM_TARGET_NONE) {
60
- init->target = *cpus_to_try++;
61
- memset(init->features, 0, sizeof(init->features));
62
- ret = ioctl(cpufd, KVM_ARM_VCPU_INIT, init);
63
+ try.target = *cpus_to_try++;
64
+ memcpy(try.features, init->features, sizeof(init->features));
65
+ ret = ioctl(cpufd, KVM_ARM_VCPU_INIT, &try);
66
if (ret >= 0) {
67
break;
68
}
69
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
70
if (ret < 0) {
71
goto err;
72
}
73
+ init->target = try.target;
74
} else {
75
/* Treat a NULL cpus_to_try argument the same as an empty
76
* list, which means we will fail the call since this must
77
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
78
index XXXXXXX..XXXXXXX 100644
79
--- a/target/arm/kvm32.c
80
+++ b/target/arm/kvm32.c
81
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
82
QEMU_KVM_ARM_TARGET_CORTEX_A15,
83
QEMU_KVM_ARM_TARGET_NONE
84
};
85
- struct kvm_vcpu_init init;
86
+ /*
87
+ * target = -1 informs kvm_arm_create_scratch_host_vcpu()
88
+ * to use the preferred target
89
+ */
90
+ struct kvm_vcpu_init init = { .target = -1, };
91
92
if (!kvm_arm_create_scratch_host_vcpu(cpus_to_try, fdarray, &init)) {
93
return false;
94
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
95
index XXXXXXX..XXXXXXX 100644
96
--- a/target/arm/kvm64.c
97
+++ b/target/arm/kvm64.c
98
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
99
KVM_ARM_TARGET_CORTEX_A57,
100
QEMU_KVM_ARM_TARGET_NONE
101
};
102
- struct kvm_vcpu_init init;
103
+ /*
104
+ * target = -1 informs kvm_arm_create_scratch_host_vcpu()
105
+ * to use the preferred target
106
+ */
107
+ struct kvm_vcpu_init init = { .target = -1, };
108
109
if (!kvm_arm_create_scratch_host_vcpu(cpus_to_try, fdarray, &init)) {
110
return false;
25
--
111
--
26
2.20.1
112
2.20.1
27
113
28
114
diff view generated by jsdifflib
1
From: Andrew Jeffery <andrew@aj.id.au>
1
From: Andrew Jones <drjones@redhat.com>
2
2
3
First up: This is not the way the hardware behaves.
3
Extend the SVE vq map initialization and validation with KVM's
4
supported vector lengths when KVM is enabled. In order to determine
5
and select supported lengths we add two new KVM functions for getting
6
and setting the KVM_REG_ARM64_SVE_VLS pseudo-register.
4
7
5
However, it helps resolve real-world problems with short periods being
8
This patch has been co-authored with Richard Henderson, who reworked
6
used under Linux. Commit 4451d3f59f2a ("clocksource/drivers/fttmr010:
9
the target/arm/cpu64.c changes in order to push all the validation and
7
Fix set_next_event handler") in Linux fixed the timer driver to
10
auto-enabling/disabling steps into the finalizer, resulting in a nice
8
correctly schedule the next event for the Aspeed controller, and in
11
LOC reduction.
9
combination with 5daa8212c08e ("ARM: dts: aspeed: Describe random number
10
device") Linux will now set a timer with a period as low as 1us.
11
12
12
Configuring a qemu timer with such a short period results in spending
13
Signed-off-by: Andrew Jones <drjones@redhat.com>
13
time handling the interrupt in the model rather than executing guest
14
Reviewed-by: Eric Auger <eric.auger@redhat.com>
14
code, leading to noticeable "sticky" behaviour in the guest.
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
16
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
16
The behaviour of Linux is correct with respect to the hardware, so we
17
Message-id: 20191031142734.8590-9-drjones@redhat.com
17
need to improve our handling under emulation. The approach chosen is to
18
provide back-pressure information by calculating an acceptable minimum
19
number of ticks to be set on the model. Under Linux an additional read
20
is added in the timer configuration path to detect back-pressure, which
21
will never occur on hardware. However if back-pressure is observed, the
22
driver alerts the clock event subsystem, which then performs its own
23
next event dilation via a config option - d1748302f70b ("clockevents:
24
Make minimum delay adjustments configurable")
25
26
A minimum period of 5us was experimentally determined on a Lenovo
27
T480s, which I've increased to 20us for "safety".
28
29
Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
30
Reviewed-by: Joel Stanley <joel@jms.id.au>
31
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
32
Tested-by: Joel Stanley <joel@jms.id.au>
33
Signed-off-by: Cédric Le Goater <clg@kaod.org>
34
Message-id: 20190704055150.4899-1-clg@kaod.org
35
[clg: - changed the computation of min_ticks to be done each time the
36
timer value is reloaded. It removes the ordering issue of the
37
timer and scu reset handlers but is slightly slower ]
38
- introduced TIMER_MIN_NS
39
- introduced calculate_min_ticks() ]
40
Signed-off-by: Cédric Le Goater <clg@kaod.org>
41
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
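
To show what the KVM_REG_ARM64_SVE_VLS payload mentioned above looks like,
here is a small stand-alone sketch. The vls[] value is invented; a real
consumer reads the register with KVM_GET_ONE_REG after finalizing SVE.
Per KVM's API documentation, bit (vq - 1) of the 512-bit value is set when
vector length vq (in quadwords) is supported by the host.

    #include <stdint.h>
    #include <stdio.h>

    #define VLS_WORDS 8   /* 8 x 64 bits covers every possible vq */

    int main(void)
    {
        /* Example payload: a host supporting vq 1..4 (128 to 512 bits). */
        uint64_t vls[VLS_WORDS] = { 0xf };

        for (int word = 0; word < VLS_WORDS; word++) {
            for (int bit = 0; bit < 64; bit++) {
                if (vls[word] & ((uint64_t)1 << bit)) {
                    unsigned vq = word * 64 + bit + 1;
                    printf("vq %u supported (%u-bit vectors)\n", vq, vq * 128);
                }
            }
        }
        return 0;
    }
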
42
---
19
---
43
hw/timer/aspeed_timer.c | 17 ++++++++++++++++-
20
target/arm/kvm_arm.h | 12 +++
44
1 file changed, 16 insertions(+), 1 deletion(-)
21
target/arm/cpu64.c | 176 ++++++++++++++++++++++++++++----------
22
target/arm/kvm64.c | 100 +++++++++++++++++++++-
23
tests/arm-cpu-features.c | 104 +++++++++++++++++++++-
24
docs/arm-cpu-features.rst | 45 +++++++---
25
5 files changed, 379 insertions(+), 58 deletions(-)
45
26
46
diff --git a/hw/timer/aspeed_timer.c b/hw/timer/aspeed_timer.c
27
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
47
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
48
--- a/hw/timer/aspeed_timer.c
29
--- a/target/arm/kvm_arm.h
49
+++ b/hw/timer/aspeed_timer.c
30
+++ b/target/arm/kvm_arm.h
50
@@ -XXX,XX +XXX,XX @@ enum timer_ctrl_op {
31
@@ -XXX,XX +XXX,XX @@ typedef struct ARMHostCPUFeatures {
51
op_pulse_enable
32
*/
52
};
33
bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf);
53
34
54
+/*
35
+/**
55
+ * Minimum value of the reload register to filter out short period
36
+ * kvm_arm_sve_get_vls:
56
+ * timers which have a noticeable impact in emulation. 5us should be
37
+ * @cs: CPUState
57
+ * enough, use 20us for "safety".
38
+ * @map: bitmap to fill in
39
+ *
40
+ * Get all the SVE vector lengths supported by the KVM host, setting
41
+ * the bits corresponding to their length in quadwords minus one
42
+ * (vq - 1) in @map up to ARM_MAX_VQ.
58
+ */
43
+ */
59
+#define TIMER_MIN_NS (20 * SCALE_US)
44
+void kvm_arm_sve_get_vls(CPUState *cs, unsigned long *map);
60
+
45
+
61
/**
46
/**
62
* Avoid mutual references between AspeedTimerCtrlState and AspeedTimer
47
* kvm_arm_set_cpu_features_from_host:
63
* structs, as it's a waste of memory. The ptimer BH callback needs to know
48
* @cpu: ARMCPU to set the features for
64
@@ -XXX,XX +XXX,XX @@ static inline uint32_t calculate_ticks(struct AspeedTimer *t, uint64_t now_ns)
49
@@ -XXX,XX +XXX,XX @@ static inline int kvm_arm_vgic_probe(void)
65
return t->reload - MIN(t->reload, ticks);
50
static inline void kvm_arm_pmu_set_irq(CPUState *cs, int irq) {}
51
static inline void kvm_arm_pmu_init(CPUState *cs) {}
52
53
+static inline void kvm_arm_sve_get_vls(CPUState *cs, unsigned long *map) {}
54
#endif
55
56
static inline const char *gic_class_name(void)
57
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/cpu64.c
60
+++ b/target/arm/cpu64.c
61
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
62
* any of the above. Finally, if SVE is not disabled, then at least one
63
* vector length must be enabled.
64
*/
65
+ DECLARE_BITMAP(kvm_supported, ARM_MAX_VQ);
66
DECLARE_BITMAP(tmp, ARM_MAX_VQ);
67
uint32_t vq, max_vq = 0;
68
69
+ /* Collect the set of vector lengths supported by KVM. */
70
+ bitmap_zero(kvm_supported, ARM_MAX_VQ);
71
+ if (kvm_enabled() && kvm_arm_sve_supported(CPU(cpu))) {
72
+ kvm_arm_sve_get_vls(CPU(cpu), kvm_supported);
73
+ } else if (kvm_enabled()) {
74
+ assert(!cpu_isar_feature(aa64_sve, cpu));
75
+ }
76
+
77
/*
78
* Process explicit sve<N> properties.
79
* From the properties, sve_vq_map<N> implies sve_vq_init<N>.
80
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
81
return;
82
}
83
84
- /* Propagate enabled bits down through required powers-of-two. */
85
- for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
86
- if (!test_bit(vq - 1, cpu->sve_vq_init)) {
87
- set_bit(vq - 1, cpu->sve_vq_map);
88
+ if (kvm_enabled()) {
89
+ /*
90
+ * For KVM we have to automatically enable all supported uninitialized
91
+ * lengths, even when the smaller lengths are not all powers-of-two.
92
+ */
93
+ bitmap_andnot(tmp, kvm_supported, cpu->sve_vq_init, max_vq);
94
+ bitmap_or(cpu->sve_vq_map, cpu->sve_vq_map, tmp, max_vq);
95
+ } else {
96
+ /* Propagate enabled bits down through required powers-of-two. */
97
+ for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
98
+ if (!test_bit(vq - 1, cpu->sve_vq_init)) {
99
+ set_bit(vq - 1, cpu->sve_vq_map);
100
+ }
101
}
102
}
103
} else if (cpu->sve_max_vq == 0) {
104
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
105
return;
106
}
107
108
- /* Disabling a power-of-two disables all larger lengths. */
109
- if (test_bit(0, cpu->sve_vq_init)) {
110
- error_setg(errp, "cannot disable sve128");
111
- error_append_hint(errp, "Disabling sve128 results in all vector "
112
- "lengths being disabled.\n");
113
- error_append_hint(errp, "With SVE enabled, at least one vector "
114
- "length must be enabled.\n");
115
- return;
116
- }
117
- for (vq = 2; vq <= ARM_MAX_VQ; vq <<= 1) {
118
- if (test_bit(vq - 1, cpu->sve_vq_init)) {
119
- break;
120
+ if (kvm_enabled()) {
121
+ /* Disabling a supported length disables all larger lengths. */
122
+ for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
123
+ if (test_bit(vq - 1, cpu->sve_vq_init) &&
124
+ test_bit(vq - 1, kvm_supported)) {
125
+ break;
126
+ }
127
}
128
+ max_vq = vq <= ARM_MAX_VQ ? vq - 1 : ARM_MAX_VQ;
129
+ bitmap_andnot(cpu->sve_vq_map, kvm_supported,
130
+ cpu->sve_vq_init, max_vq);
131
+ if (max_vq == 0 || bitmap_empty(cpu->sve_vq_map, max_vq)) {
132
+ error_setg(errp, "cannot disable sve%d", vq * 128);
133
+ error_append_hint(errp, "Disabling sve%d results in all "
134
+ "vector lengths being disabled.\n",
135
+ vq * 128);
136
+ error_append_hint(errp, "With SVE enabled, at least one "
137
+ "vector length must be enabled.\n");
138
+ return;
139
+ }
140
+ } else {
141
+ /* Disabling a power-of-two disables all larger lengths. */
142
+ if (test_bit(0, cpu->sve_vq_init)) {
143
+ error_setg(errp, "cannot disable sve128");
144
+ error_append_hint(errp, "Disabling sve128 results in all "
145
+ "vector lengths being disabled.\n");
146
+ error_append_hint(errp, "With SVE enabled, at least one "
147
+ "vector length must be enabled.\n");
148
+ return;
149
+ }
150
+ for (vq = 2; vq <= ARM_MAX_VQ; vq <<= 1) {
151
+ if (test_bit(vq - 1, cpu->sve_vq_init)) {
152
+ break;
153
+ }
154
+ }
155
+ max_vq = vq <= ARM_MAX_VQ ? vq - 1 : ARM_MAX_VQ;
156
+ bitmap_complement(cpu->sve_vq_map, cpu->sve_vq_init, max_vq);
157
}
158
- max_vq = vq <= ARM_MAX_VQ ? vq - 1 : ARM_MAX_VQ;
159
160
- bitmap_complement(cpu->sve_vq_map, cpu->sve_vq_init, max_vq);
161
max_vq = find_last_bit(cpu->sve_vq_map, max_vq) + 1;
162
}
163
164
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
165
assert(max_vq != 0);
166
bitmap_clear(cpu->sve_vq_map, max_vq, ARM_MAX_VQ - max_vq);
167
168
- /* Ensure all required powers-of-two are enabled. */
169
- for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
170
- if (!test_bit(vq - 1, cpu->sve_vq_map)) {
171
- error_setg(errp, "cannot disable sve%d", vq * 128);
172
- error_append_hint(errp, "sve%d is required as it "
173
- "is a power-of-two length smaller than "
174
- "the maximum, sve%d\n",
175
- vq * 128, max_vq * 128);
176
+ if (kvm_enabled()) {
177
+ /* Ensure the set of lengths matches what KVM supports. */
178
+ bitmap_xor(tmp, cpu->sve_vq_map, kvm_supported, max_vq);
179
+ if (!bitmap_empty(tmp, max_vq)) {
180
+ vq = find_last_bit(tmp, max_vq) + 1;
181
+ if (test_bit(vq - 1, cpu->sve_vq_map)) {
182
+ if (cpu->sve_max_vq) {
183
+ error_setg(errp, "cannot set sve-max-vq=%d",
184
+ cpu->sve_max_vq);
185
+ error_append_hint(errp, "This KVM host does not support "
186
+ "the vector length %d-bits.\n",
187
+ vq * 128);
188
+ error_append_hint(errp, "It may not be possible to use "
189
+ "sve-max-vq with this KVM host. Try "
190
+ "using only sve<N> properties.\n");
191
+ } else {
192
+ error_setg(errp, "cannot enable sve%d", vq * 128);
193
+ error_append_hint(errp, "This KVM host does not support "
194
+ "the vector length %d-bits.\n",
195
+ vq * 128);
196
+ }
197
+ } else {
198
+ error_setg(errp, "cannot disable sve%d", vq * 128);
199
+ error_append_hint(errp, "The KVM host requires all "
200
+ "supported vector lengths smaller "
201
+ "than %d bits to also be enabled.\n",
202
+ max_vq * 128);
203
+ }
204
return;
205
}
206
+ } else {
207
+ /* Ensure all required powers-of-two are enabled. */
208
+ for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
209
+ if (!test_bit(vq - 1, cpu->sve_vq_map)) {
210
+ error_setg(errp, "cannot disable sve%d", vq * 128);
211
+ error_append_hint(errp, "sve%d is required as it "
212
+ "is a power-of-two length smaller than "
213
+ "the maximum, sve%d\n",
214
+ vq * 128, max_vq * 128);
215
+ return;
216
+ }
217
+ }
218
}
219
220
/*
221
@@ -XXX,XX +XXX,XX @@ static void cpu_max_set_sve_max_vq(Object *obj, Visitor *v, const char *name,
222
{
223
ARMCPU *cpu = ARM_CPU(obj);
224
Error *err = NULL;
225
+ uint32_t max_vq;
226
227
- visit_type_uint32(v, name, &cpu->sve_max_vq, &err);
228
-
229
- if (!err && (cpu->sve_max_vq == 0 || cpu->sve_max_vq > ARM_MAX_VQ)) {
230
- error_setg(&err, "unsupported SVE vector length");
231
- error_append_hint(&err, "Valid sve-max-vq in range [1-%d]\n",
232
- ARM_MAX_VQ);
233
+ visit_type_uint32(v, name, &max_vq, &err);
234
+ if (err) {
235
+ error_propagate(errp, err);
236
+ return;
237
}
238
- error_propagate(errp, err);
239
+
240
+ if (kvm_enabled() && !kvm_arm_sve_supported(CPU(cpu))) {
241
+ error_setg(errp, "cannot set sve-max-vq");
242
+ error_append_hint(errp, "SVE not supported by KVM on this host\n");
243
+ return;
244
+ }
245
+
246
+ if (max_vq == 0 || max_vq > ARM_MAX_VQ) {
247
+ error_setg(errp, "unsupported SVE vector length");
248
+ error_append_hint(errp, "Valid sve-max-vq in range [1-%d]\n",
249
+ ARM_MAX_VQ);
250
+ return;
251
+ }
252
+
253
+ cpu->sve_max_vq = max_vq;
66
}
254
}
67
255
68
+static uint32_t calculate_min_ticks(AspeedTimer *t, uint32_t value)
256
static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name,
257
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
258
return;
259
}
260
261
+ if (value && kvm_enabled() && !kvm_arm_sve_supported(CPU(cpu))) {
262
+ error_setg(errp, "cannot enable %s", name);
263
+ error_append_hint(errp, "SVE not supported by KVM on this host\n");
264
+ return;
265
+ }
266
+
267
if (value) {
268
set_bit(vq - 1, cpu->sve_vq_map);
269
} else {
270
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
271
cpu->ctr = 0x80038003; /* 32 byte I and D cacheline size, VIPT icache */
272
cpu->dcz_blocksize = 7; /* 512 bytes */
273
#endif
274
-
275
- object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq,
276
- cpu_max_set_sve_max_vq, NULL, NULL, &error_fatal);
277
-
278
- for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
279
- char name[8];
280
- sprintf(name, "sve%d", vq * 128);
281
- object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
282
- cpu_arm_set_sve_vq, NULL, NULL, &error_fatal);
283
- }
284
}
285
286
object_property_add(obj, "sve", "bool", cpu_arm_get_sve,
287
cpu_arm_set_sve, NULL, NULL, &error_fatal);
288
+ object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq,
289
+ cpu_max_set_sve_max_vq, NULL, NULL, &error_fatal);
290
+
291
+ for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
292
+ char name[8];
293
+ sprintf(name, "sve%d", vq * 128);
294
+ object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
295
+ cpu_arm_set_sve_vq, NULL, NULL, &error_fatal);
296
+ }
297
}
298
299
struct ARMCPUInfo {
300
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
301
index XXXXXXX..XXXXXXX 100644
302
--- a/target/arm/kvm64.c
303
+++ b/target/arm/kvm64.c
304
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_sve_supported(CPUState *cpu)
305
return kvm_check_extension(s, KVM_CAP_ARM_SVE);
306
}
307
308
+QEMU_BUILD_BUG_ON(KVM_ARM64_SVE_VQ_MIN != 1);
309
+
310
+void kvm_arm_sve_get_vls(CPUState *cs, unsigned long *map)
69
+{
311
+{
70
+ uint32_t rate = calculate_rate(t);
312
+ /* Only call this function if kvm_arm_sve_supported() returns true. */
71
+ uint32_t min_ticks = muldiv64(TIMER_MIN_NS, rate, NANOSECONDS_PER_SECOND);
313
+ static uint64_t vls[KVM_ARM64_SVE_VLS_WORDS];
72
+
314
+ static bool probed;
73
+ return value < min_ticks ? min_ticks : value;
315
+ uint32_t vq = 0;
316
+ int i, j;
317
+
318
+ bitmap_clear(map, 0, ARM_MAX_VQ);
319
+
320
+ /*
321
+ * KVM ensures all host CPUs support the same set of vector lengths.
322
+ * So we only need to create the scratch VCPUs once and then cache
323
+ * the results.
324
+ */
325
+ if (!probed) {
326
+ struct kvm_vcpu_init init = {
327
+ .target = -1,
328
+ .features[0] = (1 << KVM_ARM_VCPU_SVE),
329
+ };
330
+ struct kvm_one_reg reg = {
331
+ .id = KVM_REG_ARM64_SVE_VLS,
332
+ .addr = (uint64_t)&vls[0],
333
+ };
334
+ int fdarray[3], ret;
335
+
336
+ probed = true;
337
+
338
+ if (!kvm_arm_create_scratch_host_vcpu(NULL, fdarray, &init)) {
339
+ error_report("failed to create scratch VCPU with SVE enabled");
340
+ abort();
341
+ }
342
+ ret = ioctl(fdarray[2], KVM_GET_ONE_REG, &reg);
343
+ kvm_arm_destroy_scratch_host_vcpu(fdarray);
344
+ if (ret) {
345
+ error_report("failed to get KVM_REG_ARM64_SVE_VLS: %s",
346
+ strerror(errno));
347
+ abort();
348
+ }
349
+
350
+ for (i = KVM_ARM64_SVE_VLS_WORDS - 1; i >= 0; --i) {
351
+ if (vls[i]) {
352
+ vq = 64 - clz64(vls[i]) + i * 64;
353
+ break;
354
+ }
355
+ }
356
+ if (vq > ARM_MAX_VQ) {
357
+ warn_report("KVM supports vector lengths larger than "
358
+ "QEMU can enable");
359
+ }
360
+ }
361
+
362
+ for (i = 0; i < KVM_ARM64_SVE_VLS_WORDS; ++i) {
363
+ if (!vls[i]) {
364
+ continue;
365
+ }
366
+ for (j = 1; j <= 64; ++j) {
367
+ vq = j + i * 64;
368
+ if (vq > ARM_MAX_VQ) {
369
+ return;
370
+ }
371
+ if (vls[i] & (1UL << (j - 1))) {
372
+ set_bit(vq - 1, map);
373
+ }
374
+ }
375
+ }
74
+}
376
+}
75
+
377
+
76
static inline uint64_t calculate_time(struct AspeedTimer *t, uint32_t ticks)
378
+static int kvm_arm_sve_set_vls(CPUState *cs)
379
+{
380
+ uint64_t vls[KVM_ARM64_SVE_VLS_WORDS] = {0};
381
+ struct kvm_one_reg reg = {
382
+ .id = KVM_REG_ARM64_SVE_VLS,
383
+ .addr = (uint64_t)&vls[0],
384
+ };
385
+ ARMCPU *cpu = ARM_CPU(cs);
386
+ uint32_t vq;
387
+ int i, j;
388
+
389
+ assert(cpu->sve_max_vq <= KVM_ARM64_SVE_VQ_MAX);
390
+
391
+ for (vq = 1; vq <= cpu->sve_max_vq; ++vq) {
392
+ if (test_bit(vq - 1, cpu->sve_vq_map)) {
393
+ i = (vq - 1) / 64;
394
+ j = (vq - 1) % 64;
395
+ vls[i] |= 1UL << j;
396
+ }
397
+ }
398
+
399
+ return kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
400
+}
401
+
402
#define ARM_CPU_ID_MPIDR 3, 0, 0, 0, 5
403
404
int kvm_arch_init_vcpu(CPUState *cs)
405
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
406
407
if (cpu->kvm_target == QEMU_KVM_ARM_TARGET_NONE ||
408
!object_dynamic_cast(OBJECT(cpu), TYPE_AARCH64_CPU)) {
409
- fprintf(stderr, "KVM is not supported for this guest CPU type\n");
410
+ error_report("KVM is not supported for this guest CPU type");
411
return -EINVAL;
412
}
413
414
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
415
}
416
417
if (cpu_isar_feature(aa64_sve, cpu)) {
418
+ ret = kvm_arm_sve_set_vls(cs);
419
+ if (ret) {
420
+ return ret;
421
+ }
422
ret = kvm_arm_vcpu_finalize(cs, KVM_ARM_VCPU_SVE);
423
if (ret) {
424
return ret;
425
diff --git a/tests/arm-cpu-features.c b/tests/arm-cpu-features.c
426
index XXXXXXX..XXXXXXX 100644
427
--- a/tests/arm-cpu-features.c
428
+++ b/tests/arm-cpu-features.c
429
@@ -XXX,XX +XXX,XX @@ static QDict *resp_get_props(QDict *resp)
430
return qdict;
431
}
432
433
+static bool resp_get_feature(QDict *resp, const char *feature)
434
+{
435
+ QDict *props;
436
+
437
+ g_assert(resp);
438
+ g_assert(resp_has_props(resp));
439
+ props = resp_get_props(resp);
440
+ g_assert(qdict_get(props, feature));
441
+ return qdict_get_bool(props, feature);
442
+}
443
+
444
#define assert_has_feature(qts, cpu_type, feature) \
445
({ \
446
QDict *_resp = do_query_no_props(qts, cpu_type); \
447
@@ -XXX,XX +XXX,XX @@ static void sve_tests_sve_off(const void *data)
448
qtest_quit(qts);
449
}
450
451
+static void sve_tests_sve_off_kvm(const void *data)
452
+{
453
+ QTestState *qts;
454
+
455
+ qts = qtest_init(MACHINE_KVM "-cpu max,sve=off");
456
+
457
+ /*
458
+ * We don't know if this host supports SVE so we don't
459
+ * attempt to test enabling anything. We only test that
460
+ * everything is disabled (as it should be with sve=off)
461
+ * and that using sve<N>=off to explicitly disable vector
462
+ * lengths is OK too.
463
+ */
464
+ assert_sve_vls(qts, "max", 0, NULL);
465
+ assert_sve_vls(qts, "max", 0, "{ 'sve128': false }");
466
+
467
+ qtest_quit(qts);
468
+}
469
+
470
static void test_query_cpu_model_expansion(const void *data)
77
{
471
{
78
uint64_t delta_ns;
472
QTestState *qts;
79
@@ -XXX,XX +XXX,XX @@ static void aspeed_timer_set_value(AspeedTimerCtrlState *s, int timer, int reg,
473
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
80
switch (reg) {
474
}
81
case TIMER_REG_RELOAD:
475
82
old_reload = t->reload;
476
if (g_str_equal(qtest_get_arch(), "aarch64")) {
83
- t->reload = value;
477
+ bool kvm_supports_sve;
84
+ t->reload = calculate_min_ticks(t, value);
478
+ char max_name[8], name[8];
85
479
+ uint32_t max_vq, vq;
86
/* If the reload value was not previously set, or zero, and
480
+ uint64_t vls;
87
* the current value is valid, try to start the timer if it is
481
+ QDict *resp;
482
+ char *error;
483
+
484
assert_has_feature(qts, "host", "aarch64");
485
assert_has_feature(qts, "host", "pmu");
486
487
- assert_has_feature(qts, "max", "sve");
488
-
489
assert_error(qts, "cortex-a15",
490
"We cannot guarantee the CPU type 'cortex-a15' works "
491
"with KVM on this host", NULL);
492
+
493
+ assert_has_feature(qts, "max", "sve");
494
+ resp = do_query_no_props(qts, "max");
495
+ kvm_supports_sve = resp_get_feature(resp, "sve");
496
+ vls = resp_get_sve_vls(resp);
497
+ qobject_unref(resp);
498
+
499
+ if (kvm_supports_sve) {
500
+ g_assert(vls != 0);
501
+ max_vq = 64 - __builtin_clzll(vls);
502
+ sprintf(max_name, "sve%d", max_vq * 128);
503
+
504
+ /* Enabling a supported length is of course fine. */
505
+ assert_sve_vls(qts, "max", vls, "{ %s: true }", max_name);
506
+
507
+ /* Get the next supported length smaller than max-vq. */
508
+ vq = 64 - __builtin_clzll(vls & ~BIT_ULL(max_vq - 1));
509
+ if (vq) {
510
+ /*
511
+ * We have at least one length smaller than max-vq,
512
+ * so we can disable max-vq.
513
+ */
514
+ assert_sve_vls(qts, "max", (vls & ~BIT_ULL(max_vq - 1)),
515
+ "{ %s: false }", max_name);
516
+
517
+ /*
518
+ * Smaller, supported vector lengths cannot be disabled
519
+ * unless all larger, supported vector lengths are also
520
+ * disabled.
521
+ */
522
+ sprintf(name, "sve%d", vq * 128);
523
+ error = g_strdup_printf("cannot disable %s", name);
524
+ assert_error(qts, "max", error,
525
+ "{ %s: true, %s: false }",
526
+ max_name, name);
527
+ g_free(error);
528
+ }
529
+
530
+ /*
531
+ * The smallest, supported vector length is required, because
532
+ * we need at least one vector length enabled.
533
+ */
534
+ vq = __builtin_ffsll(vls);
535
+ sprintf(name, "sve%d", vq * 128);
536
+ error = g_strdup_printf("cannot disable %s", name);
537
+ assert_error(qts, "max", error, "{ %s: false }", name);
538
+ g_free(error);
539
+
540
+ /* Get an unsupported length. */
541
+ for (vq = 1; vq <= max_vq; ++vq) {
542
+ if (!(vls & BIT_ULL(vq - 1))) {
543
+ break;
544
+ }
545
+ }
546
+ if (vq <= SVE_MAX_VQ) {
547
+ sprintf(name, "sve%d", vq * 128);
548
+ error = g_strdup_printf("cannot enable %s", name);
549
+ assert_error(qts, "max", error, "{ %s: true }", name);
550
+ g_free(error);
551
+ }
552
+ } else {
553
+ g_assert(vls == 0);
554
+ }
555
} else {
556
assert_has_not_feature(qts, "host", "aarch64");
557
assert_has_not_feature(qts, "host", "pmu");
558
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
559
NULL, sve_tests_sve_max_vq_8);
560
qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-off",
561
NULL, sve_tests_sve_off);
562
+ qtest_add_data_func("/arm/kvm/query-cpu-model-expansion/sve-off",
563
+ NULL, sve_tests_sve_off_kvm);
564
}
565
566
return g_test_run();
567
diff --git a/docs/arm-cpu-features.rst b/docs/arm-cpu-features.rst
568
index XXXXXXX..XXXXXXX 100644
569
--- a/docs/arm-cpu-features.rst
570
+++ b/docs/arm-cpu-features.rst
571
@@ -XXX,XX +XXX,XX @@ SVE CPU Property Dependencies and Constraints
572
573
1) At least one vector length must be enabled when `sve` is enabled.
574
575
- 2) If a vector length `N` is enabled, then all power-of-two vector
576
- lengths smaller than `N` must also be enabled. E.g. if `sve512`
577
- is enabled, then the 128-bit and 256-bit vector lengths must also
578
- be enabled.
579
+ 2) If a vector length `N` is enabled, then, when KVM is enabled, all
580
+ smaller, host supported vector lengths must also be enabled. If
581
+ KVM is not enabled, then only all the smaller, power-of-two vector
582
+ lengths must be enabled. E.g. with KVM if the host supports all
583
+ vector lengths up to 512-bits (128, 256, 384, 512), then if `sve512`
584
+ is enabled, the 128-bit vector length, 256-bit vector length, and
585
+ 384-bit vector length must also be enabled. Without KVM, the 384-bit
586
+ vector length would not be required.
587
+
588
+ 3) If KVM is enabled then only vector lengths that the host CPU type
589
+ supports may be enabled. If SVE is not supported by the host, then
590
+ no `sve*` properties may be enabled.
591
592
SVE CPU Property Parsing Semantics
593
----------------------------------
594
@@ -XXX,XX +XXX,XX @@ SVE CPU Property Parsing Semantics
595
an error is generated.
596
597
2) If SVE is enabled (`sve=on`), but no `sve<N>` CPU properties are
598
- provided, then all supported vector lengths are enabled, including
599
- the non-power-of-two lengths.
600
+ provided, then all supported vector lengths are enabled, which when
601
+ KVM is not in use means including the non-power-of-two lengths, and,
602
+ when KVM is in use, it means all vector lengths supported by the host
603
+ processor.
604
605
3) If SVE is enabled, then an error is generated when attempting to
606
disable the last enabled vector length (see constraint (1) of "SVE
607
@@ -XXX,XX +XXX,XX @@ SVE CPU Property Parsing Semantics
608
has been explicitly disabled, then an error is generated (see
609
constraint (2) of "SVE CPU Property Dependencies and Constraints").
610
611
- 5) If one or more `sve<N>` CPU properties are set `off`, but no `sve<N>`,
612
+ 5) When KVM is enabled, if the host does not support SVE, then an error
613
+ is generated when attempting to enable any `sve*` properties (see
614
+ constraint (3) of "SVE CPU Property Dependencies and Constraints").
615
+
616
+ 6) When KVM is enabled, if the host does support SVE, then an error is
617
+ generated when attempting to enable any vector lengths not supported
618
+ by the host (see constraint (3) of "SVE CPU Property Dependencies and
619
+ Constraints").
620
+
621
+ 7) If one or more `sve<N>` CPU properties are set `off`, but no `sve<N>`,
622
CPU properties are set `on`, then the specified vector lengths are
623
disabled but the default for any unspecified lengths remains enabled.
624
- Disabling a power-of-two vector length also disables all vector
625
- lengths larger than the power-of-two length (see constraint (2) of
626
- "SVE CPU Property Dependencies and Constraints").
627
+ When KVM is not enabled, disabling a power-of-two vector length also
628
+ disables all vector lengths larger than the power-of-two length.
629
+ When KVM is enabled, then disabling any supported vector length also
630
+ disables all larger vector lengths (see constraint (2) of "SVE CPU
631
+ Property Dependencies and Constraints").
632
633
- 6) If one or more `sve<N>` CPU properties are set to `on`, then they
634
+ 8) If one or more `sve<N>` CPU properties are set to `on`, then they
635
are enabled and all unspecified lengths default to disabled, except
636
for the required lengths per constraint (2) of "SVE CPU Property
637
Dependencies and Constraints", which will even be auto-enabled if
638
they were not explicitly enabled.
639
640
- 7) If SVE was disabled (`sve=off`), allowing all vector lengths to be
641
+ 9) If SVE was disabled (`sve=off`), allowing all vector lengths to be
642
explicitly disabled (i.e. avoiding the error specified in (3) of
643
"SVE CPU Property Parsing Semantics"), then if later an `sve=on` is
644
provided an error will be generated. To avoid this error, one must
88
--
645
--
89
2.20.1
646
2.20.1
90
647
91
648
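A note on the vector-length bitmap convention used by the
kvm_arm_sve_get_vls() and kvm_arm_sve_set_vls() helpers above (the concrete
value below is only an example): bit (vq - 1) of KVM_REG_ARM64_SVE_VLS
corresponds to a vector length of vq quadwords, i.e. vq * 128 bits. A host
that reported

    vls[0] = 0x7    /* bits 0, 1 and 2 set */

would therefore support the 128-bit, 256-bit and 384-bit vector lengths,
and the decoding loop above would set bits 0..2 of the resulting vq map
accordingly.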
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Andrew Jones <drjones@redhat.com>
2
2
3
This reverts commit 3cb36637157088892e9e33ddb1034bffd1251d3b.
3
Allow cpu 'host' to enable SVE when it's available, unless the
4
4
user chooses to disable it with the added 'sve=off' cpu property.
5
Despite the fact that the text for the call to gen_exception_insn
5
Also give the user the ability to select vector lengths with the
6
is identical for aarch64 and aarch32, the implementation inside
6
sve<N> properties. We don't adopt 'max' cpu's other sve property,
7
gen_exception_insn is totally different.
7
sve-max-vq, because that property is difficult to use with KVM.
8
8
That property assumes all vector lengths in the range from 1 up
9
This fixes exceptions raised from aarch64.
9
to and including the specified maximum length are supported, but
10
10
there may be optional lengths not supported by the host in that
11
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
11
range. With KVM one must be more specific when enabling vector
12
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
lengths.
13
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
13
14
Message-id: 20190826151536.6771-2-richard.henderson@linaro.org
14
Signed-off-by: Andrew Jones <drjones@redhat.com>
15
Reviewed-by: Eric Auger <eric.auger@redhat.com>
16
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
17
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
18
Message-id: 20191031142734.8590-10-drjones@redhat.com
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
20
---
17
target/arm/translate-a64.h | 2 ++
21
target/arm/cpu.h | 2 ++
18
target/arm/translate.h | 2 --
22
target/arm/cpu.c | 3 +++
19
target/arm/translate-a64.c | 7 +++++++
23
target/arm/cpu64.c | 33 +++++++++++++++++----------------
20
target/arm/translate-vfp.inc.c | 3 ++-
24
target/arm/kvm64.c | 14 +++++++++++++-
21
target/arm/translate.c | 22 ++++++++++------------
25
tests/arm-cpu-features.c | 17 ++++++++---------
22
5 files changed, 21 insertions(+), 15 deletions(-)
26
docs/arm-cpu-features.rst | 19 ++++++++++++-------
23
27
6 files changed, 55 insertions(+), 33 deletions(-)
24
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
28
25
index XXXXXXX..XXXXXXX 100644
29
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
26
--- a/target/arm/translate-a64.h
30
index XXXXXXX..XXXXXXX 100644
27
+++ b/target/arm/translate-a64.h
31
--- a/target/arm/cpu.h
28
@@ -XXX,XX +XXX,XX @@
32
+++ b/target/arm/cpu.h
29
#ifndef TARGET_ARM_TRANSLATE_A64_H
33
@@ -XXX,XX +XXX,XX @@ int aarch64_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
30
#define TARGET_ARM_TRANSLATE_A64_H
34
void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq);
31
35
void aarch64_sve_change_el(CPUARMState *env, int old_el,
32
+void unallocated_encoding(DisasContext *s);
36
int new_el, bool el0_a64);
33
+
37
+void aarch64_add_sve_properties(Object *obj);
34
#define unsupported_encoding(s, insn) \
38
#else
35
do { \
39
static inline void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq) { }
36
qemu_log_mask(LOG_UNIMP, \
40
static inline void aarch64_sve_change_el(CPUARMState *env, int o,
37
diff --git a/target/arm/translate.h b/target/arm/translate.h
41
int n, bool a)
38
index XXXXXXX..XXXXXXX 100644
42
{ }
39
--- a/target/arm/translate.h
43
+static inline void aarch64_add_sve_properties(Object *obj) { }
40
+++ b/target/arm/translate.h
44
#endif
41
@@ -XXX,XX +XXX,XX @@ typedef struct DisasCompare {
45
42
bool value_global;
46
#if !defined(CONFIG_TCG)
43
} DisasCompare;
47
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
44
48
index XXXXXXX..XXXXXXX 100644
45
-void unallocated_encoding(DisasContext *s);
49
--- a/target/arm/cpu.c
50
+++ b/target/arm/cpu.c
51
@@ -XXX,XX +XXX,XX @@ static void arm_host_initfn(Object *obj)
52
ARMCPU *cpu = ARM_CPU(obj);
53
54
kvm_arm_set_cpu_features_from_host(cpu);
55
+ if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
56
+ aarch64_add_sve_properties(obj);
57
+ }
58
arm_cpu_post_init(obj);
59
}
60
61
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/cpu64.c
64
+++ b/target/arm/cpu64.c
65
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, Visitor *v, const char *name,
66
cpu->isar.id_aa64pfr0 = t;
67
}
68
69
+void aarch64_add_sve_properties(Object *obj)
70
+{
71
+ uint32_t vq;
72
+
73
+ object_property_add(obj, "sve", "bool", cpu_arm_get_sve,
74
+ cpu_arm_set_sve, NULL, NULL, &error_fatal);
75
+
76
+ for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
77
+ char name[8];
78
+ sprintf(name, "sve%d", vq * 128);
79
+ object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
80
+ cpu_arm_set_sve_vq, NULL, NULL, &error_fatal);
81
+ }
82
+}
83
+
84
/* -cpu max: if KVM is enabled, like -cpu host (best possible with this host);
85
* otherwise, a CPU with as many features enabled as our emulation supports.
86
* The version of '-cpu max' for qemu-system-arm is defined in cpu.c;
87
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, Visitor *v, const char *name,
88
static void aarch64_max_initfn(Object *obj)
89
{
90
ARMCPU *cpu = ARM_CPU(obj);
91
- uint32_t vq;
92
- uint64_t t;
93
94
if (kvm_enabled()) {
95
kvm_arm_set_cpu_features_from_host(cpu);
96
- if (kvm_arm_sve_supported(CPU(cpu))) {
97
- t = cpu->isar.id_aa64pfr0;
98
- t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
99
- cpu->isar.id_aa64pfr0 = t;
100
- }
101
} else {
102
+ uint64_t t;
103
uint32_t u;
104
aarch64_a57_initfn(obj);
105
106
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
107
#endif
108
}
109
110
- object_property_add(obj, "sve", "bool", cpu_arm_get_sve,
111
- cpu_arm_set_sve, NULL, NULL, &error_fatal);
112
+ aarch64_add_sve_properties(obj);
113
object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq,
114
cpu_max_set_sve_max_vq, NULL, NULL, &error_fatal);
46
-
115
-
47
/* Share the TCG temporaries common between 32 and 64 bit modes. */
116
- for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
48
extern TCGv_i32 cpu_NF, cpu_ZF, cpu_CF, cpu_VF;
117
- char name[8];
49
extern TCGv_i64 cpu_exclusive_addr;
118
- sprintf(name, "sve%d", vq * 128);
50
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
119
- object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
51
index XXXXXXX..XXXXXXX 100644
120
- cpu_arm_set_sve_vq, NULL, NULL, &error_fatal);
52
--- a/target/arm/translate-a64.c
121
- }
53
+++ b/target/arm/translate-a64.c
54
@@ -XXX,XX +XXX,XX @@ static inline void gen_goto_tb(DisasContext *s, int n, uint64_t dest)
55
}
56
}
122
}
57
123
58
+void unallocated_encoding(DisasContext *s)
124
struct ARMCPUInfo {
59
+{
125
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
60
+ /* Unallocated and reserved encodings are uncategorized */
126
index XXXXXXX..XXXXXXX 100644
61
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
127
--- a/target/arm/kvm64.c
62
+ default_exception_el(s));
128
+++ b/target/arm/kvm64.c
63
+}
129
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
64
+
130
* and then query that CPU for the relevant ID registers.
65
static void init_tmp_a64_array(DisasContext *s)
131
*/
66
{
132
int fdarray[3];
67
#ifdef CONFIG_DEBUG_TCG
133
+ bool sve_supported;
68
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
134
uint64_t features = 0;
69
index XXXXXXX..XXXXXXX 100644
135
+ uint64_t t;
70
--- a/target/arm/translate-vfp.inc.c
136
int err;
71
+++ b/target/arm/translate-vfp.inc.c
137
72
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
138
/* Old kernels may not know about the PREFERRED_TARGET ioctl: however
73
139
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
74
if (!s->vfp_enabled && !ignore_vfp_enabled) {
140
ARM64_SYS_REG(3, 0, 0, 3, 2));
75
assert(!arm_dc_feature(s, ARM_FEATURE_M));
141
}
76
- unallocated_encoding(s);
142
77
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
143
+ sve_supported = ioctl(fdarray[0], KVM_CHECK_EXTENSION, KVM_CAP_ARM_SVE) > 0;
78
+ default_exception_el(s));
144
+
145
kvm_arm_destroy_scratch_host_vcpu(fdarray);
146
147
if (err < 0) {
79
return false;
148
return false;
80
}
149
}
81
150
82
diff --git a/target/arm/translate.c b/target/arm/translate.c
151
- /* We can assume any KVM supporting CPU is at least a v8
83
index XXXXXXX..XXXXXXX 100644
152
+ /* Add feature bits that can't appear until after VCPU init. */
84
--- a/target/arm/translate.c
153
+ if (sve_supported) {
85
+++ b/target/arm/translate.c
154
+ t = ahcf->isar.id_aa64pfr0;
86
@@ -XXX,XX +XXX,XX @@ static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syn)
155
+ t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
87
s->base.is_jmp = DISAS_NORETURN;
156
+ ahcf->isar.id_aa64pfr0 = t;
88
}
157
+ }
89
158
+
90
-void unallocated_encoding(DisasContext *s)
159
+ /*
91
-{
160
+ * We can assume any KVM supporting CPU is at least a v8
92
- /* Unallocated and reserved encodings are uncategorized */
161
* with VFPv4+Neon; this in turn implies most of the other
93
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
162
* feature bits.
94
- default_exception_el(s));
163
*/
95
-}
164
diff --git a/tests/arm-cpu-features.c b/tests/arm-cpu-features.c
165
index XXXXXXX..XXXXXXX 100644
166
--- a/tests/arm-cpu-features.c
167
+++ b/tests/arm-cpu-features.c
168
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
169
"We cannot guarantee the CPU type 'cortex-a15' works "
170
"with KVM on this host", NULL);
171
172
- assert_has_feature(qts, "max", "sve");
173
- resp = do_query_no_props(qts, "max");
174
+ assert_has_feature(qts, "host", "sve");
175
+ resp = do_query_no_props(qts, "host");
176
kvm_supports_sve = resp_get_feature(resp, "sve");
177
vls = resp_get_sve_vls(resp);
178
qobject_unref(resp);
179
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
180
sprintf(max_name, "sve%d", max_vq * 128);
181
182
/* Enabling a supported length is of course fine. */
183
- assert_sve_vls(qts, "max", vls, "{ %s: true }", max_name);
184
+ assert_sve_vls(qts, "host", vls, "{ %s: true }", max_name);
185
186
/* Get the next supported length smaller than max-vq. */
187
vq = 64 - __builtin_clzll(vls & ~BIT_ULL(max_vq - 1));
188
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
189
* We have at least one length smaller than max-vq,
190
* so we can disable max-vq.
191
*/
192
- assert_sve_vls(qts, "max", (vls & ~BIT_ULL(max_vq - 1)),
193
+ assert_sve_vls(qts, "host", (vls & ~BIT_ULL(max_vq - 1)),
194
"{ %s: false }", max_name);
195
196
/*
197
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
198
*/
199
sprintf(name, "sve%d", vq * 128);
200
error = g_strdup_printf("cannot disable %s", name);
201
- assert_error(qts, "max", error,
202
+ assert_error(qts, "host", error,
203
"{ %s: true, %s: false }",
204
max_name, name);
205
g_free(error);
206
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
207
vq = __builtin_ffsll(vls);
208
sprintf(name, "sve%d", vq * 128);
209
error = g_strdup_printf("cannot disable %s", name);
210
- assert_error(qts, "max", error, "{ %s: false }", name);
211
+ assert_error(qts, "host", error, "{ %s: false }", name);
212
g_free(error);
213
214
/* Get an unsupported length. */
215
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
216
if (vq <= SVE_MAX_VQ) {
217
sprintf(name, "sve%d", vq * 128);
218
error = g_strdup_printf("cannot enable %s", name);
219
- assert_error(qts, "max", error, "{ %s: true }", name);
220
+ assert_error(qts, "host", error, "{ %s: true }", name);
221
g_free(error);
222
}
223
} else {
224
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
225
} else {
226
assert_has_not_feature(qts, "host", "aarch64");
227
assert_has_not_feature(qts, "host", "pmu");
96
-
228
-
97
/* Force a TB lookup after an instruction that changes the CPU state. */
229
- assert_has_not_feature(qts, "max", "sve");
98
static inline void gen_lookup_tb(DisasContext *s)
230
+ assert_has_not_feature(qts, "host", "sve");
99
{
231
}
100
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
232
101
return;
233
qtest_quit(qts);
102
}
234
diff --git a/docs/arm-cpu-features.rst b/docs/arm-cpu-features.rst
103
235
index XXXXXXX..XXXXXXX 100644
104
- unallocated_encoding(s);
236
--- a/docs/arm-cpu-features.rst
105
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
237
+++ b/docs/arm-cpu-features.rst
106
+ default_exception_el(s));
238
@@ -XXX,XX +XXX,XX @@ SVE CPU Property Examples
107
}
239
108
240
$ qemu-system-aarch64 -M virt -cpu max
109
static inline void gen_add_data_offset(DisasContext *s, unsigned int insn,
241
110
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
242
- 3) Only enable the 128-bit vector length::
111
}
243
+ 3) When KVM is enabled, implicitly enable all host CPU supported vector
112
244
+ lengths with the `host` CPU type::
113
if (undef) {
245
+
114
- unallocated_encoding(s);
246
+ $ qemu-system-aarch64 -M virt,accel=kvm -cpu host
115
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
247
+
116
+ default_exception_el(s));
248
+ 4) Only enable the 128-bit vector length::
117
return;
249
118
}
250
$ qemu-system-aarch64 -M virt -cpu max,sve128=on
119
251
120
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
252
- 4) Disable the 512-bit vector length and all larger vector lengths,
121
break;
253
+ 5) Disable the 512-bit vector length and all larger vector lengths,
122
default:
254
since 512 is a power-of-two. This results in all the smaller,
123
illegal_op:
255
uninitialized lengths (128, 256, and 384) defaulting to enabled::
124
- unallocated_encoding(s);
256
125
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
257
$ qemu-system-aarch64 -M virt -cpu max,sve512=off
126
+ default_exception_el(s));
258
127
break;
259
- 5) Enable the 128-bit, 256-bit, and 512-bit vector lengths::
128
}
260
+ 6) Enable the 128-bit, 256-bit, and 512-bit vector lengths::
129
}
261
130
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
262
$ qemu-system-aarch64 -M virt -cpu max,sve128=on,sve256=on,sve512=on
131
}
263
132
return;
264
- 6) The same as (5), but since the 128-bit and 256-bit vector
133
illegal_op:
265
+ 7) The same as (6), but since the 128-bit and 256-bit vector
134
- unallocated_encoding(s);
266
lengths are required for the 512-bit vector length to be enabled,
135
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
267
then allow them to be auto-enabled::
136
+ default_exception_el(s));
268
137
}
269
$ qemu-system-aarch64 -M virt -cpu max,sve512=on
138
270
139
static void disas_thumb_insn(DisasContext *s, uint32_t insn)
271
- 7) Do the same as (6), but by first disabling SVE and then re-enabling it::
140
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
272
+ 8) Do the same as (7), but by first disabling SVE and then re-enabling it::
141
return;
273
142
illegal_op:
274
$ qemu-system-aarch64 -M virt -cpu max,sve=off,sve512=on,sve=on
143
undef:
275
144
- unallocated_encoding(s);
276
- 8) Force errors regarding the last vector length::
145
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
277
+ 9) Force errors regarding the last vector length::
146
+ default_exception_el(s));
278
147
}
279
$ qemu-system-aarch64 -M virt -cpu max,sve128=off
148
280
$ qemu-system-aarch64 -M virt -cpu max,sve=off,sve128=off,sve=on
149
static bool insn_crosses_page(CPUARMState *env, DisasContext *s)
281
@@ -XXX,XX +XXX,XX @@ The examples in "SVE CPU Property Examples" exhibit many ways to select
282
vector lengths which developers may find useful in order to avoid overly
283
verbose command lines. However, the recommended way to select vector
284
lengths is to explicitly enable each desired length. Therefore only
285
-example's (1), (3), and (5) exhibit recommended uses of the properties.
286
+example's (1), (4), and (6) exhibit recommended uses of the properties.
287
150
--
288
--
151
2.20.1
289
2.20.1
152
290
153
291
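Two illustrative invocations of the result (whether SVE actually gets
enabled of course depends on the host):

    $ qemu-system-aarch64 -M virt,accel=kvm -cpu host
    $ qemu-system-aarch64 -M virt,accel=kvm -cpu host,sve=off

The first enables every vector length the host supports; the second uses
the new 'sve' property to disable SVE entirely. Individual sve<N>
properties can also be given with '-cpu host', but only for vector lengths
the host actually supports.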
Deleted patch
1
The translation table walk for an ATS instruction can result in
2
various faults. In general these are just reported back via the
3
PAR_EL1 fault status fields, but in some cases the architecture
4
requires that the fault is turned into an exception:
5
* synchronous stage 2 faults of any kind during AT S1E0* and
6
AT S1E1* instructions executed from NS EL1 fault to EL2 or EL3
7
* synchronous external aborts are taken as Data Abort exceptions
8
1
9
(This is documented in the v8A Arm ARM DDI0487A.e D5.2.11 and
10
G5.13.4.)
11
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
15
Message-id: 20190816125802.25877-3-peter.maydell@linaro.org
16
---
17
target/arm/helper.c | 107 +++++++++++++++++++++++++++++++++++++-------
18
1 file changed, 92 insertions(+), 15 deletions(-)
19
20
diff --git a/target/arm/helper.c b/target/arm/helper.c
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.c
23
+++ b/target/arm/helper.c
24
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
25
ret = get_phys_addr(env, value, access_type, mmu_idx, &phys_addr, &attrs,
26
&prot, &page_size, &fi, &cacheattrs);
27
28
+ if (ret) {
29
+ /*
30
+ * Some kinds of translation fault must cause exceptions rather
31
+ * than being reported in the PAR.
32
+ */
33
+ int current_el = arm_current_el(env);
34
+ int target_el;
35
+ uint32_t syn, fsr, fsc;
36
+ bool take_exc = false;
37
+
38
+ if (fi.s1ptw && current_el == 1 && !arm_is_secure(env)
39
+ && (mmu_idx == ARMMMUIdx_S1NSE1 || mmu_idx == ARMMMUIdx_S1NSE0)) {
40
+ /*
41
+ * Synchronous stage 2 fault on an access made as part of the
42
+ * translation table walk for AT S1E0* or AT S1E1* insn
43
+ * executed from NS EL1. If this is a synchronous external abort
44
+ * and SCR_EL3.EA == 1, then we take a synchronous external abort
45
+ * to EL3. Otherwise the fault is taken as an exception to EL2,
46
+ * and HPFAR_EL2 holds the faulting IPA.
47
+ */
48
+ if (fi.type == ARMFault_SyncExternalOnWalk &&
49
+ (env->cp15.scr_el3 & SCR_EA)) {
50
+ target_el = 3;
51
+ } else {
52
+ env->cp15.hpfar_el2 = extract64(fi.s2addr, 12, 47) << 4;
53
+ target_el = 2;
54
+ }
55
+ take_exc = true;
56
+ } else if (fi.type == ARMFault_SyncExternalOnWalk) {
57
+ /*
58
+ * Synchronous external aborts during a translation table walk
59
+ * are taken as Data Abort exceptions.
60
+ */
61
+ if (fi.stage2) {
62
+ if (current_el == 3) {
63
+ target_el = 3;
64
+ } else {
65
+ target_el = 2;
66
+ }
67
+ } else {
68
+ target_el = exception_target_el(env);
69
+ }
70
+ take_exc = true;
71
+ }
72
+
73
+ if (take_exc) {
74
+ /* Construct FSR and FSC using same logic as arm_deliver_fault() */
75
+ if (target_el == 2 || arm_el_is_aa64(env, target_el) ||
76
+ arm_s1_regime_using_lpae_format(env, mmu_idx)) {
77
+ fsr = arm_fi_to_lfsc(&fi);
78
+ fsc = extract32(fsr, 0, 6);
79
+ } else {
80
+ fsr = arm_fi_to_sfsc(&fi);
81
+ fsc = 0x3f;
82
+ }
83
+ /*
84
+ * Report exception with ESR indicating a fault due to a
85
+ * translation table walk for a cache maintenance instruction.
86
+ */
87
+ syn = syn_data_abort_no_iss(current_el == target_el,
88
+ fi.ea, 1, fi.s1ptw, 1, fsc);
89
+ env->exception.vaddress = value;
90
+ env->exception.fsr = fsr;
91
+ raise_exception(env, EXCP_DATA_ABORT, syn, target_el);
92
+ }
93
+ }
94
+
95
if (is_a64(env)) {
96
format64 = true;
97
} else if (arm_feature(env, ARM_FEATURE_LPAE)) {
98
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vapa_cp_reginfo[] = {
99
/* This underdecoding is safe because the reginfo is NO_RAW. */
100
{ .name = "ATS", .cp = 15, .crn = 7, .crm = 8, .opc1 = 0, .opc2 = CP_ANY,
101
.access = PL1_W, .accessfn = ats_access,
102
- .writefn = ats_write, .type = ARM_CP_NO_RAW },
103
+ .writefn = ats_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC },
104
#endif
105
REGINFO_SENTINEL
106
};
107
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
108
/* 64 bit address translation operations */
109
{ .name = "AT_S1E1R", .state = ARM_CP_STATE_AA64,
110
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 0,
111
- .access = PL1_W, .type = ARM_CP_NO_RAW, .writefn = ats_write64 },
112
+ .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
113
+ .writefn = ats_write64 },
114
{ .name = "AT_S1E1W", .state = ARM_CP_STATE_AA64,
115
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 1,
116
- .access = PL1_W, .type = ARM_CP_NO_RAW, .writefn = ats_write64 },
117
+ .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
118
+ .writefn = ats_write64 },
119
{ .name = "AT_S1E0R", .state = ARM_CP_STATE_AA64,
120
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 2,
121
- .access = PL1_W, .type = ARM_CP_NO_RAW, .writefn = ats_write64 },
122
+ .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
123
+ .writefn = ats_write64 },
124
{ .name = "AT_S1E0W", .state = ARM_CP_STATE_AA64,
125
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 3,
126
- .access = PL1_W, .type = ARM_CP_NO_RAW, .writefn = ats_write64 },
127
+ .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
128
+ .writefn = ats_write64 },
129
{ .name = "AT_S12E1R", .state = ARM_CP_STATE_AA64,
130
.opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 4,
131
- .access = PL2_W, .type = ARM_CP_NO_RAW, .writefn = ats_write64 },
132
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
133
+ .writefn = ats_write64 },
134
{ .name = "AT_S12E1W", .state = ARM_CP_STATE_AA64,
135
.opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 5,
136
- .access = PL2_W, .type = ARM_CP_NO_RAW, .writefn = ats_write64 },
137
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
138
+ .writefn = ats_write64 },
139
{ .name = "AT_S12E0R", .state = ARM_CP_STATE_AA64,
140
.opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 6,
141
- .access = PL2_W, .type = ARM_CP_NO_RAW, .writefn = ats_write64 },
142
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
143
+ .writefn = ats_write64 },
144
{ .name = "AT_S12E0W", .state = ARM_CP_STATE_AA64,
145
.opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 7,
146
- .access = PL2_W, .type = ARM_CP_NO_RAW, .writefn = ats_write64 },
147
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
148
+ .writefn = ats_write64 },
149
/* AT S1E2* are elsewhere as they UNDEF from EL3 if EL2 is not present */
150
{ .name = "AT_S1E3R", .state = ARM_CP_STATE_AA64,
151
.opc0 = 1, .opc1 = 6, .crn = 7, .crm = 8, .opc2 = 0,
152
- .access = PL3_W, .type = ARM_CP_NO_RAW, .writefn = ats_write64 },
153
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
154
+ .writefn = ats_write64 },
155
{ .name = "AT_S1E3W", .state = ARM_CP_STATE_AA64,
156
.opc0 = 1, .opc1 = 6, .crn = 7, .crm = 8, .opc2 = 1,
157
- .access = PL3_W, .type = ARM_CP_NO_RAW, .writefn = ats_write64 },
158
+ .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
159
+ .writefn = ats_write64 },
160
{ .name = "PAR_EL1", .state = ARM_CP_STATE_AA64,
161
.type = ARM_CP_ALIAS,
162
.opc0 = 3, .opc1 = 0, .crn = 7, .crm = 4, .opc2 = 0,
163
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
164
{ .name = "AT_S1E2R", .state = ARM_CP_STATE_AA64,
165
.opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 0,
166
.access = PL2_W, .accessfn = at_s1e2_access,
167
- .type = ARM_CP_NO_RAW, .writefn = ats_write64 },
168
+ .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, .writefn = ats_write64 },
169
{ .name = "AT_S1E2W", .state = ARM_CP_STATE_AA64,
170
.opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 1,
171
.access = PL2_W, .accessfn = at_s1e2_access,
172
- .type = ARM_CP_NO_RAW, .writefn = ats_write64 },
173
+ .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, .writefn = ats_write64 },
174
/* The AArch32 ATS1H* operations are CONSTRAINED UNPREDICTABLE
175
* if EL2 is not implemented; we choose to UNDEF. Behaviour at EL3
176
* with SCR.NS == 0 outside Monitor mode is UNPREDICTABLE; we choose
177
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
178
*/
179
{ .name = "ATS1HR", .cp = 15, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 0,
180
.access = PL2_W,
181
- .writefn = ats1h_write, .type = ARM_CP_NO_RAW },
182
+ .writefn = ats1h_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC },
183
{ .name = "ATS1HW", .cp = 15, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 1,
184
.access = PL2_W,
185
- .writefn = ats1h_write, .type = ARM_CP_NO_RAW },
186
+ .writefn = ats1h_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC },
187
{ .name = "CNTHCTL_EL2", .state = ARM_CP_STATE_BOTH,
188
.opc0 = 3, .opc1 = 4, .crn = 14, .crm = 1, .opc2 = 0,
189
/* ARMv7 requires bit 0 and 1 to reset to 1. ARMv8 defines the
190
--
191
2.20.1
192
193
diff view generated by jsdifflib
1
From: Eric Auger <eric.auger@redhat.com>
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
2
2
3
memory_region_iommu_replay_all is not used. Remove it.
3
Rebuild hflags when modifying CPUState at boot.
4
4
5
Signed-off-by: Eric Auger <eric.auger@redhat.com>
5
Fixes: e979972a6a
6
Reported-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Reviewed-by: Peter Xu <peterx@redhat.com>
8
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
9
Message-id: 20190822172350.12008-2-eric.auger@redhat.com
9
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
10
Message-id: 20191031040830.18800-2-edgar.iglesias@xilinx.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
---
12
include/exec/memory.h | 10 ----------
13
hw/arm/boot.c | 1 +
13
memory.c | 9 ---------
14
1 file changed, 1 insertion(+)
14
2 files changed, 19 deletions(-)
15
15
16
diff --git a/include/exec/memory.h b/include/exec/memory.h
16
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
17
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
18
--- a/include/exec/memory.h
18
--- a/hw/arm/boot.c
19
+++ b/include/exec/memory.h
19
+++ b/hw/arm/boot.c
20
@@ -XXX,XX +XXX,XX @@ void memory_region_register_iommu_notifier(MemoryRegion *mr,
20
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
21
*/
21
info->secondary_cpu_reset_hook(cpu, info);
22
void memory_region_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n);
22
}
23
23
}
24
-/**
24
+ arm_rebuild_hflags(env);
25
- * memory_region_iommu_replay_all: replay existing IOMMU translations
26
- * to all the notifiers registered.
27
- *
28
- * Note: this is not related to record-and-replay functionality.
29
- *
30
- * @iommu_mr: the memory region to observe
31
- */
32
-void memory_region_iommu_replay_all(IOMMUMemoryRegion *iommu_mr);
33
-
34
/**
35
* memory_region_unregister_iommu_notifier: unregister a notifier for
36
* changes to IOMMU translation entries.
37
diff --git a/memory.c b/memory.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/memory.c
40
+++ b/memory.c
41
@@ -XXX,XX +XXX,XX @@ void memory_region_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n)
42
}
25
}
43
}
26
}
44
27
45
-void memory_region_iommu_replay_all(IOMMUMemoryRegion *iommu_mr)
46
-{
47
- IOMMUNotifier *notifier;
48
-
49
- IOMMU_NOTIFIER_FOREACH(notifier, iommu_mr) {
50
- memory_region_iommu_replay(iommu_mr, notifier);
51
- }
52
-}
53
-
54
void memory_region_unregister_iommu_notifier(MemoryRegion *mr,
55
IOMMUNotifier *n)
56
{
57
--
28
--
58
2.20.1
29
2.20.1
59
30
60
31
Deleted patch
1
From: Eric Auger <eric.auger@redhat.com>
2
1
3
An IOVA/ASID invalidation is notified to all IOMMU Memory Regions
4
through smmuv3_inv_notifiers_iova/smmuv3_notify_iova.
5
6
When the notification occurs it is possible that some of the
7
PCIe devices associated to the notified regions do not have a
8
valid stream table entry. In that case we output a LOG_GUEST_ERROR
9
message, for example:
10
11
invalid sid=<SID> (L1STD span=0)
12
"smmuv3_notify_iova error decoding the configuration for iommu mr=<MR>
13
14
This is unfortunate as the user gets the impression that there
15
are some translation decoding errors whereas there are not.
16
17
This patch adds a new field in SMMUEventInfo that tells whether
18
the detection of an invalid STE must lead to an error report.
19
invalid_ste_allowed is set before doing the invalidations and
20
kept unset on actual translation.
21
22
The other configuration decoding error messages are kept since if the
23
STE is valid then the rest of the config must be correct.
24
25
Signed-off-by: Eric Auger <eric.auger@redhat.com>
26
Message-id: 20190822172350.12008-6-eric.auger@redhat.com
27
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
28
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
29
---
30
hw/arm/smmuv3-internal.h | 1 +
31
hw/arm/smmuv3.c | 19 +++++++++++--------
32
2 files changed, 12 insertions(+), 8 deletions(-)
33
34
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
35
index XXXXXXX..XXXXXXX 100644
36
--- a/hw/arm/smmuv3-internal.h
37
+++ b/hw/arm/smmuv3-internal.h
38
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUEventInfo {
39
uint32_t sid;
40
bool recorded;
41
bool record_trans_faults;
42
+ bool inval_ste_allowed;
43
union {
44
struct {
45
uint32_t ssid;
46
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/hw/arm/smmuv3.c
49
+++ b/hw/arm/smmuv3.c
50
@@ -XXX,XX +XXX,XX @@ static int decode_ste(SMMUv3State *s, SMMUTransCfg *cfg,
51
uint32_t config;
52
53
if (!STE_VALID(ste)) {
54
- qemu_log_mask(LOG_GUEST_ERROR, "invalid STE\n");
55
+ if (!event->inval_ste_allowed) {
56
+ qemu_log_mask(LOG_GUEST_ERROR, "invalid STE\n");
57
+ }
58
goto bad_ste;
59
}
60
61
@@ -XXX,XX +XXX,XX @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
62
63
if (!span) {
64
/* l2ptr is not valid */
65
- qemu_log_mask(LOG_GUEST_ERROR,
66
- "invalid sid=%d (L1STD span=0)\n", sid);
67
+ if (!event->inval_ste_allowed) {
68
+ qemu_log_mask(LOG_GUEST_ERROR,
69
+ "invalid sid=%d (L1STD span=0)\n", sid);
70
+ }
71
event->type = SMMU_EVT_C_BAD_STREAMID;
72
return -EINVAL;
73
}
74
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
75
SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
76
SMMUv3State *s = sdev->smmu;
77
uint32_t sid = smmu_get_sid(sdev);
78
- SMMUEventInfo event = {.type = SMMU_EVT_NONE, .sid = sid};
79
+ SMMUEventInfo event = {.type = SMMU_EVT_NONE,
80
+ .sid = sid,
81
+ .inval_ste_allowed = false};
82
SMMUPTWEventInfo ptw_info = {};
83
SMMUTranslationStatus status;
84
SMMUState *bs = ARM_SMMU(s);
85
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
86
dma_addr_t iova)
87
{
88
SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
89
- SMMUEventInfo event = {};
90
+ SMMUEventInfo event = {.inval_ste_allowed = true};
91
SMMUTransTableInfo *tt;
92
SMMUTransCfg *cfg;
93
IOMMUTLBEntry entry;
94
95
cfg = smmuv3_get_config(sdev, &event);
96
if (!cfg) {
97
- qemu_log_mask(LOG_GUEST_ERROR,
98
- "%s error decoding the configuration for iommu mr=%s\n",
99
- __func__, mr->parent_obj.name);
100
return;
101
}
102
103
--
104
2.20.1
105
106
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
The previous simplification got the order of operands to the
4
subtraction wrong. Since the 64-bit product is the subtrahend,
5
we must use a 64-bit subtract to properly compute the borrow
6
from the low-part of the product.
7
8
Fixes: 5f8cd06ebcf5 ("target/arm: Simplify SMMLA, SMMLAR, SMMLS, SMMLSR")
9
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
12
Message-id: 20190829013258.16102-1-richard.henderson@linaro.org
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
16
target/arm/translate.c | 20 ++++++++++++++++++--
17
1 file changed, 18 insertions(+), 2 deletions(-)
18
19
diff --git a/target/arm/translate.c b/target/arm/translate.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/translate.c
22
+++ b/target/arm/translate.c
23
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
24
if (rd != 15) {
25
tmp3 = load_reg(s, rd);
26
if (insn & (1 << 6)) {
27
- tcg_gen_sub_i32(tmp, tmp, tmp3);
28
+ /*
29
+ * For SMMLS, we need a 64-bit subtract.
30
+ * Borrow caused by a non-zero multiplicand
31
+ * lowpart, and the correct result lowpart
32
+ * for rounding.
33
+ */
34
+ TCGv_i32 zero = tcg_const_i32(0);
35
+ tcg_gen_sub2_i32(tmp2, tmp, zero, tmp3,
36
+ tmp2, tmp);
37
+ tcg_temp_free_i32(zero);
38
} else {
39
tcg_gen_add_i32(tmp, tmp, tmp3);
40
}
41
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
42
if (insn & (1 << 20)) {
43
tcg_gen_add_i32(tmp, tmp, tmp3);
44
} else {
45
- tcg_gen_sub_i32(tmp, tmp, tmp3);
46
+ /*
47
+ * For SMMLS, we need a 64-bit subtract.
48
+ * Borrow caused by a non-zero multiplicand lowpart,
49
+ * and the correct result lowpart for rounding.
50
+ */
51
+ TCGv_i32 zero = tcg_const_i32(0);
52
+ tcg_gen_sub2_i32(tmp2, tmp, zero, tmp3, tmp2, tmp);
53
+ tcg_temp_free_i32(zero);
54
}
55
tcg_temp_free_i32(tmp3);
56
}
57
--
58
2.20.1
59
60
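A small worked example of the borrow problem described above: with an
accumulator Ra = 1 and a product Rn * Rm = 1, the architectural result is
(1 << 32) - 1 = 0x00000000ffffffff, whose most significant word is 0. A
32-bit subtract of only the high halves gives 1 - 0 = 1, because the borrow
generated by the non-zero low part of the product is lost. The
tcg_gen_sub2_i32() sequence above performs the full 64-bit subtraction
(with a zero low half for the accumulator) and so propagates that borrow.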
Deleted patch
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
1
3
Commit ba1ba5cca introduced the ARM_CPU_TYPE_NAME() macro.
4
Unify the code base by using it in all places.
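For instance, going by the conversions in the hunks below,
ARM_CPU_TYPE_NAME("cortex-a9") yields the same type name that was
previously spelled out as "cortex-a9-" TYPE_ARM_CPU, i.e. "cortex-a9-arm-cpu".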
5
6
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20190823143249.8096-2-philmd@redhat.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
hw/arm/allwinner-a10.c | 3 ++-
13
hw/arm/cubieboard.c | 3 ++-
14
hw/arm/digic.c | 3 ++-
15
hw/arm/fsl-imx25.c | 2 +-
16
hw/arm/fsl-imx31.c | 2 +-
17
hw/arm/fsl-imx6.c | 3 ++-
18
hw/arm/fsl-imx6ul.c | 2 +-
19
hw/arm/xlnx-zynqmp.c | 8 ++++----
20
8 files changed, 15 insertions(+), 11 deletions(-)
21
22
diff --git a/hw/arm/allwinner-a10.c b/hw/arm/allwinner-a10.c
23
index XXXXXXX..XXXXXXX 100644
24
--- a/hw/arm/allwinner-a10.c
25
+++ b/hw/arm/allwinner-a10.c
26
@@ -XXX,XX +XXX,XX @@ static void aw_a10_init(Object *obj)
27
AwA10State *s = AW_A10(obj);
28
29
object_initialize_child(obj, "cpu", &s->cpu, sizeof(s->cpu),
30
- "cortex-a8-" TYPE_ARM_CPU, &error_abort, NULL);
31
+ ARM_CPU_TYPE_NAME("cortex-a8"),
32
+ &error_abort, NULL);
33
34
sysbus_init_child_obj(obj, "intc", &s->intc, sizeof(s->intc),
35
TYPE_AW_A10_PIC);
36
diff --git a/hw/arm/cubieboard.c b/hw/arm/cubieboard.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/hw/arm/cubieboard.c
39
+++ b/hw/arm/cubieboard.c
40
@@ -XXX,XX +XXX,XX @@ static void cubieboard_init(MachineState *machine)
41
42
static void cubieboard_machine_init(MachineClass *mc)
43
{
44
- mc->desc = "cubietech cubieboard";
45
+ mc->desc = "cubietech cubieboard (Cortex-A9)";
46
+ mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a9");
47
mc->init = cubieboard_init;
48
mc->block_default_type = IF_IDE;
49
mc->units_per_default_bus = 1;
50
diff --git a/hw/arm/digic.c b/hw/arm/digic.c
51
index XXXXXXX..XXXXXXX 100644
52
--- a/hw/arm/digic.c
53
+++ b/hw/arm/digic.c
54
@@ -XXX,XX +XXX,XX @@ static void digic_init(Object *obj)
55
int i;
56
57
object_initialize_child(obj, "cpu", &s->cpu, sizeof(s->cpu),
58
- "arm946-" TYPE_ARM_CPU, &error_abort, NULL);
59
+ ARM_CPU_TYPE_NAME("arm946"),
60
+ &error_abort, NULL);
61
62
for (i = 0; i < DIGIC4_NB_TIMERS; i++) {
63
#define DIGIC_TIMER_NAME_MLEN 11
64
diff --git a/hw/arm/fsl-imx25.c b/hw/arm/fsl-imx25.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/hw/arm/fsl-imx25.c
67
+++ b/hw/arm/fsl-imx25.c
68
@@ -XXX,XX +XXX,XX @@ static void fsl_imx25_init(Object *obj)
69
FslIMX25State *s = FSL_IMX25(obj);
70
int i;
71
72
- object_initialize(&s->cpu, sizeof(s->cpu), "arm926-" TYPE_ARM_CPU);
73
+ object_initialize(&s->cpu, sizeof(s->cpu), ARM_CPU_TYPE_NAME("arm926"));
74
75
sysbus_init_child_obj(obj, "avic", &s->avic, sizeof(s->avic),
76
TYPE_IMX_AVIC);
77
diff --git a/hw/arm/fsl-imx31.c b/hw/arm/fsl-imx31.c
78
index XXXXXXX..XXXXXXX 100644
79
--- a/hw/arm/fsl-imx31.c
80
+++ b/hw/arm/fsl-imx31.c
81
@@ -XXX,XX +XXX,XX @@ static void fsl_imx31_init(Object *obj)
82
FslIMX31State *s = FSL_IMX31(obj);
83
int i;
84
85
- object_initialize(&s->cpu, sizeof(s->cpu), "arm1136-" TYPE_ARM_CPU);
86
+ object_initialize(&s->cpu, sizeof(s->cpu), ARM_CPU_TYPE_NAME("arm1136"));
87
88
sysbus_init_child_obj(obj, "avic", &s->avic, sizeof(s->avic),
89
TYPE_IMX_AVIC);
90
diff --git a/hw/arm/fsl-imx6.c b/hw/arm/fsl-imx6.c
91
index XXXXXXX..XXXXXXX 100644
92
--- a/hw/arm/fsl-imx6.c
93
+++ b/hw/arm/fsl-imx6.c
94
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6_init(Object *obj)
95
for (i = 0; i < MIN(ms->smp.cpus, FSL_IMX6_NUM_CPUS); i++) {
96
snprintf(name, NAME_SIZE, "cpu%d", i);
97
object_initialize_child(obj, name, &s->cpu[i], sizeof(s->cpu[i]),
98
- "cortex-a9-" TYPE_ARM_CPU, &error_abort, NULL);
99
+ ARM_CPU_TYPE_NAME("cortex-a9"),
100
+ &error_abort, NULL);
101
}
102
103
sysbus_init_child_obj(obj, "a9mpcore", &s->a9mpcore, sizeof(s->a9mpcore),
104
diff --git a/hw/arm/fsl-imx6ul.c b/hw/arm/fsl-imx6ul.c
105
index XXXXXXX..XXXXXXX 100644
106
--- a/hw/arm/fsl-imx6ul.c
107
+++ b/hw/arm/fsl-imx6ul.c
108
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
109
int i;
110
111
object_initialize_child(obj, "cpu0", &s->cpu, sizeof(s->cpu),
112
- "cortex-a7-" TYPE_ARM_CPU, &error_abort, NULL);
113
+ ARM_CPU_TYPE_NAME("cortex-a7"), &error_abort, NULL);
114
115
/*
116
* A7MPCORE
117
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
118
index XXXXXXX..XXXXXXX 100644
119
--- a/hw/arm/xlnx-zynqmp.c
120
+++ b/hw/arm/xlnx-zynqmp.c
121
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_create_rpu(MachineState *ms, XlnxZynqMPState *s,
122
123
object_initialize_child(OBJECT(&s->rpu_cluster), "rpu-cpu[*]",
124
&s->rpu_cpu[i], sizeof(s->rpu_cpu[i]),
125
- "cortex-r5f-" TYPE_ARM_CPU, &error_abort,
126
- NULL);
127
+ ARM_CPU_TYPE_NAME("cortex-r5f"),
128
+ &error_abort, NULL);
129
130
name = object_get_canonical_path_component(OBJECT(&s->rpu_cpu[i]));
131
if (strcmp(name, boot_cpu)) {
132
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_init(Object *obj)
133
for (i = 0; i < num_apus; i++) {
134
object_initialize_child(OBJECT(&s->apu_cluster), "apu-cpu[*]",
135
&s->apu_cpu[i], sizeof(s->apu_cpu[i]),
136
- "cortex-a53-" TYPE_ARM_CPU, &error_abort,
137
- NULL);
138
+ ARM_CPU_TYPE_NAME("cortex-a53"),
139
+ &error_abort, NULL);
140
}
141
142
sysbus_init_child_obj(obj, "gic", &s->gic, sizeof(s->gic),
143
--
144
2.20.1
145
146
diff view generated by jsdifflib
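
For context on the strings involved: ARM_CPU_TYPE_NAME() simply glues the CPU
model name onto the ARM CPU type suffix, so ARM_CPU_TYPE_NAME("cortex-a8") and
"cortex-a8-" TYPE_ARM_CPU produce the same string. A minimal sketch with
simplified stand-in macros (the real definitions live in QEMU's ARM CPU
headers):

    #include <stdio.h>

    /* Simplified stand-ins; the real macros are defined by QEMU. */
    #define TYPE_ARM_CPU            "arm-cpu"
    #define ARM_CPU_TYPE_SUFFIX     "-" TYPE_ARM_CPU
    #define ARM_CPU_TYPE_NAME(name) (name ARM_CPU_TYPE_SUFFIX)

    int main(void)
    {
        /* String-literal concatenation: prints "cortex-a8-arm-cpu". */
        printf("%s\n", ARM_CPU_TYPE_NAME("cortex-a8"));
        return 0;
    }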
Deleted patch
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
1
3
As explained in commit aff39be0ed97:
4
5
Both functions, object_initialize() and object_property_add_child()
6
increase the reference counter of the new object, so one of the
7
references has to be dropped afterwards to get the reference
8
counting right. Otherwise the child object will not be properly
9
cleaned up when the parent gets destroyed.
10
Thus let's now use object_initialize_child() instead to get the
11
reference counting right here.
12
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
15
Reviewed-by: Thomas Huth <thuth@redhat.com>
16
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
17
Message-id: 20190823143249.8096-3-philmd@redhat.com
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
---
20
hw/arm/mcimx7d-sabre.c | 9 ++++-----
21
hw/arm/mps2-tz.c | 15 +++++++--------
22
hw/arm/musca.c | 9 +++++----
23
3 files changed, 16 insertions(+), 17 deletions(-)
24
25
diff --git a/hw/arm/mcimx7d-sabre.c b/hw/arm/mcimx7d-sabre.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/hw/arm/mcimx7d-sabre.c
28
+++ b/hw/arm/mcimx7d-sabre.c
29
@@ -XXX,XX +XXX,XX @@ static void mcimx7d_sabre_init(MachineState *machine)
30
{
31
static struct arm_boot_info boot_info;
32
MCIMX7Sabre *s = g_new0(MCIMX7Sabre, 1);
33
- Object *soc;
34
int i;
35
36
if (machine->ram_size > FSL_IMX7_MMDC_SIZE) {
37
@@ -XXX,XX +XXX,XX @@ static void mcimx7d_sabre_init(MachineState *machine)
38
.nb_cpus = machine->smp.cpus,
39
};
40
41
- object_initialize(&s->soc, sizeof(s->soc), TYPE_FSL_IMX7);
42
- soc = OBJECT(&s->soc);
43
- object_property_add_child(OBJECT(machine), "soc", soc, &error_fatal);
44
- object_property_set_bool(soc, true, "realized", &error_fatal);
45
+ object_initialize_child(OBJECT(machine), "soc",
46
+ &s->soc, sizeof(s->soc),
47
+ TYPE_FSL_IMX7, &error_fatal, NULL);
48
+ object_property_set_bool(OBJECT(&s->soc), true, "realized", &error_fatal);
49
50
memory_region_allocate_system_memory(&s->ram, NULL, "mcimx7d-sabre.ram",
51
machine->ram_size);
52
diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/hw/arm/mps2-tz.c
55
+++ b/hw/arm/mps2-tz.c
56
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
57
/* The sec_resp_cfg output from the IoTKit must be split into multiple
58
* lines, one for each of the PPCs we create here, plus one per MSC.
59
*/
60
- object_initialize(&mms->sec_resp_splitter, sizeof(mms->sec_resp_splitter),
61
- TYPE_SPLIT_IRQ);
62
- object_property_add_child(OBJECT(machine), "sec-resp-splitter",
63
- OBJECT(&mms->sec_resp_splitter), &error_abort);
64
+ object_initialize_child(OBJECT(machine), "sec-resp-splitter",
65
+ &mms->sec_resp_splitter,
66
+ sizeof(mms->sec_resp_splitter),
67
+ TYPE_SPLIT_IRQ, &error_abort, NULL);
68
object_property_set_int(OBJECT(&mms->sec_resp_splitter),
69
ARRAY_SIZE(mms->ppc) + ARRAY_SIZE(mms->msc),
70
"num-lines", &error_fatal);
71
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
72
* Tx, Rx and "combined" IRQs are sent to the NVIC separately.
73
* Create the OR gate for this.
74
*/
75
- object_initialize(&mms->uart_irq_orgate, sizeof(mms->uart_irq_orgate),
76
- TYPE_OR_IRQ);
77
- object_property_add_child(OBJECT(mms), "uart-irq-orgate",
78
- OBJECT(&mms->uart_irq_orgate), &error_abort);
79
+ object_initialize_child(OBJECT(mms), "uart-irq-orgate",
80
+ &mms->uart_irq_orgate, sizeof(mms->uart_irq_orgate),
81
+ TYPE_OR_IRQ, &error_abort, NULL);
82
object_property_set_int(OBJECT(&mms->uart_irq_orgate), 10, "num-lines",
83
&error_fatal);
84
object_property_set_bool(OBJECT(&mms->uart_irq_orgate), true,
85
diff --git a/hw/arm/musca.c b/hw/arm/musca.c
86
index XXXXXXX..XXXXXXX 100644
87
--- a/hw/arm/musca.c
88
+++ b/hw/arm/musca.c
89
@@ -XXX,XX +XXX,XX @@ static void musca_init(MachineState *machine)
90
* The sec_resp_cfg output from the SSE-200 must be split into multiple
91
* lines, one for each of the PPCs we create here.
92
*/
93
- object_initialize(&mms->sec_resp_splitter, sizeof(mms->sec_resp_splitter),
94
- TYPE_SPLIT_IRQ);
95
- object_property_add_child(OBJECT(machine), "sec-resp-splitter",
96
- OBJECT(&mms->sec_resp_splitter), &error_fatal);
97
+ object_initialize_child(OBJECT(machine), "sec-resp-splitter",
98
+ &mms->sec_resp_splitter,
99
+ sizeof(mms->sec_resp_splitter),
100
+ TYPE_SPLIT_IRQ, &error_fatal, NULL);
101
+
102
object_property_set_int(OBJECT(&mms->sec_resp_splitter),
103
ARRAY_SIZE(mms->ppc), "num-lines", &error_fatal);
104
object_property_set_bool(OBJECT(&mms->sec_resp_splitter), true,
105
--
106
2.20.1
107
108
diff view generated by jsdifflib
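
The reference-counting argument quoted above can be illustrated without QOM at
all: initialization takes one reference and parenting takes another, so one of
them has to be released, which is what object_initialize_child() arranges
internally. A toy model in plain C (hypothetical helper names, not QEMU APIs):

    #include <assert.h>

    typedef struct Obj { int ref; } Obj;

    static void obj_init(Obj *o)  { o->ref = 1; }  /* like object_initialize() */
    static void obj_ref(Obj *o)   { o->ref++; }
    static void obj_unref(Obj *o) { o->ref--; }

    /* like object_property_add_child(): the parent takes its own reference */
    static void add_child(Obj *child) { obj_ref(child); }

    /* like object_initialize_child(): init, parent, then drop the initial
     * reference so the parent's reference is the only one left */
    static void init_child(Obj *child)
    {
        obj_init(child);
        add_child(child);
        obj_unref(child);
    }

    int main(void)
    {
        Obj a, b;

        obj_init(&a);
        add_child(&a);
        assert(a.ref == 2);   /* leaked: unparenting leaves one dangling ref */

        init_child(&b);
        assert(b.ref == 1);   /* only the parent holds a reference */
        return 0;
    }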
Deleted patch
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
1
3
Both object_initialize() and qdev_set_parent_bus() increase the
4
reference counter of the new object, so one of the references has
5
to be dropped afterwards to get the reference counting right.
6
In machine model code this refcount leak is not particularly
7
problematic because (unlike devices) machines will never be
8
created on demand via QMP, and they are never destroyed.
9
But in any case let's use the new sysbus_init_child_obj() instead
10
to get the reference counting right here.
11
12
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20190823143249.8096-4-philmd@redhat.com
15
[PMM: rewrote commit message]
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
18
hw/arm/exynos4_boards.c | 4 ++--
19
1 file changed, 2 insertions(+), 2 deletions(-)
20
21
diff --git a/hw/arm/exynos4_boards.c b/hw/arm/exynos4_boards.c
22
index XXXXXXX..XXXXXXX 100644
23
--- a/hw/arm/exynos4_boards.c
24
+++ b/hw/arm/exynos4_boards.c
25
@@ -XXX,XX +XXX,XX @@ exynos4_boards_init_common(MachineState *machine,
26
exynos4_boards_init_ram(s, get_system_memory(),
27
exynos4_board_ram_size[board_type]);
28
29
- object_initialize(&s->soc, sizeof(s->soc), TYPE_EXYNOS4210_SOC);
30
- qdev_set_parent_bus(DEVICE(&s->soc), sysbus_get_default());
31
+ sysbus_init_child_obj(OBJECT(machine), "soc",
32
+ &s->soc, sizeof(s->soc), TYPE_EXYNOS4210_SOC);
33
object_property_set_bool(OBJECT(&s->soc), true, "realized",
34
&error_fatal);
35
36
--
37
2.20.1
38
39
diff view generated by jsdifflib
Deleted patch
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
1
3
Child properties form the composition tree. All objects need to be
4
a child of another object. Objects can only be a child of one object.
5
6
Respect this for the i.MX SoCs to get a cleaner composition tree.
7
8
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20190823143249.8096-5-philmd@redhat.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
hw/arm/fsl-imx25.c | 4 +++-
14
hw/arm/fsl-imx31.c | 4 +++-
15
2 files changed, 6 insertions(+), 2 deletions(-)
16
17
diff --git a/hw/arm/fsl-imx25.c b/hw/arm/fsl-imx25.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/arm/fsl-imx25.c
20
+++ b/hw/arm/fsl-imx25.c
21
@@ -XXX,XX +XXX,XX @@ static void fsl_imx25_init(Object *obj)
22
FslIMX25State *s = FSL_IMX25(obj);
23
int i;
24
25
- object_initialize(&s->cpu, sizeof(s->cpu), ARM_CPU_TYPE_NAME("arm926"));
26
+ object_initialize_child(obj, "cpu", &s->cpu, sizeof(s->cpu),
27
+ ARM_CPU_TYPE_NAME("arm926"),
28
+ &error_abort, NULL);
29
30
sysbus_init_child_obj(obj, "avic", &s->avic, sizeof(s->avic),
31
TYPE_IMX_AVIC);
32
diff --git a/hw/arm/fsl-imx31.c b/hw/arm/fsl-imx31.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/hw/arm/fsl-imx31.c
35
+++ b/hw/arm/fsl-imx31.c
36
@@ -XXX,XX +XXX,XX @@ static void fsl_imx31_init(Object *obj)
37
FslIMX31State *s = FSL_IMX31(obj);
38
int i;
39
40
- object_initialize(&s->cpu, sizeof(s->cpu), ARM_CPU_TYPE_NAME("arm1136"));
41
+ object_initialize_child(obj, "cpu", &s->cpu, sizeof(s->cpu),
42
+ ARM_CPU_TYPE_NAME("arm1136"),
43
+ &error_abort, NULL);
44
45
sysbus_init_child_obj(obj, "avic", &s->avic, sizeof(s->avic),
46
TYPE_IMX_AVIC);
47
--
48
2.20.1
49
50
diff view generated by jsdifflib
Deleted patch
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
1
3
As explained in commit aff39be0ed97:
4
5
Both functions, object_initialize() and object_property_add_child()
6
increase the reference counter of the new object, so one of the
7
references has to be dropped afterwards to get the reference
8
counting right. Otherwise the child object will not be properly
9
cleaned up when the parent gets destroyed.
10
Thus let's now use object_initialize_child() instead to get the
11
reference counting right here.
12
13
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
14
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
15
Reviewed-by: Thomas Huth <thuth@redhat.com>
16
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
17
Message-id: 20190823143249.8096-6-philmd@redhat.com
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
---
20
hw/dma/xilinx_axidma.c | 16 ++++++++--------
21
1 file changed, 8 insertions(+), 8 deletions(-)
22
23
diff --git a/hw/dma/xilinx_axidma.c b/hw/dma/xilinx_axidma.c
24
index XXXXXXX..XXXXXXX 100644
25
--- a/hw/dma/xilinx_axidma.c
26
+++ b/hw/dma/xilinx_axidma.c
27
@@ -XXX,XX +XXX,XX @@ static void xilinx_axidma_init(Object *obj)
28
XilinxAXIDMA *s = XILINX_AXI_DMA(obj);
29
SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
30
31
- object_initialize(&s->rx_data_dev, sizeof(s->rx_data_dev),
32
- TYPE_XILINX_AXI_DMA_DATA_STREAM);
33
- object_initialize(&s->rx_control_dev, sizeof(s->rx_control_dev),
34
- TYPE_XILINX_AXI_DMA_CONTROL_STREAM);
35
- object_property_add_child(OBJECT(s), "axistream-connected-target",
36
- (Object *)&s->rx_data_dev, &error_abort);
37
- object_property_add_child(OBJECT(s), "axistream-control-connected-target",
38
- (Object *)&s->rx_control_dev, &error_abort);
39
+ object_initialize_child(OBJECT(s), "axistream-connected-target",
40
+ &s->rx_data_dev, sizeof(s->rx_data_dev),
41
+ TYPE_XILINX_AXI_DMA_DATA_STREAM, &error_abort,
42
+ NULL);
43
+ object_initialize_child(OBJECT(s), "axistream-control-connected-target",
44
+ &s->rx_control_dev, sizeof(s->rx_control_dev),
45
+ TYPE_XILINX_AXI_DMA_CONTROL_STREAM, &error_abort,
46
+ NULL);
47
48
sysbus_init_irq(sbd, &s->streams[0].irq);
49
sysbus_init_irq(sbd, &s->streams[1].irq);
50
--
51
2.20.1
52
53
diff view generated by jsdifflib
Deleted patch
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
1
3
As explained in commit aff39be0ed97:
4
5
Both functions, object_initialize() and object_property_add_child()
6
increase the reference counter of the new object, so one of the
7
references has to be dropped afterwards to get the reference
8
counting right. Otherwise the child object will not be properly
9
cleaned up when the parent gets destroyed.
10
Thus let's now use object_initialize_child() instead to get the
11
reference counting right here.
12
13
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
14
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
15
Reviewed-by: Thomas Huth <thuth@redhat.com>
16
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
17
Message-id: 20190823143249.8096-7-philmd@redhat.com
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
---
20
hw/net/xilinx_axienet.c | 17 ++++++++---------
21
1 file changed, 8 insertions(+), 9 deletions(-)
22
23
diff --git a/hw/net/xilinx_axienet.c b/hw/net/xilinx_axienet.c
24
index XXXXXXX..XXXXXXX 100644
25
--- a/hw/net/xilinx_axienet.c
26
+++ b/hw/net/xilinx_axienet.c
27
@@ -XXX,XX +XXX,XX @@ static void xilinx_enet_init(Object *obj)
28
XilinxAXIEnet *s = XILINX_AXI_ENET(obj);
29
SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
30
31
- object_initialize(&s->rx_data_dev, sizeof(s->rx_data_dev),
32
- TYPE_XILINX_AXI_ENET_DATA_STREAM);
33
- object_initialize(&s->rx_control_dev, sizeof(s->rx_control_dev),
34
- TYPE_XILINX_AXI_ENET_CONTROL_STREAM);
35
- object_property_add_child(OBJECT(s), "axistream-connected-target",
36
- (Object *)&s->rx_data_dev, &error_abort);
37
- object_property_add_child(OBJECT(s), "axistream-control-connected-target",
38
- (Object *)&s->rx_control_dev, &error_abort);
39
-
40
+ object_initialize_child(OBJECT(s), "axistream-connected-target",
41
+ &s->rx_data_dev, sizeof(s->rx_data_dev),
42
+ TYPE_XILINX_AXI_ENET_DATA_STREAM, &error_abort,
43
+ NULL);
44
+ object_initialize_child(OBJECT(s), "axistream-control-connected-target",
45
+ &s->rx_control_dev, sizeof(s->rx_control_dev),
46
+ TYPE_XILINX_AXI_ENET_CONTROL_STREAM, &error_abort,
47
+ NULL);
48
sysbus_init_irq(sbd, &s->irq);
49
50
memory_region_init_io(&s->iomem, OBJECT(s), &enet_ops, s, "enet", 0x40000);
51
--
52
2.20.1
53
54
diff view generated by jsdifflib
Deleted patch
1
From: Alex Bennée <alex.bennee@linaro.org>
2
1
3
Commit a5e0b3311 removed these in favour of querying machine
4
properties. Remove the extern declarations as well.
5
6
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20190828165307.18321-6-alex.bennee@linaro.org
10
Cc: Like Xu <like.xu@linux.intel.com>
11
Message-Id: <20190711130546.18578-1-alex.bennee@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
include/sysemu/sysemu.h | 2 --
15
1 file changed, 2 deletions(-)
16
17
diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/include/sysemu/sysemu.h
20
+++ b/include/sysemu/sysemu.h
21
@@ -XXX,XX +XXX,XX @@ extern const char *keyboard_layout;
22
extern int win2k_install_hack;
23
extern int alt_grab;
24
extern int ctrl_grab;
25
-extern int smp_cpus;
26
-extern unsigned int max_cpus;
27
extern int cursor_hide;
28
extern int graphic_rotate;
29
extern int no_quit;
30
--
31
2.20.1
32
33
diff view generated by jsdifflib
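
For reference, code that used to read the removed globals now queries the
machine's SMP properties instead. A sketch of that pattern as used inside the
QEMU tree (field and helper names assumed from the surrounding code; this is
not a drop-in hunk and will only build in-tree):

    /* Sketch: query the SMP configuration from the machine object instead of
     * relying on the removed smp_cpus/max_cpus globals. */
    #include "hw/boards.h"

    static unsigned int hotpluggable_cpu_slots(void)
    {
        MachineState *ms = MACHINE(qdev_get_machine());

        unsigned int smp_cpus = ms->smp.cpus;      /* was: extern int smp_cpus */
        unsigned int max_cpus = ms->smp.max_cpus;  /* was: extern unsigned int max_cpus */

        return max_cpus - smp_cpus;
    }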
1
The function neon_store_reg32() doesn't free the TCG temp that it
1
From: Christophe Lyon <christophe.lyon@linaro.org>
2
is passed, so the caller must do that. We got this right in most
2
3
places but forgot to free the TCG temps in trans_VMOV_64_sp().
3
rt==15 is a special case when reading the flags: it means the
4
destination is APSR. This patch avoids rejecting
5
vmrs apsr_nzcv, fpscr
6
as an illegal instruction.
4
7
5
Cc: qemu-stable@nongnu.org
8
Cc: qemu-stable@nongnu.org
9
Signed-off-by: Christophe Lyon <christophe.lyon@linaro.org>
10
Message-id: 20191025095711.10853-1-christophe.lyon@linaro.org
11
[PMM: updated the comment]
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Message-id: 20190827121931.26836-1-peter.maydell@linaro.org
10
---
14
---
11
target/arm/translate-vfp.inc.c | 2 ++
15
target/arm/translate-vfp.inc.c | 5 +++--
12
1 file changed, 2 insertions(+)
16
1 file changed, 3 insertions(+), 2 deletions(-)
13
17
14
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
18
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
15
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate-vfp.inc.c
20
--- a/target/arm/translate-vfp.inc.c
17
+++ b/target/arm/translate-vfp.inc.c
21
+++ b/target/arm/translate-vfp.inc.c
18
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_sp(DisasContext *s, arg_VMOV_64_sp *a)
22
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
19
/* gpreg to fpreg */
23
if (arm_dc_feature(s, ARM_FEATURE_M)) {
20
tmp = load_reg(s, a->rt);
24
/*
21
neon_store_reg32(tmp, a->vm);
25
* The only M-profile VFP vmrs/vmsr sysreg is FPSCR.
22
+ tcg_temp_free_i32(tmp);
26
- * Writes to R15 are UNPREDICTABLE; we choose to undef.
23
tmp = load_reg(s, a->rt2);
27
+ * Accesses to R15 are UNPREDICTABLE; we choose to undef.
24
neon_store_reg32(tmp, a->vm + 1);
28
+ * (FPSCR -> r15 is a special case which writes to the PSR flags.)
25
+ tcg_temp_free_i32(tmp);
29
*/
30
- if (a->rt == 15 || a->reg != ARM_VFP_FPSCR) {
31
+ if (a->rt == 15 && (!a->l || a->reg != ARM_VFP_FPSCR)) {
32
return false;
33
}
26
}
34
}
27
28
return true;
29
--
35
--
30
2.20.1
36
2.20.1
31
37
32
38
diff view generated by jsdifflib
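
The behavioural change in the final hunk is easiest to see as a small truth
table over the decode fields. A standalone sketch of the old and new M-profile
checks (the constant value and field layout are illustrative stand-ins, not
the real QEMU definitions; l is taken to be the VMRS/read direction):

    #include <assert.h>
    #include <stdbool.h>

    /* Stand-ins for the decode fields used in the hunk above. */
    enum { ARM_VFP_FPSCR = 1 };
    struct insn { int rt; int reg; bool l; };   /* l == true: VMRS (read) */

    /* Old M-profile check: any rt==15 access was rejected. */
    static bool rejected_before(struct insn a)
    {
        return a.rt == 15 || a.reg != ARM_VFP_FPSCR;
    }

    /* New check: rt==15 is only rejected for writes or non-FPSCR registers,
     * so "vmrs apsr_nzcv, fpscr" (FPSCR flags -> APSR) now decodes. */
    static bool rejected_after(struct insn a)
    {
        return a.rt == 15 && (!a.l || a.reg != ARM_VFP_FPSCR);
    }

    int main(void)
    {
        struct insn vmrs_apsr = { .rt = 15, .reg = ARM_VFP_FPSCR, .l = true };
        struct insn vmsr_r15  = { .rt = 15, .reg = ARM_VFP_FPSCR, .l = false };

        assert(rejected_before(vmrs_apsr));   /* used to UNDEF */
        assert(!rejected_after(vmrs_apsr));   /* now accepted */
        assert(rejected_after(vmsr_r15));     /* writes to r15 still UNDEF */
        return 0;
    }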