Arm pullreq: Rémi's ARMv8.4-SEL2 support is the big thing here.

thanks
-- PMM

The following changes since commit f1fcb6851aba6dd9838886dc179717a11e344a1c:

  Merge remote-tracking branch 'remotes/huth-gitlab/tags/pull-request-2021-01-19' into staging (2021-01-19 11:57:07 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210119

for you to fetch changes up to 6d39956891b3d1857af84f72f0230a6d99eb3b6a:

  docs: Build and install all the docs in a single manual (2021-01-19 14:38:53 +0000)

----------------------------------------------------------------
target-arm queue:
 * Implement IMPDEF pauth algorithm
 * Support ARMv8.4-SEL2
 * Fix bug where we were truncating predicate vector lengths in SVE insns
 * Implement new pvpanic-pci device
 * npcm7xx_adc-test: Fix memleak in adc_qom_set
 * target/arm/m_helper: Silence GCC 10 maybe-uninitialized error
 * docs: Build and install all the docs in a single manual

----------------------------------------------------------------
Gan Qixin (1):
      npcm7xx_adc-test: Fix memleak in adc_qom_set

Mihai Carabas (4):
      hw/misc/pvpanic: split-out generic and bus dependent code
      hw/misc/pvpanic: add PCI interface support
      pvpanic : update pvpanic spec document
      tests/qtest: add a test case for pvpanic-pci

Peter Maydell (1):
      docs: Build and install all the docs in a single manual

Philippe Mathieu-Daudé (1):
      target/arm/m_helper: Silence GCC 10 maybe-uninitialized error

Richard Henderson (7):
      target/arm: Implement an IMPDEF pauth algorithm
      target/arm: Add cpu properties to control pauth
      target/arm: Use object_property_add_bool for "sve" property
      target/arm: Introduce PREDDESC field definitions
      target/arm: Update PFIRST, PNEXT for pred_desc
      target/arm: Update ZIP, UZP, TRN for pred_desc
      target/arm: Update REV, PUNPK for pred_desc

Rémi Denis-Courmont (19):
      target/arm: remove redundant tests
      target/arm: add arm_is_el2_enabled() helper
      target/arm: use arm_is_el2_enabled() where applicable
      target/arm: use arm_hcr_el2_eff() where applicable
      target/arm: factor MDCR_EL2 common handling
      target/arm: Define isar_feature function to test for presence of SEL2
      target/arm: add 64-bit S-EL2 to EL exception table
      target/arm: add MMU stage 1 for Secure EL2
      target/arm: add ARMv8.4-SEL2 system registers
      target/arm: handle VMID change in secure state
      target/arm: do S1_ptw_translate() before address space lookup
      target/arm: translate NS bit in page-walks
      target/arm: generalize 2-stage page-walk condition
      target/arm: secure stage 2 translation regime
      target/arm: set HPFAR_EL2.NS on secure stage 2 faults
      target/arm: revector to run-time pick target EL
      target/arm: Implement SCR_EL2.EEL2
      target/arm: enable Secure EL2 in max CPU
      target/arm: refactor vae1_tlbmask()

 docs/conf.py                     |  46 ++++-
 docs/devel/conf.py               |  15 --
 docs/index.html.in               |  17 --
 docs/interop/conf.py             |  28 ---
 docs/meson.build                 |  64 +++---
 docs/specs/conf.py               |  16 --
 docs/specs/pci-ids.txt           |   1 +
 docs/specs/pvpanic.txt           |  13 +-
 docs/system/arm/cpu-features.rst |  21 ++
 docs/system/conf.py              |  28 ---
 docs/tools/conf.py               |  37 ----
 docs/user/conf.py                |  15 --
 include/hw/misc/pvpanic.h        |  24 ++-
 include/hw/pci/pci.h             |   1 +
 include/qemu/xxhash.h            |  98 +++++++++
 target/arm/cpu-param.h           |   2 +-
 target/arm/cpu.h                 | 107 ++++++++--
 target/arm/internals.h           |  45 +++++
 hw/misc/pvpanic-isa.c            |  94 +++++++++
 hw/misc/pvpanic-pci.c            |  95 +++++++++
 hw/misc/pvpanic.c                |  85 +-------
 target/arm/cpu.c                 |  23 ++-
 target/arm/cpu64.c               |  65 ++++--
 target/arm/helper-a64.c          |   8 +-
 target/arm/helper.c              | 414 ++++++++++++++++++++++++++-------------
 target/arm/m_helper.c            |   2 +-
 target/arm/monitor.c             |   1 +
 target/arm/op_helper.c           |   4 +-
 target/arm/pauth_helper.c        |  27 ++-
 target/arm/sve_helper.c          |  33 ++--
 target/arm/tlb_helper.c          |   3 +
 target/arm/translate-a64.c       |   4 +
 target/arm/translate-sve.c       |  31 ++-
 target/arm/translate.c           |  36 +++-
 tests/qtest/arm-cpu-features.c   |  13 ++
 tests/qtest/npcm7xx_adc-test.c   |   1 +
 tests/qtest/pvpanic-pci-test.c   |  62 ++++++
 .gitlab-ci.yml                   |   4 +-
 hw/i386/Kconfig                  |   2 +-
 hw/misc/Kconfig                  |  12 +-
 hw/misc/meson.build              |   4 +-
 tests/qtest/meson.build          |   3 +-
 42 files changed, 1080 insertions(+), 524 deletions(-)
 delete mode 100644 docs/devel/conf.py
 delete mode 100644 docs/index.html.in
 delete mode 100644 docs/interop/conf.py
 delete mode 100644 docs/specs/conf.py
 delete mode 100644 docs/system/conf.py
 delete mode 100644 docs/tools/conf.py
 delete mode 100644 docs/user/conf.py
 create mode 100644 hw/misc/pvpanic-isa.c
 create mode 100644 hw/misc/pvpanic-pci.c
 create mode 100644 tests/qtest/pvpanic-pci-test.c

Hi; here's a target-arm pullreq for rc0; these are all bugfixes
and similar minor stuff.

thanks
-- PMM

The following changes since commit 0462a32b4f63b2448b4a196381138afd50719dc4:

  Merge tag 'for-upstream' of https://repo.or.cz/qemu/kevin into staging (2025-03-14 09:31:13 +0800)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20250314-1

for you to fetch changes up to a019e15edfd62beae1e2f6adc0fa7415ba20b14c:

  meson.build: Set RUST_BACKTRACE for all tests (2025-03-14 12:54:33 +0000)

----------------------------------------------------------------
target-arm queue:
 * Correctly handle corner cases of guest attempting an exception
   return to AArch32 when target EL is AArch64 only
 * MAINTAINERS: Fix status for Arm boards I "maintain"
 * tests/functional: Bump up arm_replay timeout
 * Revert "hw/char/pl011: Warn when using disabled receiver"
 * util/cacheflush: Make first DSB unconditional on aarch64
 * target/arm: Fix SVE/SME access check logic
 * meson.build: Set RUST_BACKTRACE for all tests

----------------------------------------------------------------
Joe Komlodi (1):
      util/cacheflush: Make first DSB unconditional on aarch64

Paolo Bonzini (1):
      Revert "hw/char/pl011: Warn when using disabled receiver"

Peter Maydell (13):
      target/arm: Move A32_BANKED_REG_{GET,SET} macros to cpregs.h
      target/arm: Un-inline access_secure_reg()
      linux-user/aarch64: Remove unused get/put_user macros
      linux-user/arm: Remove unused get_put_user macros
      target/arm: Move arm_cpu_data_is_big_endian() etc to internals.h
      target/arm: Move arm_current_el() and arm_el_is_aa64() to internals.h
      target/arm: SCR_EL3.RW should be treated as 1 if EL2 doesn't support AArch32
      target/arm: HCR_EL2.RW should be RAO/WI if EL1 doesn't support AArch32
      target/arm: Add cpu local variable to exception_return helper
      target/arm: Forbid return to AArch32 when CPU is AArch64-only
      MAINTAINERS: Fix status for Arm boards I "maintain"
      tests/functional: Bump up arm_replay timeout
      meson.build: Set RUST_BACKTRACE for all tests

Richard Henderson (2):
      target/arm: Make DisasContext.{fp, sve}_access_checked tristate
      target/arm: Simplify pstate_sm check in sve_access_check

 MAINTAINERS                         |  14 ++--
 meson.build                         |   9 ++-
 target/arm/cpregs.h                 |  28 +++++++
 target/arm/cpu.h                    | 153 +-----------------------------------
 target/arm/internals.h              | 135 +++++++++++++++++++++++++++++++
 target/arm/tcg/translate-a64.h      |   2 +-
 target/arm/tcg/translate.h          |  10 ++-
 hw/char/pl011.c                     |  19 ++---
 hw/intc/arm_gicv3_cpuif.c           |   1 +
 linux-user/aarch64/cpu_loop.c       |  48 -----------
 linux-user/arm/cpu_loop.c           |  43 +---------
 target/arm/arch_dump.c              |   1 +
 target/arm/helper.c                 |  16 +++-
 target/arm/tcg/helper-a64.c         |  12 ++-
 target/arm/tcg/hflags.c             |   9 +++
 target/arm/tcg/translate-a64.c      |  37 ++++-----
 util/cacheflush.c                   |   4 +-
 .gitlab-ci.d/buildtest-template.yml |   1 -
 18 files changed, 257 insertions(+), 285 deletions(-)
From: Richard Henderson <richard.henderson@linaro.org>

The crypto overhead of emulating pauth can be significant for
some workloads.  Add two boolean properties that allows the
feature to be turned off, on with the architected algorithm,
or on with an implementation defined algorithm.

We need two intermediate booleans to control the state while
parsing properties lest we clobber ID_AA64ISAR1 into an invalid
intermediate state.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210111235740.462469-3-richard.henderson@linaro.org
[PMM: fixed docs typo, tweaked text to clarify that the impdef
 algorithm is specific to QEMU]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/cpu-features.rst | 21 +++++++++++++++++
 target/arm/cpu.h                 | 10 ++++++++
 target/arm/cpu.c                 | 13 +++++++++++
 target/arm/cpu64.c               | 40 ++++++++++++++++++++++++++++----
 target/arm/monitor.c             |  1 +
 tests/qtest/arm-cpu-features.c   | 13 +++++++++++
 6 files changed, 94 insertions(+), 4 deletions(-)

diff --git a/docs/system/arm/cpu-features.rst b/docs/system/arm/cpu-features.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/cpu-features.rst
+++ b/docs/system/arm/cpu-features.rst
@@ -XXX,XX +XXX,XX @@ the list of KVM VCPU features and their descriptions.
                   influence the guest scheduler behavior and/or be
                   exposed to the guest userspace.
 
+TCG VCPU Features
+=================
+
+TCG VCPU features are CPU features that are specific to TCG.
+Below is the list of TCG VCPU features and their descriptions.
+
+  pauth             Enable or disable `FEAT_Pauth`, pointer
+                    authentication.  By default, the feature is
+                    enabled with `-cpu max`.
+
+  pauth-impdef      When `FEAT_Pauth` is enabled, either the
+                    *impdef* (Implementation Defined) algorithm
+                    is enabled or the *architected* QARMA algorithm
+                    is enabled.  By default the impdef algorithm
+                    is disabled, and QARMA is enabled.
+
+                    The architected QARMA algorithm has good
+                    cryptographic properties, but can be quite slow
+                    to emulate.  The impdef algorithm used by QEMU
+                    is non-cryptographic but significantly faster.
+
 SVE CPU Properties
 ==================
 
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct {
 #ifdef TARGET_AARCH64
 # define ARM_MAX_VQ    16
 void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp);
+void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp);
 #else
 # define ARM_MAX_VQ    1
 static inline void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp) { }
+static inline void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp) { }
 #endif
 
 typedef struct ARMVectorReg {
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
     uint64_t reset_cbar;
     uint32_t reset_auxcr;
     bool reset_hivecs;
+
+    /*
+     * Intermediate values used during property parsing.
+     * Once finalized, the values should be read from ID_AA64ISAR1.
+     */
+    bool prop_pauth;
+    bool prop_pauth_impdef;
+
     /* DCZ blocksize, in log_2(words), ie low 4 bits of DCZID_EL0 */
     uint32_t dcz_blocksize;
     uint64_t rvbar;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
             error_propagate(errp, local_err);
             return;
         }
+
+        /*
+         * KVM does not support modifications to this feature.
+         * We have not registered the cpu properties when KVM
+         * is in use, so the user will not be able to set them.
+         */
+        if (!kvm_enabled()) {
+            arm_cpu_pauth_finalize(cpu, &local_err);
+            if (local_err != NULL) {
+                error_propagate(errp, local_err);
+                return;
+            }
+        }
     }
 
     if (kvm_enabled()) {
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@
 #include "sysemu/kvm.h"
 #include "kvm_arm.h"
 #include "qapi/visitor.h"
+#include "hw/qdev-properties.h"
+
 
 #ifndef CONFIG_USER_ONLY
 static uint64_t a57_a53_l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
@@ -XXX,XX +XXX,XX @@ void aarch64_add_sve_properties(Object *obj)
     }
 }
 
+void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp)
+{
+    int arch_val = 0, impdef_val = 0;
+    uint64_t t;
+
+    /* TODO: Handle HaveEnhancedPAC, HaveEnhancedPAC2, HaveFPAC. */
+    if (cpu->prop_pauth) {
+        if (cpu->prop_pauth_impdef) {
+            impdef_val = 1;
+        } else {
+            arch_val = 1;
+        }
+    } else if (cpu->prop_pauth_impdef) {
+        error_setg(errp, "cannot enable pauth-impdef without pauth");
+        error_append_hint(errp, "Add pauth=on to the CPU property list.\n");
+    }
+
+    t = cpu->isar.id_aa64isar1;
+    t = FIELD_DP64(t, ID_AA64ISAR1, APA, arch_val);
+    t = FIELD_DP64(t, ID_AA64ISAR1, GPA, arch_val);
+    t = FIELD_DP64(t, ID_AA64ISAR1, API, impdef_val);
+    t = FIELD_DP64(t, ID_AA64ISAR1, GPI, impdef_val);
+    cpu->isar.id_aa64isar1 = t;
+}
+
+static Property arm_cpu_pauth_property =
+    DEFINE_PROP_BOOL("pauth", ARMCPU, prop_pauth, true);
+static Property arm_cpu_pauth_impdef_property =
+    DEFINE_PROP_BOOL("pauth-impdef", ARMCPU, prop_pauth_impdef, false);
+
 /* -cpu max: if KVM is enabled, like -cpu host (best possible with this host);
  * otherwise, a CPU with as many features enabled as our emulation supports.
  * The version of '-cpu max' for qemu-system-arm is defined in cpu.c;
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
         t = FIELD_DP64(t, ID_AA64ISAR1, DPB, 2);
         t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 1);
         t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
-        t = FIELD_DP64(t, ID_AA64ISAR1, APA, 1); /* PAuth, architected only */
-        t = FIELD_DP64(t, ID_AA64ISAR1, API, 0);
-        t = FIELD_DP64(t, ID_AA64ISAR1, GPA, 1);
-        t = FIELD_DP64(t, ID_AA64ISAR1, GPI, 0);
         t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
         t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
         t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     cpu->ctr = 0x80038003; /* 32 byte I and D cacheline size, VIPT icache */
     cpu->dcz_blocksize = 7; /*  512 bytes */
 #endif
+
+    /* Default to PAUTH on, with the architected algorithm. */
+    qdev_property_add_static(DEVICE(obj), &arm_cpu_pauth_property);
+    qdev_property_add_static(DEVICE(obj), &arm_cpu_pauth_impdef_property);
 }
 
     aarch64_add_sve_properties(obj);
diff --git a/target/arm/monitor.c b/target/arm/monitor.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/monitor.c
+++ b/target/arm/monitor.c
@@ -XXX,XX +XXX,XX @@ static const char *cpu_model_advertised_features[] = {
     "sve640", "sve768", "sve896", "sve1024", "sve1152", "sve1280",
     "sve1408", "sve1536", "sve1664", "sve1792", "sve1920", "sve2048",
     "kvm-no-adjvtime", "kvm-steal-time",
+    "pauth", "pauth-impdef",
     NULL
 };
 
diff --git a/tests/qtest/arm-cpu-features.c b/tests/qtest/arm-cpu-features.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/arm-cpu-features.c
+++ b/tests/qtest/arm-cpu-features.c
@@ -XXX,XX +XXX,XX @@ static void sve_tests_sve_off_kvm(const void *data)
     qtest_quit(qts);
 }
 
+static void pauth_tests_default(QTestState *qts, const char *cpu_type)
+{
+    assert_has_feature_enabled(qts, cpu_type, "pauth");
+    assert_has_feature_disabled(qts, cpu_type, "pauth-impdef");
+    assert_set_feature(qts, cpu_type, "pauth", false);
+    assert_set_feature(qts, cpu_type, "pauth", true);
+    assert_set_feature(qts, cpu_type, "pauth-impdef", true);
+    assert_set_feature(qts, cpu_type, "pauth-impdef", false);
+    assert_error(qts, cpu_type, "cannot enable pauth-impdef without pauth",
+                 "{ 'pauth': false, 'pauth-impdef': true }");
+}
+
 static void test_query_cpu_model_expansion(const void *data)
 {
     QTestState *qts;
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion(const void *data)
     assert_has_feature_enabled(qts, "cortex-a57", "aarch64");
 
     sve_tests_default(qts, "max");
+    pauth_tests_default(qts, "max");
 
     /* Test that features that depend on KVM generate errors without. */
     assert_error(qts, "max",
-- 
2.20.1

The A32_BANKED_REG_{GET,SET} macros are only used inside target/arm;
move their definitions to cpregs.h.  There's no need to have them
defined in all the code that includes cpu.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpregs.h | 28 ++++++++++++++++++++++++++++
 target/arm/cpu.h    | 27 ---------------------------
 2 files changed, 28 insertions(+), 27 deletions(-)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@ static inline bool arm_cpreg_traps_in_nv(const ARMCPRegInfo *ri)
     return ri->opc1 == 4 || ri->opc1 == 5;
 }
 
+/* Macros for accessing a specified CP register bank */
+#define A32_BANKED_REG_GET(_env, _regname, _secure)    \
+    ((_secure) ? (_env)->cp15._regname##_s : (_env)->cp15._regname##_ns)
+
+#define A32_BANKED_REG_SET(_env, _regname, _secure, _val)   \
+    do {                                                \
+        if (_secure) {                                  \
+            (_env)->cp15._regname##_s = (_val);         \
+        } else {                                        \
+            (_env)->cp15._regname##_ns = (_val);        \
+        }                                               \
+    } while (0)
+
+/*
+ * Macros for automatically accessing a specific CP register bank depending on
+ * the current secure state of the system.  These macros are not intended for
+ * supporting instruction translation reads/writes as these are dependent
+ * solely on the SCR.NS bit and not the mode.
+ */
+#define A32_BANKED_CURRENT_REG_GET(_env, _regname)      \
+    A32_BANKED_REG_GET((_env), _regname,                \
+                       (arm_is_secure(_env) && !arm_el_is_aa64((_env), 3)))
+
+#define A32_BANKED_CURRENT_REG_SET(_env, _regname, _val)    \
+    A32_BANKED_REG_SET((_env), _regname,                    \
+                       (arm_is_secure(_env) && !arm_el_is_aa64((_env), 3)), \
+                       (_val))
+
 #endif /* TARGET_ARM_CPREGS_H */
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool access_secure_reg(CPUARMState *env)
     return ret;
 }
 
-/* Macros for accessing a specified CP register bank */
-#define A32_BANKED_REG_GET(_env, _regname, _secure)    \
-    ((_secure) ? (_env)->cp15._regname##_s : (_env)->cp15._regname##_ns)
-
-#define A32_BANKED_REG_SET(_env, _regname, _secure, _val)   \
-    do {                                                \
-        if (_secure) {                                  \
-            (_env)->cp15._regname##_s = (_val);         \
-        } else {                                        \
-            (_env)->cp15._regname##_ns = (_val);        \
-        }                                               \
-    } while (0)
-
-/* Macros for automatically accessing a specific CP register bank depending on
- * the current secure state of the system.  These macros are not intended for
- * supporting instruction translation reads/writes as these are dependent
- * solely on the SCR.NS bit and not the mode.
- */
-#define A32_BANKED_CURRENT_REG_GET(_env, _regname)      \
-    A32_BANKED_REG_GET((_env), _regname,                \
-                       (arm_is_secure(_env) && !arm_el_is_aa64((_env), 3)))
-
-#define A32_BANKED_CURRENT_REG_SET(_env, _regname, _val)    \
-    A32_BANKED_REG_SET((_env), _regname,                    \
-                       (arm_is_secure(_env) && !arm_el_is_aa64((_env), 3)), \
-                       (_val))
-
 uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
                                  uint32_t cur_el, bool secure);
 
-- 
2.43.0
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

This checks if EL2 is enabled (meaning EL2 registers take effects) in
the current security context.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-2-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
     return arm_is_secure_below_el3(env);
 }
 
+/*
+ * Return true if the current security state has AArch64 EL2 or AArch32 Hyp.
+ * This corresponds to the pseudocode EL2Enabled()
+ */
+static inline bool arm_is_el2_enabled(CPUARMState *env)
+{
+    if (arm_feature(env, ARM_FEATURE_EL2)) {
+        return !arm_is_secure_below_el3(env);
+    }
+    return false;
+}
+
 #else
 static inline bool arm_is_secure_below_el3(CPUARMState *env)
 {
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
 {
     return false;
 }
+
+static inline bool arm_is_el2_enabled(CPUARMState *env)
+{
+    return false;
+}
 #endif
 
 /**
-- 
2.20.1

We would like to move arm_el_is_aa64() to internals.h; however, it is
used by access_secure_reg().  Make that function not be inline, so
that it can stay in cpu.h.

access_secure_reg() is used only in two places:
 * in hflags.c
 * in the user-mode arm emulators, to decide whether to store
   the TLS value in the secure or non-secure banked field

The second of these is not on a super-hot path that would care about
the inlining (and incidentally will always use the NS banked field
because our user-mode CPUs never set ARM_FEATURE_EL3); put the
definition of access_secure_reg() in hflags.c, near its only use
inside target/arm.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.h        | 12 +++---------
 target/arm/tcg/hflags.c |  9 +++++++++
 2 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool arm_el_is_aa64(CPUARMState *env, int el)
     return aa64;
 }
 
-/* Function for determining whether guest cp register reads and writes should
+/*
+ * Function for determining whether guest cp register reads and writes should
  * access the secure or non-secure bank of a cp register.  When EL3 is
  * operating in AArch32 state, the NS-bit determines whether the secure
  * instance of a cp register should be used.  When EL3 is AArch64 (or if
  * it doesn't exist at all) then there is no register banking, and all
  * accesses are to the non-secure version.
  */
-static inline bool access_secure_reg(CPUARMState *env)
-{
-    bool ret = (arm_feature(env, ARM_FEATURE_EL3) &&
-                !arm_el_is_aa64(env, 3) &&
-                !(env->cp15.scr_el3 & SCR_NS));
-
-    return ret;
-}
+bool access_secure_reg(CPUARMState *env);
 
 uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
                                  uint32_t cur_el, bool secure);
diff --git a/target/arm/tcg/hflags.c b/target/arm/tcg/hflags.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/hflags.c
+++ b/target/arm/tcg/hflags.c
@@ -XXX,XX +XXX,XX @@ static bool aprofile_require_alignment(CPUARMState *env, int el, uint64_t sctlr)
 #endif
 }
 
+bool access_secure_reg(CPUARMState *env)
+{
+    bool ret = (arm_feature(env, ARM_FEATURE_EL3) &&
+                !arm_el_is_aa64(env, 3) &&
+                !(env->cp15.scr_el3 & SCR_NS));
+
+    return ret;
+}
+
 static CPUARMTBFlags rebuild_hflags_common(CPUARMState *env, int fp_el,
                                            ARMMMUIdx mmu_idx,
                                            CPUARMTBFlags flags)
-- 
2.43.0
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

When building with GCC 10.2 configured with --extra-cflags=-Os, we get:

  target/arm/m_helper.c: In function ‘arm_v7m_cpu_do_interrupt’:
  target/arm/m_helper.c:1811:16: error: ‘restore_s16_s31’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
   1811 |         if (restore_s16_s31) {
        |                ^
  target/arm/m_helper.c:1350:10: note: ‘restore_s16_s31’ was declared here
   1350 |     bool restore_s16_s31;
        |          ^~~~~~~~~~~~~~~
  cc1: all warnings being treated as errors

Initialize the 'restore_s16_s31' variable to silence the warning.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210119062739.589049-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/m_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
     bool exc_secure = false;
     bool return_to_secure;
     bool ftype;
-    bool restore_s16_s31;
+    bool restore_s16_s31 = false;
 
     /*
      * If we're not in Handler mode then jumps to magic exception-exit
-- 
2.20.1

At the top of linux-user/aarch64/cpu_loop.c we define a set of
macros for reading and writing data and code words, but we never
use these macros.  Delete them.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 linux-user/aarch64/cpu_loop.c | 48 -----------------------------------
 1 file changed, 48 deletions(-)

diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/aarch64/cpu_loop.c
+++ b/linux-user/aarch64/cpu_loop.c
@@ -XXX,XX +XXX,XX @@
 #include "target/arm/syndrome.h"
 #include "target/arm/cpu-features.h"
 
-#define get_user_code_u32(x, gaddr, env)                \
-    ({ abi_long __r = get_user_u32((x), (gaddr));       \
-        if (!__r && bswap_code(arm_sctlr_b(env))) {     \
-            (x) = bswap32(x);                           \
-        }                                               \
-        __r;                                            \
-    })
-
-#define get_user_code_u16(x, gaddr, env)                \
-    ({ abi_long __r = get_user_u16((x), (gaddr));       \
-        if (!__r && bswap_code(arm_sctlr_b(env))) {     \
-            (x) = bswap16(x);                           \
-        }                                               \
-        __r;                                            \
-    })
-
-#define get_user_data_u32(x, gaddr, env)                \
-    ({ abi_long __r = get_user_u32((x), (gaddr));       \
-        if (!__r && arm_cpu_bswap_data(env)) {          \
-            (x) = bswap32(x);                           \
-        }                                               \
-        __r;                                            \
-    })
-
-#define get_user_data_u16(x, gaddr, env)                \
-    ({ abi_long __r = get_user_u16((x), (gaddr));       \
-        if (!__r && arm_cpu_bswap_data(env)) {          \
-            (x) = bswap16(x);                           \
-        }                                               \
-        __r;                                            \
-    })
-
-#define put_user_data_u32(x, gaddr, env)                \
-    ({ typeof(x) __x = (x);                             \
-        if (arm_cpu_bswap_data(env)) {                  \
-            __x = bswap32(__x);                         \
-        }                                               \
-        put_user_u32(__x, (gaddr));                     \
-    })
-
-#define put_user_data_u16(x, gaddr, env)                \
-    ({ typeof(x) __x = (x);                             \
-        if (arm_cpu_bswap_data(env)) {                  \
-            __x = bswap16(__x);                         \
-        }                                               \
-        put_user_u16(__x, (gaddr));                     \
-    })
-
 /* AArch64 main loop */
 void cpu_loop(CPUARMState *env)
 {
-- 
2.43.0
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

With the ARMv8.4-SEL2 extension, EL2 is a legal exception level in
secure mode, though it can only be AArch64.

This patch adds the target EL for exceptions from 64-bit S-EL2.

It also fixes the target EL to EL2 when HCR.{A,F,I}MO are set in secure
mode.  Those values were never used in practice as the effective value of
HCR was always 0 in secure mode.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-7-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c    | 10 +++++-----
 target/arm/op_helper.c |  4 ++--
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const int8_t target_el_table[2][2][2][2][2][4] = {
     {{/* 0   1   1   0 */{ 3,  3,  3, -1 },{ 3, -1, -1,  3 },},
      {/* 0   1   1   1 */{ 3,  3,  3, -1 },{ 3, -1, -1,  3 },},},},},
    {{{{/* 1   0   0   0 */{ 1,  1,  2, -1 },{ 1,  1, -1,  1 },},
-     {/* 1   0   0   1 */{ 2,  2,  2, -1 },{ 1,  1, -1,  1 },},},
-    {{/* 1   0   1   0 */{ 1,  1,  1, -1 },{ 1,  1, -1,  1 },},
-     {/* 1   0   1   1 */{ 2,  2,  2, -1 },{ 1,  1, -1,  1 },},},},
+     {/* 1   0   0   1 */{ 2,  2,  2, -1 },{ 2,  2, -1,  1 },},},
+    {{/* 1   0   1   0 */{ 1,  1,  1, -1 },{ 1,  1,  1,  1 },},
+     {/* 1   0   1   1 */{ 2,  2,  2, -1 },{ 2,  2,  2,  1 },},},},
    {{{/* 1   1   0   0 */{ 3,  3,  3, -1 },{ 3,  3, -1,  3 },},
      {/* 1   1   0   1 */{ 3,  3,  3, -1 },{ 3,  3, -1,  3 },},},
-    {{/* 1   1   1   0 */{ 3,  3,  3, -1 },{ 3,  3, -1,  3 },},
-     {/* 1   1   1   1 */{ 3,  3,  3, -1 },{ 3,  3, -1,  3 },},},},},
+    {{/* 1   1   1   0 */{ 3,  3,  3, -1 },{ 3,  3,  3,  3 },},
+     {/* 1   1   1   1 */{ 3,  3,  3, -1 },{ 3,  3,  3,  3 },},},},},
 };
 
 /*
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome,
         target_el = exception_target_el(env);
         break;
     case CP_ACCESS_TRAP_EL2:
-        /* Requesting a trap to EL2 when we're in EL3 or S-EL0/1 is
+        /* Requesting a trap to EL2 when we're in EL3 is
          * a bug in the access function.
          */
-        assert(!arm_is_secure(env) && arm_current_el(env) != 3);
+        assert(arm_current_el(env) != 3);
         target_el = 2;
         break;
     case CP_ACCESS_TRAP_EL3:
-- 
2.20.1

In linux-user/arm/cpu_loop.c we define a full set of get/put
macros for both code and data (since the endianness handling
is different between the two).  However the only one we actually
use is get_user_code_u32().  Remove the rest.

We leave a comment noting how data-side accesses should be handled
for big-endian, because that's a subtle point and we just removed the
macros that were effectively documenting it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 linux-user/arm/cpu_loop.c | 43 ++++-----------------------------------
 1 file changed, 4 insertions(+), 39 deletions(-)

diff --git a/linux-user/arm/cpu_loop.c b/linux-user/arm/cpu_loop.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/arm/cpu_loop.c
+++ b/linux-user/arm/cpu_loop.c
@@ -XXX,XX +XXX,XX @@
         __r;                                            \
     })
 
-#define get_user_code_u16(x, gaddr, env)                \
-    ({ abi_long __r = get_user_u16((x), (gaddr));       \
-        if (!__r && bswap_code(arm_sctlr_b(env))) {     \
-            (x) = bswap16(x);                           \
-        }                                               \
-        __r;                                            \
-    })
-
-#define get_user_data_u32(x, gaddr, env)                \
-    ({ abi_long __r = get_user_u32((x), (gaddr));       \
-        if (!__r && arm_cpu_bswap_data(env)) {          \
-            (x) = bswap32(x);                           \
-        }                                               \
-        __r;                                            \
-    })
-
-#define get_user_data_u16(x, gaddr, env)                \
-    ({ abi_long __r = get_user_u16((x), (gaddr));       \
-        if (!__r && arm_cpu_bswap_data(env)) {          \
-            (x) = bswap16(x);                           \
-        }                                               \
-        __r;                                            \
-    })
-
-#define put_user_data_u32(x, gaddr, env)                \
-    ({ typeof(x) __x = (x);                             \
-        if (arm_cpu_bswap_data(env)) {                  \
-            __x = bswap32(__x);                         \
-        }                                               \
-        put_user_u32(__x, (gaddr));                     \
-    })
-
-#define put_user_data_u16(x, gaddr, env)                \
-    ({ typeof(x) __x = (x);                             \
-        if (arm_cpu_bswap_data(env)) {                  \
-            __x = bswap16(__x);                         \
-        }                                               \
-        put_user_u16(__x, (gaddr));                     \
-    })
-
+/*
+ * Note that if we need to do data accesses here, they should do a
+ * bswap if arm_cpu_bswap_data() returns true.
+ */
 
 /*
  * Similar to code in accel/tcg/user-exec.c, but outside the execution loop.
-- 
2.43.0
The arm_cpu_data_is_big_endian() and related functions are now used
only in target/arm; they can be moved to internals.h.

The motivation here is that we would like to move arm_current_el()
to internals.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/cpu.h | 48 ------------------------------------------
target/arm/internals.h | 48 ++++++++++++++++++++++++++++++++++++++++++
2 files changed, 48 insertions(+), 48 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Without hardware acceleration, a cryptographically strong
algorithm is too expensive for pauth_computepac.

Even with hardware accel, we are not currently expecting
to link the linux-user binaries to any crypto libraries,
and doing so would generally make the --static build fail.

So choose XXH64 as a reasonably quick and decent hash.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210111235740.462469-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/qemu/xxhash.h | 98 +++++++++++++++++++++++++++++++++++++++
target/arm/cpu.h | 15 ++++--
target/arm/pauth_helper.c | 27 +++++++++--
3 files changed, 131 insertions(+), 9 deletions(-)
diff --git a/include/qemu/xxhash.h b/include/qemu/xxhash.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/xxhash.h
+++ b/include/qemu/xxhash.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t qemu_xxhash6(uint64_t ab, uint64_t cd, uint32_t e,
return qemu_xxhash7(ab, cd, e, f, 0);
}

+/*
+ * Component parts of the XXH64 algorithm from
+ * https://github.com/Cyan4973/xxHash/blob/v0.8.0/xxhash.h
+ *
+ * The complete algorithm looks like
+ *
+ * i = 0;
+ * if (len >= 32) {
+ * v1 = seed + XXH_PRIME64_1 + XXH_PRIME64_2;
+ * v2 = seed + XXH_PRIME64_2;
+ * v3 = seed + 0;
+ * v4 = seed - XXH_PRIME64_1;
+ * do {
+ * v1 = XXH64_round(v1, get64bits(input + i));
+ * v2 = XXH64_round(v2, get64bits(input + i + 8));
+ * v3 = XXH64_round(v3, get64bits(input + i + 16));
+ * v4 = XXH64_round(v4, get64bits(input + i + 24));
+ * } while ((i += 32) <= len);
+ * h64 = XXH64_mergerounds(v1, v2, v3, v4);
+ * } else {
+ * h64 = seed + XXH_PRIME64_5;
+ * }
+ * h64 += len;
+ *
+ * for (; i + 8 <= len; i += 8) {
+ * h64 ^= XXH64_round(0, get64bits(input + i));
+ * h64 = rol64(h64, 27) * XXH_PRIME64_1 + XXH_PRIME64_4;
+ * }
+ * for (; i + 4 <= len; i += 4) {
+ * h64 ^= get32bits(input + i) * PRIME64_1;
+ * h64 = rol64(h64, 23) * XXH_PRIME64_2 + XXH_PRIME64_3;
+ * }
+ * for (; i < len; i += 1) {
+ * h64 ^= get8bits(input + i) * XXH_PRIME64_5;
+ * h64 = rol64(h64, 11) * XXH_PRIME64_1;
+ * }
+ *
+ * return XXH64_avalanche(h64)
+ *
+ * Exposing the pieces instead allows for simplified usage when
+ * the length is a known constant and the inputs are in registers.
+ */
+#define XXH_PRIME64_1 0x9E3779B185EBCA87ULL
+#define XXH_PRIME64_2 0xC2B2AE3D27D4EB4FULL
+#define XXH_PRIME64_3 0x165667B19E3779F9ULL
+#define XXH_PRIME64_4 0x85EBCA77C2B2AE63ULL
+#define XXH_PRIME64_5 0x27D4EB2F165667C5ULL
+
+static inline uint64_t XXH64_round(uint64_t acc, uint64_t input)
+{
+ return rol64(acc + input * XXH_PRIME64_2, 31) * XXH_PRIME64_1;
+}
+
+static inline uint64_t XXH64_mergeround(uint64_t acc, uint64_t val)
+{
+ return (acc ^ XXH64_round(0, val)) * XXH_PRIME64_1 + XXH_PRIME64_4;
+}
+
+static inline uint64_t XXH64_mergerounds(uint64_t v1, uint64_t v2,
+ uint64_t v3, uint64_t v4)
+{
+ uint64_t h64;
+
+ h64 = rol64(v1, 1) + rol64(v2, 7) + rol64(v3, 12) + rol64(v4, 18);
+ h64 = XXH64_mergeround(h64, v1);
+ h64 = XXH64_mergeround(h64, v2);
+ h64 = XXH64_mergeround(h64, v3);
+ h64 = XXH64_mergeround(h64, v4);
+
+ return h64;
+}
+
+static inline uint64_t XXH64_avalanche(uint64_t h64)
+{
+ h64 ^= h64 >> 33;
+ h64 *= XXH_PRIME64_2;
+ h64 ^= h64 >> 29;
+ h64 *= XXH_PRIME64_3;
+ h64 ^= h64 >> 32;
+ return h64;
+}
+
+static inline uint64_t qemu_xxhash64_4(uint64_t a, uint64_t b,
+ uint64_t c, uint64_t d)
+{
+ uint64_t v1 = QEMU_XXHASH_SEED + XXH_PRIME64_1 + XXH_PRIME64_2;
+ uint64_t v2 = QEMU_XXHASH_SEED + XXH_PRIME64_2;
+ uint64_t v3 = QEMU_XXHASH_SEED + 0;
+ uint64_t v4 = QEMU_XXHASH_SEED - XXH_PRIME64_1;
+
+ v1 = XXH64_round(v1, a);
+ v2 = XXH64_round(v2, b);
+ v3 = XXH64_round(v3, c);
+ v4 = XXH64_round(v4, d);
+
+ return XXH64_avalanche(XXH64_mergerounds(v1, v2, v3, v4));
+}
+
#endif /* QEMU_XXHASH_H */
130
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
131
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
132
--- a/target/arm/cpu.h
16
--- a/target/arm/cpu.h
133
+++ b/target/arm/cpu.h
17
+++ b/target/arm/cpu.h
134
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
18
@@ -XXX,XX +XXX,XX @@ static inline bool arm_sctlr_b(CPUARMState *env)
135
static inline bool isar_feature_aa64_pauth(const ARMISARegisters *id)
19
136
{
20
uint64_t arm_sctlr(CPUARMState *env, int el);
137
/*
21
138
- * Note that while QEMU will only implement the architected algorithm
22
-static inline bool arm_cpu_data_is_big_endian_a32(CPUARMState *env,
139
- * QARMA, and thus APA+GPA, the host cpu for kvm may use implementation
23
- bool sctlr_b)
140
- * defined algorithms, and thus API+GPI, and this predicate controls
24
-{
141
- * migration of the 128-bit keys.
25
-#ifdef CONFIG_USER_ONLY
142
+ * Return true if any form of pauth is enabled, as this
26
- /*
143
+ * predicate controls migration of the 128-bit keys.
27
- * In system mode, BE32 is modelled in line with the
144
*/
28
- * architecture (as word-invariant big-endianness), where loads
145
return (id->id_aa64isar1 &
29
- * and stores are done little endian but from addresses which
146
(FIELD_DP64(0, ID_AA64ISAR1, APA, 0xf) |
30
- * are adjusted by XORing with the appropriate constant. So the
147
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_pauth(const ARMISARegisters *id)
31
- * endianness to use for the raw data access is not affected by
148
FIELD_DP64(0, ID_AA64ISAR1, GPI, 0xf))) != 0;
32
- * SCTLR.B.
33
- * In user mode, however, we model BE32 as byte-invariant
34
- * big-endianness (because user-only code cannot tell the
35
- * difference), and so we need to use a data access endianness
36
- * that depends on SCTLR.B.
37
- */
38
- if (sctlr_b) {
39
- return true;
40
- }
41
-#endif
42
- /* In 32bit endianness is determined by looking at CPSR's E bit */
43
- return env->uncached_cpsr & CPSR_E;
44
-}
45
-
46
-static inline bool arm_cpu_data_is_big_endian_a64(int el, uint64_t sctlr)
47
-{
48
- return sctlr & (el ? SCTLR_EE : SCTLR_E0E);
49
-}
50
-
51
-/* Return true if the processor is in big-endian mode. */
52
-static inline bool arm_cpu_data_is_big_endian(CPUARMState *env)
53
-{
54
- if (!is_a64(env)) {
55
- return arm_cpu_data_is_big_endian_a32(env, arm_sctlr_b(env));
56
- } else {
57
- int cur_el = arm_current_el(env);
58
- uint64_t sctlr = arm_sctlr(env, cur_el);
59
- return arm_cpu_data_is_big_endian_a64(cur_el, sctlr);
60
- }
61
-}
62
-
63
#include "exec/cpu-all.h"
64
65
/*
66
@@ -XXX,XX +XXX,XX @@ static inline bool bswap_code(bool sctlr_b)
67
#endif
149
}
68
}
150
69
151
+static inline bool isar_feature_aa64_pauth_arch(const ARMISARegisters *id)
70
-#ifdef CONFIG_USER_ONLY
71
-static inline bool arm_cpu_bswap_data(CPUARMState *env)
72
-{
73
- return TARGET_BIG_ENDIAN ^ arm_cpu_data_is_big_endian(env);
74
-}
75
-#endif
76
-
77
void cpu_get_tb_cpu_state(CPUARMState *env, vaddr *pc,
78
uint64_t *cs_base, uint32_t *flags);
79
80
diff --git a/target/arm/internals.h b/target/arm/internals.h
81
index XXXXXXX..XXXXXXX 100644
82
--- a/target/arm/internals.h
83
+++ b/target/arm/internals.h
84
@@ -XXX,XX +XXX,XX @@ static inline FloatRoundMode arm_rmode_to_sf(ARMFPRounding rmode)
85
return arm_rmode_to_sf_map[rmode];
86
}
87
88
+static inline bool arm_cpu_data_is_big_endian_a32(CPUARMState *env,
89
+ bool sctlr_b)
152
+{
90
+{
91
+#ifdef CONFIG_USER_ONLY
153
+ /*
92
+ /*
154
+ * Return true if pauth is enabled with the architected QARMA algorithm.
93
+ * In system mode, BE32 is modelled in line with the
155
+ * QEMU will always set APA+GPA to the same value.
94
+ * architecture (as word-invariant big-endianness), where loads
95
+ * and stores are done little endian but from addresses which
96
+ * are adjusted by XORing with the appropriate constant. So the
97
+ * endianness to use for the raw data access is not affected by
98
+ * SCTLR.B.
99
+ * In user mode, however, we model BE32 as byte-invariant
100
+ * big-endianness (because user-only code cannot tell the
101
+ * difference), and so we need to use a data access endianness
102
+ * that depends on SCTLR.B.
156
+ */
103
+ */
157
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, APA) != 0;
104
+ if (sctlr_b) {
105
+ return true;
106
+ }
107
+#endif
108
+ /* In 32bit endianness is determined by looking at CPSR's E bit */
109
+ return env->uncached_cpsr & CPSR_E;
158
+}
110
+}
159
+
111
+
160
static inline bool isar_feature_aa64_sb(const ARMISARegisters *id)
112
+static inline bool arm_cpu_data_is_big_endian_a64(int el, uint64_t sctlr)
161
{
162
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, SB) != 0;
163
diff --git a/target/arm/pauth_helper.c b/target/arm/pauth_helper.c
164
index XXXXXXX..XXXXXXX 100644
165
--- a/target/arm/pauth_helper.c
166
+++ b/target/arm/pauth_helper.c
167
@@ -XXX,XX +XXX,XX @@
168
#include "exec/cpu_ldst.h"
169
#include "exec/helper-proto.h"
170
#include "tcg/tcg-gvec-desc.h"
171
+#include "qemu/xxhash.h"
172
173
174
static uint64_t pac_cell_shuffle(uint64_t i)
175
@@ -XXX,XX +XXX,XX @@ static uint64_t tweak_inv_shuffle(uint64_t i)
176
return o;
177
}
178
179
-static uint64_t pauth_computepac(uint64_t data, uint64_t modifier,
180
- ARMPACKey key)
181
+static uint64_t pauth_computepac_architected(uint64_t data, uint64_t modifier,
182
+ ARMPACKey key)
183
{
184
static const uint64_t RC[5] = {
185
0x0000000000000000ull,
186
@@ -XXX,XX +XXX,XX @@ static uint64_t pauth_computepac(uint64_t data, uint64_t modifier,
187
return workingval;
188
}
189
190
+static uint64_t pauth_computepac_impdef(uint64_t data, uint64_t modifier,
191
+ ARMPACKey key)
192
+{
113
+{
193
+ return qemu_xxhash64_4(data, modifier, key.lo, key.hi);
114
+ return sctlr & (el ? SCTLR_EE : SCTLR_E0E);
194
+}
115
+}
195
+
116
+
196
+static uint64_t pauth_computepac(CPUARMState *env, uint64_t data,
117
+/* Return true if the processor is in big-endian mode. */
197
+ uint64_t modifier, ARMPACKey key)
118
+static inline bool arm_cpu_data_is_big_endian(CPUARMState *env)
198
+{
119
+{
199
+ if (cpu_isar_feature(aa64_pauth_arch, env_archcpu(env))) {
120
+ if (!is_a64(env)) {
200
+ return pauth_computepac_architected(data, modifier, key);
121
+ return arm_cpu_data_is_big_endian_a32(env, arm_sctlr_b(env));
201
+ } else {
122
+ } else {
202
+ return pauth_computepac_impdef(data, modifier, key);
123
+ int cur_el = arm_current_el(env);
124
+ uint64_t sctlr = arm_sctlr(env, cur_el);
125
+ return arm_cpu_data_is_big_endian_a64(cur_el, sctlr);
203
+ }
126
+ }
204
+}
127
+}
205
+
128
+
206
static uint64_t pauth_addpac(CPUARMState *env, uint64_t ptr, uint64_t modifier,
129
+#ifdef CONFIG_USER_ONLY
207
ARMPACKey *key, bool data)
130
+static inline bool arm_cpu_bswap_data(CPUARMState *env)
131
+{
132
+ return TARGET_BIG_ENDIAN ^ arm_cpu_data_is_big_endian(env);
133
+}
134
+#endif
135
+
136
static inline void aarch64_save_sp(CPUARMState *env, int el)
208
{
137
{
209
@@ -XXX,XX +XXX,XX @@ static uint64_t pauth_addpac(CPUARMState *env, uint64_t ptr, uint64_t modifier,
138
if (env->pstate & PSTATE_SP) {
210
bot_bit = 64 - param.tsz;
211
ext_ptr = deposit64(ptr, bot_bit, top_bit - bot_bit, ext);
212
213
- pac = pauth_computepac(ext_ptr, modifier, *key);
214
+ pac = pauth_computepac(env, ext_ptr, modifier, *key);
215
216
/*
217
* Check if the ptr has good extension bits and corrupt the
218
@@ -XXX,XX +XXX,XX @@ static uint64_t pauth_auth(CPUARMState *env, uint64_t ptr, uint64_t modifier,
219
uint64_t pac, orig_ptr, test;
220
221
orig_ptr = pauth_original_ptr(ptr, param);
222
- pac = pauth_computepac(orig_ptr, modifier, *key);
223
+ pac = pauth_computepac(env, orig_ptr, modifier, *key);
224
bot_bit = 64 - param.tsz;
225
top_bit = 64 - 8 * param.tbi;
226
227
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(pacga)(CPUARMState *env, uint64_t x, uint64_t y)
228
uint64_t pac;
229
230
pauth_check_trap(env, arm_current_el(env), GETPC());
231
- pac = pauth_computepac(x, y, env->keys.apga);
232
+ pac = pauth_computepac(env, x, y, env->keys.apga);
233
234
return pac & 0xffffffff00000000ull;
235
}
236
--
139
--
237
2.20.1
140
2.43.0
238
239
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

The interface for object_property_add_bool is simpler,
making the code easier to understand.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210111235740.462469-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu64.c | 24 ++++++++++--------------
1 file changed, 10 insertions(+), 14 deletions(-)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void cpu_max_set_sve_max_vq(Object *obj, Visitor *v, const char *name,
cpu->sve_max_vq = max_vq;
}

+/*
+ * Note that cpu_arm_get/set_sve_vq cannot use the simpler
+ * object_property_add_bool interface because they make use
+ * of the contents of "name" to determine which bit on which
+ * to operate.
+ */
static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
set_bit(vq - 1, cpu->sve_vq_init);
}

-static void cpu_arm_get_sve(Object *obj, Visitor *v, const char *name,
- void *opaque, Error **errp)
+static bool cpu_arm_get_sve(Object *obj, Error **errp)
{
ARMCPU *cpu = ARM_CPU(obj);
- bool value = cpu_isar_feature(aa64_sve, cpu);
-
- visit_type_bool(v, name, &value, errp);
+ return cpu_isar_feature(aa64_sve, cpu);
}

-static void cpu_arm_set_sve(Object *obj, Visitor *v, const char *name,
- void *opaque, Error **errp)
+static void cpu_arm_set_sve(Object *obj, bool value, Error **errp)
{
ARMCPU *cpu = ARM_CPU(obj);
- bool value;
uint64_t t;

- if (!visit_type_bool(v, name, &value, errp)) {
- return;
- }
-
if (value && kvm_enabled() && !kvm_arm_sve_supported()) {
error_setg(errp, "'sve' feature not supported by KVM on this host");
return;
@@ -XXX,XX +XXX,XX @@ void aarch64_add_sve_properties(Object *obj)
{
uint32_t vq;

- object_property_add(obj, "sve", "bool", cpu_arm_get_sve,
- cpu_arm_set_sve, NULL, NULL);
+ object_property_add_bool(obj, "sve", cpu_arm_get_sve, cpu_arm_set_sve);

for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
char name[8];
--
2.20.1
Deleted patch
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

In this context, the HCR value is the effective value, and thus is
zero in secure mode. The tests for HCR.{F,I}MO are sufficient.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-1-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.c | 8 ++++----
target/arm/helper.c | 10 ++++------
2 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
break;

case EXCP_VFIQ:
- if (secure || !(hcr_el2 & HCR_FMO) || (hcr_el2 & HCR_TGE)) {
- /* VFIQs are only taken when hypervized and non-secure. */
+ if (!(hcr_el2 & HCR_FMO) || (hcr_el2 & HCR_TGE)) {
+ /* VFIQs are only taken when hypervized. */
return false;
}
return !(env->daif & PSTATE_F);
case EXCP_VIRQ:
- if (secure || !(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) {
- /* VIRQs are only taken when hypervized and non-secure. */
+ if (!(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) {
+ /* VIRQs are only taken when hypervized. */
return false;
}
return !(env->daif & PSTATE_I);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void csselr_write(CPUARMState *env, const ARMCPRegInfo *ri,
static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
{
CPUState *cs = env_cpu(env);
- uint64_t hcr_el2 = arm_hcr_el2_eff(env);
+ bool el1 = arm_current_el(env) == 1;
+ uint64_t hcr_el2 = el1 ? arm_hcr_el2_eff(env) : 0;
uint64_t ret = 0;
- bool allow_virt = (arm_current_el(env) == 1 &&
- (!arm_is_secure_below_el3(env) ||
- (env->cp15.scr_el3 & SCR_EEL2)));

- if (allow_virt && (hcr_el2 & HCR_IMO)) {
+ if (hcr_el2 & HCR_IMO) {
if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
ret |= CPSR_I;
}
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
}
}

- if (allow_virt && (hcr_el2 & HCR_FMO)) {
+ if (hcr_el2 & HCR_FMO) {
if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
ret |= CPSR_F;
}
--
2.20.1
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
The functions arm_current_el() and arm_el_is_aa64() are used only in
2
2
target/arm and in hw/intc/arm_gicv3_cpuif.c. They're functions that
3
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
3
query internal state of the CPU. Move them out of cpu.h and into
4
internals.h.
5
6
This means we need to include internals.h in arm_gicv3_cpuif.c, but
7
this is justifiable because that file is implementing the GICv3 CPU
8
interface, which really is part of the CPU proper; we just ended up
9
implementing it in code in hw/intc/ for historical reasons.
10
11
The motivation for this move is that we'd like to change
12
arm_el_is_aa64() to add a condition that uses cpu_isar_feature();
13
but we don't want to include cpu-features.h in cpu.h.
14
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210112104511.36576-15-remi.denis.courmont@huawei.com
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
17
---
8
target/arm/cpu.h | 2 ++
18
target/arm/cpu.h | 66 --------------------------------------
9
target/arm/internals.h | 2 ++
19
target/arm/internals.h | 67 +++++++++++++++++++++++++++++++++++++++
10
target/arm/helper.c | 6 ++++++
20
hw/intc/arm_gicv3_cpuif.c | 1 +
11
target/arm/tlb_helper.c | 3 +++
21
target/arm/arch_dump.c | 1 +
12
4 files changed, 13 insertions(+)
22
4 files changed, 69 insertions(+), 66 deletions(-)
13
23
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
24
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.h
26
--- a/target/arm/cpu.h
17
+++ b/target/arm/cpu.h
27
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
28
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, ARMSecuritySpace space);
19
#define HCR_TWEDEN (1ULL << 59)
29
uint64_t arm_hcr_el2_eff(CPUARMState *env);
20
#define HCR_TWEDEL MAKE_64BIT_MASK(60, 4)
30
uint64_t arm_hcrx_el2_eff(CPUARMState *env);
21
31
22
+#define HPFAR_NS (1ULL << 63)
32
-/* Return true if the specified exception level is running in AArch64 state. */
23
+
33
-static inline bool arm_el_is_aa64(CPUARMState *env, int el)
24
#define SCR_NS (1U << 0)
34
-{
25
#define SCR_IRQ (1U << 1)
35
- /* This isn't valid for EL0 (if we're in EL0, is_a64() is what you want,
26
#define SCR_FIQ (1U << 2)
36
- * and if we're not in EL0 then the state of EL0 isn't well defined.)
37
- */
38
- assert(el >= 1 && el <= 3);
39
- bool aa64 = arm_feature(env, ARM_FEATURE_AARCH64);
40
-
41
- /* The highest exception level is always at the maximum supported
42
- * register width, and then lower levels have a register width controlled
43
- * by bits in the SCR or HCR registers.
44
- */
45
- if (el == 3) {
46
- return aa64;
47
- }
48
-
49
- if (arm_feature(env, ARM_FEATURE_EL3) &&
50
- ((env->cp15.scr_el3 & SCR_NS) || !(env->cp15.scr_el3 & SCR_EEL2))) {
51
- aa64 = aa64 && (env->cp15.scr_el3 & SCR_RW);
52
- }
53
-
54
- if (el == 2) {
55
- return aa64;
56
- }
57
-
58
- if (arm_is_el2_enabled(env)) {
59
- aa64 = aa64 && (env->cp15.hcr_el2 & HCR_RW);
60
- }
61
-
62
- return aa64;
63
-}
64
-
65
/*
66
* Function for determining whether guest cp register reads and writes should
67
* access the secure or non-secure bank of a cp register. When EL3 is
68
@@ -XXX,XX +XXX,XX @@ static inline bool arm_v7m_is_handler_mode(CPUARMState *env)
69
return env->v7m.exception != 0;
70
}
71
72
-/* Return the current Exception Level (as per ARMv8; note that this differs
73
- * from the ARMv7 Privilege Level).
74
- */
75
-static inline int arm_current_el(CPUARMState *env)
76
-{
77
- if (arm_feature(env, ARM_FEATURE_M)) {
78
- return arm_v7m_is_handler_mode(env) ||
79
- !(env->v7m.control[env->v7m.secure] & 1);
80
- }
81
-
82
- if (is_a64(env)) {
83
- return extract32(env->pstate, 2, 2);
84
- }
85
-
86
- switch (env->uncached_cpsr & 0x1f) {
87
- case ARM_CPU_MODE_USR:
88
- return 0;
89
- case ARM_CPU_MODE_HYP:
90
- return 2;
91
- case ARM_CPU_MODE_MON:
92
- return 3;
93
- default:
94
- if (arm_is_secure(env) && !arm_el_is_aa64(env, 3)) {
95
- /* If EL3 is 32-bit then all secure privileged modes run in
96
- * EL3
97
- */
98
- return 3;
99
- }
100
-
101
- return 1;
102
- }
103
-}
104
-
105
/**
106
* write_list_to_cpustate
107
* @cpu: ARMCPU
27
diff --git a/target/arm/internals.h b/target/arm/internals.h
108
diff --git a/target/arm/internals.h b/target/arm/internals.h
28
index XXXXXXX..XXXXXXX 100644
109
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/internals.h
110
--- a/target/arm/internals.h
30
+++ b/target/arm/internals.h
111
+++ b/target/arm/internals.h
31
@@ -XXX,XX +XXX,XX @@ typedef enum ARMFaultType {
112
@@ -XXX,XX +XXX,XX @@ static inline FloatRoundMode arm_rmode_to_sf(ARMFPRounding rmode)
32
* @s2addr: Address that caused a fault at stage 2
113
return arm_rmode_to_sf_map[rmode];
33
* @stage2: True if we faulted at stage 2
34
* @s1ptw: True if we faulted at stage 2 while doing a stage 1 page-table walk
35
+ * @s1ns: True if we faulted on a non-secure IPA while in secure state
36
* @ea: True if we should set the EA (external abort type) bit in syndrome
37
*/
38
typedef struct ARMMMUFaultInfo ARMMMUFaultInfo;
39
@@ -XXX,XX +XXX,XX @@ struct ARMMMUFaultInfo {
40
int domain;
41
bool stage2;
42
bool s1ptw;
43
+ bool s1ns;
44
bool ea;
45
};
46
47
diff --git a/target/arm/helper.c b/target/arm/helper.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/helper.c
50
+++ b/target/arm/helper.c
51
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
52
target_el = 3;
53
} else {
54
env->cp15.hpfar_el2 = extract64(fi.s2addr, 12, 47) << 4;
55
+ if (arm_is_secure_below_el3(env) && fi.s1ns) {
56
+ env->cp15.hpfar_el2 |= HPFAR_NS;
57
+ }
58
target_el = 2;
59
}
60
take_exc = true;
61
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
62
fi->s2addr = addr;
63
fi->stage2 = true;
64
fi->s1ptw = true;
65
+ fi->s1ns = !*is_secure;
66
return ~0;
67
}
68
if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
69
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
70
fi->s2addr = addr;
71
fi->stage2 = true;
72
fi->s1ptw = true;
73
+ fi->s1ns = !*is_secure;
74
return ~0;
75
}
76
77
@@ -XXX,XX +XXX,XX @@ do_fault:
78
/* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */
79
fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_Stage2 ||
80
mmu_idx == ARMMMUIdx_Stage2_S);
81
+ fi->s1ns = mmu_idx == ARMMMUIdx_Stage2;
82
return true;
83
}
114
}
84
115
85
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
116
+/* Return true if the specified exception level is running in AArch64 state. */
86
index XXXXXXX..XXXXXXX 100644
117
+static inline bool arm_el_is_aa64(CPUARMState *env, int el)
87
--- a/target/arm/tlb_helper.c
118
+{
88
+++ b/target/arm/tlb_helper.c
119
+ /*
89
@@ -XXX,XX +XXX,XX @@ static void QEMU_NORETURN arm_deliver_fault(ARMCPU *cpu, vaddr addr,
120
+ * This isn't valid for EL0 (if we're in EL0, is_a64() is what you want,
90
if (fi->stage2) {
121
+ * and if we're not in EL0 then the state of EL0 isn't well defined.)
91
target_el = 2;
122
+ */
92
env->cp15.hpfar_el2 = extract64(fi->s2addr, 12, 47) << 4;
123
+ assert(el >= 1 && el <= 3);
93
+ if (arm_is_secure_below_el3(env) && fi->s1ns) {
124
+ bool aa64 = arm_feature(env, ARM_FEATURE_AARCH64);
94
+ env->cp15.hpfar_el2 |= HPFAR_NS;
125
+
126
+ /*
127
+ * The highest exception level is always at the maximum supported
128
+ * register width, and then lower levels have a register width controlled
129
+ * by bits in the SCR or HCR registers.
130
+ */
131
+ if (el == 3) {
132
+ return aa64;
133
+ }
134
+
135
+ if (arm_feature(env, ARM_FEATURE_EL3) &&
136
+ ((env->cp15.scr_el3 & SCR_NS) || !(env->cp15.scr_el3 & SCR_EEL2))) {
137
+ aa64 = aa64 && (env->cp15.scr_el3 & SCR_RW);
138
+ }
139
+
140
+ if (el == 2) {
141
+ return aa64;
142
+ }
143
+
144
+ if (arm_is_el2_enabled(env)) {
145
+ aa64 = aa64 && (env->cp15.hcr_el2 & HCR_RW);
146
+ }
147
+
148
+ return aa64;
149
+}
150
+
151
+/*
152
+ * Return the current Exception Level (as per ARMv8; note that this differs
153
+ * from the ARMv7 Privilege Level).
154
+ */
155
+static inline int arm_current_el(CPUARMState *env)
156
+{
157
+ if (arm_feature(env, ARM_FEATURE_M)) {
158
+ return arm_v7m_is_handler_mode(env) ||
159
+ !(env->v7m.control[env->v7m.secure] & 1);
160
+ }
161
+
162
+ if (is_a64(env)) {
163
+ return extract32(env->pstate, 2, 2);
164
+ }
165
+
166
+ switch (env->uncached_cpsr & 0x1f) {
167
+ case ARM_CPU_MODE_USR:
168
+ return 0;
169
+ case ARM_CPU_MODE_HYP:
170
+ return 2;
171
+ case ARM_CPU_MODE_MON:
172
+ return 3;
173
+ default:
174
+ if (arm_is_secure(env) && !arm_el_is_aa64(env, 3)) {
175
+ /* If EL3 is 32-bit then all secure privileged modes run in EL3 */
176
+ return 3;
95
+ }
177
+ }
96
}
178
+
97
same_el = (arm_current_el(env) == target_el);
179
+ return 1;
98
180
+ }
181
+}
182
+
183
static inline bool arm_cpu_data_is_big_endian_a32(CPUARMState *env,
184
bool sctlr_b)
185
{
186
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
187
index XXXXXXX..XXXXXXX 100644
188
--- a/hw/intc/arm_gicv3_cpuif.c
189
+++ b/hw/intc/arm_gicv3_cpuif.c
190
@@ -XXX,XX +XXX,XX @@
191
#include "cpu.h"
192
#include "target/arm/cpregs.h"
193
#include "target/arm/cpu-features.h"
194
+#include "target/arm/internals.h"
195
#include "system/tcg.h"
196
#include "system/qtest.h"
197
198
diff --git a/target/arm/arch_dump.c b/target/arm/arch_dump.c
199
index XXXXXXX..XXXXXXX 100644
200
--- a/target/arm/arch_dump.c
201
+++ b/target/arm/arch_dump.c
202
@@ -XXX,XX +XXX,XX @@
203
#include "elf.h"
204
#include "system/dump.h"
205
#include "cpu-features.h"
206
+#include "internals.h"
207
208
/* struct user_pt_regs from arch/arm64/include/uapi/asm/ptrace.h */
209
struct aarch64_user_regs {
99
--
210
--
100
2.20.1
211
2.43.0
101
102
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
The definition of SCR_EL3.RW says that its effective value is 1 if:
2
- EL2 is implemented and does not support AArch32, and SCR_EL3.NS is 1
3
- the effective value of SCR_EL3.{EEL2,NS} is {1,0} (i.e. we are
4
Secure and Secure EL2 is disabled)
2
5
3
This adds handling for the SCR_EL3.EEL2 bit.
6
We implement the second of these in arm_el_is_aa64(), but forgot the
7
first.
4
8
5
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
9
Provide a new function arm_scr_rw_eff() to return the effective
6
Message-id: 20210112104511.36576-17-remi.denis.courmont@huawei.com
10
value of SCR_EL3.RW, and use it in arm_el_is_aa64() and the other
7
[PMM: Applied fixes for review issues noted by RTH:
11
places that currently look directly at the bit value.
8
- check for FEATURE_AARCH64 before checking sel2 isar feature
12
9
- correct the commit message subject line]
13
(scr_write() enforces that the RW bit is RAO/WI if neither EL1 nor
EL2 have AArch32 support, but if EL1 does but EL2 does not then the
bit must still be writeable.)

This will mean that if code at EL3 attempts to perform an exception
return to AArch32 EL2 when EL2 is AArch64-only we will correctly
handle this as an illegal exception return: it will be caught by the
"return to an EL which is configured for a different register width"
check in HELPER(exception_return).

We do already have some CPU types which don't implement AArch32
above EL0, so this is technically a bug; it doesn't seem worth
backporting to stable because no sensible guest code will be
deliberately attempting to set the RW bit to a value corresponding
to an unimplemented execution state and then checking that we
did the right thing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/internals.h | 26 +++++++++++++++++++++++---
 target/arm/helper.c    |  4 ++--
 2 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline FloatRoundMode arm_rmode_to_sf(ARMFPRounding rmode)
     return arm_rmode_to_sf_map[rmode];
 }
 
+/* Return the effective value of SCR_EL3.RW */
+static inline bool arm_scr_rw_eff(CPUARMState *env)
+{
+    /*
+     * SCR_EL3.RW has an effective value of 1 if:
+     *  - we are NS and EL2 is implemented but doesn't support AArch32
+     *  - we are S and EL2 is enabled (in which case it must be AArch64)
+     */
+    ARMCPU *cpu = env_archcpu(env);
+
+    if (env->cp15.scr_el3 & SCR_RW) {
+        return true;
+    }
+    if (env->cp15.scr_el3 & SCR_NS) {
+        return arm_feature(env, ARM_FEATURE_EL2) &&
+            !cpu_isar_feature(aa64_aa32_el2, cpu);
+    } else {
+        return env->cp15.scr_el3 & SCR_EEL2;
+    }
+}
+
 /* Return true if the specified exception level is running in AArch64 state. */
 static inline bool arm_el_is_aa64(CPUARMState *env, int el)
 {
@@ -XXX,XX +XXX,XX @@ static inline bool arm_el_is_aa64(CPUARMState *env, int el)
         return aa64;
     }
 
-    if (arm_feature(env, ARM_FEATURE_EL3) &&
-        ((env->cp15.scr_el3 & SCR_NS) || !(env->cp15.scr_el3 & SCR_EEL2))) {
-        aa64 = aa64 && (env->cp15.scr_el3 & SCR_RW);
+    if (arm_feature(env, ARM_FEATURE_EL3)) {
+        aa64 = aa64 && arm_scr_rw_eff(env);
     }
 
     if (el == 2) {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
     uint64_t hcr_el2;
 
     if (arm_feature(env, ARM_FEATURE_EL3)) {
-        rw = ((env->cp15.scr_el3 & SCR_RW) == SCR_RW);
+        rw = arm_scr_rw_eff(env);
     } else {
         /*
          * Either EL2 is the highest EL (and so the EL2 register width
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
 
     switch (new_el) {
     case 3:
-        is_aa64 = (env->cp15.scr_el3 & SCR_RW) != 0;
+        is_aa64 = arm_scr_rw_eff(env);
         break;
     case 2:
         hcr = arm_hcr_el2_eff(env);
--
2.43.0
diff view generated by jsdifflib
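The effective-RW rule that the commit message describes can be modelled outside QEMU as a small pure function. This is an illustrative sketch only: the name `scr_rw_eff` and the bit positions are made up for the example and are not the in-tree helper or the architectural encodings.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative bit positions only, not the architectural SCR_EL3 layout. */
#define SCR_NS   (1u << 0)
#define SCR_RW   (1u << 10)
#define SCR_EEL2 (1u << 18)

/*
 * Model of the effective SCR_EL3.RW value: the bit reads as 1 if it is
 * set, if we are Non-secure and EL2 exists but is AArch64-only, or if
 * we are Secure and Secure EL2 is enabled (which implies AArch64).
 */
static bool scr_rw_eff(uint32_t scr_el3, bool el2_present, bool el2_has_aa32)
{
    if (scr_el3 & SCR_RW) {
        return true;
    }
    if (scr_el3 & SCR_NS) {
        return el2_present && !el2_has_aa32;
    }
    return (scr_el3 & SCR_EEL2) != 0;
}
```

The point of the "effective" form is that callers such as the exception-return width check can consult one predicate instead of open-coding the NS/EEL2 cases.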
When EL1 doesn't support AArch32, the HCR_EL2.RW bit is supposed to
be RAO/WI. Enforce the RAO/WI behaviour.

Note that we handle "reset value should honour RES1 bits" in the same
way that SCR_EL3 does, via a reset function.

We do already have some CPU types which don't implement AArch32
above EL0, so this is technically a bug; it doesn't seem worth
backporting to stable because no sensible guest code will be
deliberately attempting to set the RW bit to a value corresponding
to an unimplemented execution state and then checking that we
did the right thing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
     /* Clear RES0 bits.  */
     value &= valid_mask;
 
+    /* RW is RAO/WI if EL1 is AArch64 only */
+    if (!cpu_isar_feature(aa64_aa32_el1, cpu)) {
+        value |= HCR_RW;
+    }
+
     /*
      * These bits change the MMU setup:
      * HCR_VM enables stage 2 translation
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
     do_hcr_write(env, value, MAKE_64BIT_MASK(32, 32));
 }
 
+static void hcr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    /* hcr_write will set the RES1 bits on an AArch64-only CPU */
+    hcr_write(env, ri, 0);
+}
+
 /*
  * Return the effective value of HCR_EL2, at the given security state.
  * Bits that are not included here:
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
       .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
       .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
       .nv2_redirect_offset = 0x78,
+      .resetfn = hcr_reset,
       .writefn = hcr_write, .raw_writefn = raw_write },
     { .name = "HCR", .state = ARM_CP_STATE_AA32,
       .type = ARM_CP_ALIAS | ARM_CP_IO,
--
2.43.0
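The RAO/WI behaviour enforced above amounts to OR-ing the bit in on every write, with reset handled as a write of zero so the RES1 bits are picked up automatically. A minimal model of that write path, using a hypothetical name (`hcr_write_model`) and an illustrative bit position:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define HCR_RW (1ull << 31)   /* illustrative position for the example */

/*
 * Model of do_hcr_write(): clear RES0 bits via the valid mask, then
 * force RW to 1 when EL1 cannot run AArch32 (RAO/WI behaviour).
 * A reset is modelled as a write of 0, which still sets the RES1 bit.
 */
static uint64_t hcr_write_model(uint64_t value, uint64_t valid_mask,
                                bool el1_has_aa32)
{
    value &= valid_mask;
    if (!el1_has_aa32) {
        value |= HCR_RW;
    }
    return value;
}
```

Routing reset through the write function is the design choice here: the RAO/WI rule lives in exactly one place, so reset and guest writes cannot disagree.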
We already call env_archcpu() multiple times within the
exception_return helper function, and we're about to want to
add another use of the ARMCPU pointer. Add a local variable
cpu so we can call env_archcpu() just once.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/tcg/helper-a64.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.c
+++ b/target/arm/tcg/helper-a64.c
@@ -XXX,XX +XXX,XX @@ static void cpsr_write_from_spsr_elx(CPUARMState *env,
 
 void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
 {
+    ARMCPU *cpu = env_archcpu(env);
     int cur_el = arm_current_el(env);
     unsigned int spsr_idx = aarch64_banked_spsr_index(cur_el);
     uint32_t spsr = env->banked_spsr[spsr_idx];
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
     }
 
     bql_lock();
-    arm_call_pre_el_change_hook(env_archcpu(env));
+    arm_call_pre_el_change_hook(cpu);
     bql_unlock();
 
     if (!return_to_aa64) {
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
         int tbii;
 
         env->aarch64 = true;
-        spsr &= aarch64_pstate_valid_mask(&env_archcpu(env)->isar);
+        spsr &= aarch64_pstate_valid_mask(&cpu->isar);
         pstate_write(env, spsr);
         if (!arm_singlestep_active(env)) {
             env->pstate &= ~PSTATE_SS;
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
     aarch64_sve_change_el(env, cur_el, new_el, return_to_aa64);
 
     bql_lock();
-    arm_call_el_change_hook(env_archcpu(env));
+    arm_call_el_change_hook(cpu);
     bql_unlock();
 
     return;
--
2.43.0
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
In the Arm ARM, rule R_TYTWB states that returning to AArch32
2
is an illegal exception return if:
3
* AArch32 is not supported at any exception level
4
* the target EL is configured for AArch64 via SCR_EL3.RW
5
or HCR_EL2.RW or via CPU state at reset
2
6
3
Do not assume that EL2 is available in and only in non-secure context.
7
We check the second of these, but not the first (which can only be
4
That equivalence is broken by ARMv8.4-SEL2.
8
relevant for the case of a return to EL0, because if AArch32 is not
9
supported at one of the higher ELs then the RW bits will have an
10
effective value of 1 and the the "configured for AArch64" condition
11
will hold also).
5
12
6
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
13
Add the missing condition. Although this is technically a bug
14
(because we have one AArch64-only CPU: a64fx) it isn't worth
15
backporting to stable because no sensible guest code will
16
deliberately try to return to a nonexistent execution state
17
to check that it gets an illegal exception return.
18
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210112104511.36576-3-remi.denis.courmont@huawei.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
21
---
11
target/arm/cpu.h | 4 ++--
22
target/arm/tcg/helper-a64.c | 5 +++++
12
target/arm/helper-a64.c | 8 +-------
23
1 file changed, 5 insertions(+)
13
target/arm/helper.c | 33 +++++++++++++--------------------
14
3 files changed, 16 insertions(+), 29 deletions(-)
15
24
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
25
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
17
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpu.h
27
--- a/target/arm/tcg/helper-a64.c
19
+++ b/target/arm/cpu.h
28
+++ b/target/arm/tcg/helper-a64.c
20
@@ -XXX,XX +XXX,XX @@ static inline bool arm_el_is_aa64(CPUARMState *env, int el)
21
return aa64;
22
}
23
24
- if (arm_feature(env, ARM_FEATURE_EL2) && !arm_is_secure_below_el3(env)) {
25
+ if (arm_is_el2_enabled(env)) {
26
aa64 = aa64 && (env->cp15.hcr_el2 & HCR_RW);
27
}
28
29
@@ -XXX,XX +XXX,XX @@ static inline int arm_debug_target_el(CPUARMState *env)
30
bool secure = arm_is_secure(env);
31
bool route_to_el2 = false;
32
33
- if (arm_feature(env, ARM_FEATURE_EL2) && !secure) {
34
+ if (arm_is_el2_enabled(env)) {
35
route_to_el2 = env->cp15.hcr_el2 & HCR_TGE ||
36
env->cp15.mdcr_el2 & MDCR_TDE;
37
}
38
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/helper-a64.c
41
+++ b/target/arm/helper-a64.c
42
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
43
if (new_el == -1) {
44
goto illegal_return;
45
}
46
- if (new_el > cur_el
47
- || (new_el == 2 && !arm_feature(env, ARM_FEATURE_EL2))) {
48
+ if (new_el > cur_el || (new_el == 2 && !arm_is_el2_enabled(env))) {
49
/* Disallow return to an EL which is unimplemented or higher
50
* than the current one.
51
*/
52
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
29
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
53
goto illegal_return;
30
goto illegal_return;
54
}
31
}
55
32
56
- if (new_el == 2 && arm_is_secure_below_el3(env)) {
33
+ if (!return_to_aa64 && !cpu_isar_feature(aa64_aa32, cpu)) {
57
- /* Return to the non-existent secure-EL2 */
34
+ /* Return to AArch32 when CPU is AArch64-only */
58
- goto illegal_return;
35
+ goto illegal_return;
59
- }
36
+ }
60
-
37
+
61
if (new_el == 1 && (arm_hcr_el2_eff(env) & HCR_TGE)) {
38
if (new_el == 1 && (arm_hcr_el2_eff(env) & HCR_TGE)) {
62
goto illegal_return;
39
goto illegal_return;
63
}
40
}
64
diff --git a/target/arm/helper.c b/target/arm/helper.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/helper.c
67
+++ b/target/arm/helper.c
68
@@ -XXX,XX +XXX,XX @@ static CPAccessResult cpacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
69
{
70
if (arm_feature(env, ARM_FEATURE_V8)) {
71
/* Check if CPACR accesses are to be trapped to EL2 */
72
- if (arm_current_el(env) == 1 &&
73
- (env->cp15.cptr_el[2] & CPTR_TCPAC) && !arm_is_secure(env)) {
74
+ if (arm_current_el(env) == 1 && arm_is_el2_enabled(env) &&
75
+ (env->cp15.cptr_el[2] & CPTR_TCPAC)) {
76
return CP_ACCESS_TRAP_EL2;
77
/* Check if CPACR accesses are to be trapped to EL3 */
78
} else if (arm_current_el(env) < 3 &&
79
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_counter_access(CPUARMState *env, int timeridx,
80
bool isread)
81
{
82
unsigned int cur_el = arm_current_el(env);
83
- bool secure = arm_is_secure(env);
84
+ bool has_el2 = arm_is_el2_enabled(env);
85
uint64_t hcr = arm_hcr_el2_eff(env);
86
87
switch (cur_el) {
88
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_counter_access(CPUARMState *env, int timeridx,
89
}
90
} else {
91
/* If HCR_EL2.<E2H> == 0: check CNTHCTL_EL2.EL1PCEN. */
92
- if (arm_feature(env, ARM_FEATURE_EL2) &&
93
- timeridx == GTIMER_PHYS && !secure &&
94
+ if (has_el2 && timeridx == GTIMER_PHYS &&
95
!extract32(env->cp15.cnthctl_el2, 1, 1)) {
96
return CP_ACCESS_TRAP_EL2;
97
}
98
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_counter_access(CPUARMState *env, int timeridx,
99
100
case 1:
101
/* Check CNTHCTL_EL2.EL1PCTEN, which changes location based on E2H. */
102
- if (arm_feature(env, ARM_FEATURE_EL2) &&
103
- timeridx == GTIMER_PHYS && !secure &&
104
+ if (has_el2 && timeridx == GTIMER_PHYS &&
105
(hcr & HCR_E2H
106
? !extract32(env->cp15.cnthctl_el2, 10, 1)
107
: !extract32(env->cp15.cnthctl_el2, 0, 1))) {
108
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_timer_access(CPUARMState *env, int timeridx,
109
bool isread)
110
{
111
unsigned int cur_el = arm_current_el(env);
112
- bool secure = arm_is_secure(env);
113
+ bool has_el2 = arm_is_el2_enabled(env);
114
uint64_t hcr = arm_hcr_el2_eff(env);
115
116
switch (cur_el) {
117
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_timer_access(CPUARMState *env, int timeridx,
118
/* fall through */
119
120
case 1:
121
- if (arm_feature(env, ARM_FEATURE_EL2) &&
122
- timeridx == GTIMER_PHYS && !secure) {
123
+ if (has_el2 && timeridx == GTIMER_PHYS) {
124
if (hcr & HCR_E2H) {
125
/* If HCR_EL2.<E2H,TGE> == '10': check CNTHCTL_EL2.EL1PTEN. */
126
if (!extract32(env->cp15.cnthctl_el2, 11, 1)) {
127
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo strongarm_cp_reginfo[] = {
128
129
static uint64_t midr_read(CPUARMState *env, const ARMCPRegInfo *ri)
130
{
131
- ARMCPU *cpu = env_archcpu(env);
132
unsigned int cur_el = arm_current_el(env);
133
- bool secure = arm_is_secure(env);
134
135
- if (arm_feature(&cpu->env, ARM_FEATURE_EL2) && !secure && cur_el == 1) {
136
+ if (arm_is_el2_enabled(env) && cur_el == 1) {
137
return env->cp15.vpidr_el2;
138
}
139
return raw_read(env, ri);
140
@@ -XXX,XX +XXX,XX @@ static uint64_t mpidr_read_val(CPUARMState *env)
141
static uint64_t mpidr_read(CPUARMState *env, const ARMCPRegInfo *ri)
142
{
143
unsigned int cur_el = arm_current_el(env);
144
- bool secure = arm_is_secure(env);
145
146
- if (arm_feature(env, ARM_FEATURE_EL2) && !secure && cur_el == 1) {
147
+ if (arm_is_el2_enabled(env) && cur_el == 1) {
148
return env->cp15.vmpidr_el2;
149
}
150
return mpidr_read_val(env);
151
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff(CPUARMState *env)
152
{
153
uint64_t ret = env->cp15.hcr_el2;
154
155
- if (arm_is_secure_below_el3(env)) {
156
+ if (!arm_is_el2_enabled(env)) {
157
/*
158
* "This register has no effect if EL2 is not enabled in the
159
* current Security state". This is ARMv8.4-SecEL2 speak for
160
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
161
/* CPTR_EL2. Since TZ and TFP are positive,
162
* they will be zero when EL2 is not present.
163
*/
164
- if (el <= 2 && !arm_is_secure_below_el3(env)) {
165
+ if (el <= 2 && arm_is_el2_enabled(env)) {
166
if (env->cp15.cptr_el[2] & CPTR_TZ) {
167
return 2;
168
}
169
@@ -XXX,XX +XXX,XX @@ static int bad_mode_switch(CPUARMState *env, int mode, CPSRWriteType write_type)
170
}
171
return 0;
172
case ARM_CPU_MODE_HYP:
173
- return !arm_feature(env, ARM_FEATURE_EL2)
174
- || arm_current_el(env) < 2 || arm_is_secure_below_el3(env);
175
+ return !arm_is_el2_enabled(env) || arm_current_el(env) < 2;
176
case ARM_CPU_MODE_MON:
177
return arm_current_el(env) < 3;
178
default:
179
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
180
181
/* CPTR_EL2 : present in v7VE or v8 */
182
if (cur_el <= 2 && extract32(env->cp15.cptr_el[2], 10, 1)
183
- && !arm_is_secure_below_el3(env)) {
184
+ && arm_is_el2_enabled(env)) {
185
/* Trap FP ops at EL2, NS-EL1 or NS-EL0 to EL2 */
186
return 2;
187
}
188
--
41
--
189
2.20.1
42
2.43.0
190
191
diff view generated by jsdifflib
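The two R_TYTWB conditions the commit message walks through can be modelled as a single predicate. This is a sketch with hypothetical names and flattened boolean inputs, not the helper's actual signature:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Model of the illegal-exception-return rule for returns to AArch32:
 * illegal if the CPU implements no AArch32 at any EL, or if the
 * target EL is configured for AArch64 (effective RW bit set).
 */
static bool return_is_illegal(bool return_to_aa64, bool cpu_has_aa32,
                              bool target_el_is_aa64)
{
    if (return_to_aa64) {
        return false;           /* rule only constrains returns to AArch32 */
    }
    if (!cpu_has_aa32) {
        return true;            /* first condition: no AArch32 at all */
    }
    if (target_el_is_aa64) {
        return true;            /* second condition: target EL is AArch64 */
    }
    return false;
}
```

As the commit message notes, the first condition only matters for returns to EL0: for any higher target EL, a missing AArch32 implementation already makes the effective RW bit 1, so the second condition fires.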
Deleted patch
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

This adds a common helper to compute the effective value of MDCR_EL2.
That is the actual value if EL2 is enabled in the current security
context, or 0 elsewise.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-5-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 38 ++++++++++++++++++++++----------------
 1 file changed, 22 insertions(+), 16 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_trap_aa32s_el1(CPUARMState *env,
     return CP_ACCESS_TRAP_UNCATEGORIZED;
 }
 
+static uint64_t arm_mdcr_el2_eff(CPUARMState *env)
+{
+    return arm_is_el2_enabled(env) ? env->cp15.mdcr_el2 : 0;
+}
+
 /* Check for traps to "powerdown debug" registers, which are controlled
  * by MDCR.TDOSA
  */
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdosa(CPUARMState *env, const ARMCPRegInfo *ri,
                                    bool isread)
 {
     int el = arm_current_el(env);
-    bool mdcr_el2_tdosa = (env->cp15.mdcr_el2 & MDCR_TDOSA) ||
-                          (env->cp15.mdcr_el2 & MDCR_TDE) ||
+    uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
+    bool mdcr_el2_tdosa = (mdcr_el2 & MDCR_TDOSA) || (mdcr_el2 & MDCR_TDE) ||
                           (arm_hcr_el2_eff(env) & HCR_TGE);
 
-    if (el < 2 && mdcr_el2_tdosa && !arm_is_secure_below_el3(env)) {
+    if (el < 2 && mdcr_el2_tdosa) {
         return CP_ACCESS_TRAP_EL2;
     }
     if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDOSA)) {
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdra(CPUARMState *env, const ARMCPRegInfo *ri,
                                   bool isread)
 {
     int el = arm_current_el(env);
-    bool mdcr_el2_tdra = (env->cp15.mdcr_el2 & MDCR_TDRA) ||
-                         (env->cp15.mdcr_el2 & MDCR_TDE) ||
+    uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
+    bool mdcr_el2_tdra = (mdcr_el2 & MDCR_TDRA) || (mdcr_el2 & MDCR_TDE) ||
                          (arm_hcr_el2_eff(env) & HCR_TGE);
 
-    if (el < 2 && mdcr_el2_tdra && !arm_is_secure_below_el3(env)) {
+    if (el < 2 && mdcr_el2_tdra) {
         return CP_ACCESS_TRAP_EL2;
     }
     if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri,
                                  bool isread)
 {
     int el = arm_current_el(env);
-    bool mdcr_el2_tda = (env->cp15.mdcr_el2 & MDCR_TDA) ||
-                        (env->cp15.mdcr_el2 & MDCR_TDE) ||
+    uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
+    bool mdcr_el2_tda = (mdcr_el2 & MDCR_TDA) || (mdcr_el2 & MDCR_TDE) ||
                         (arm_hcr_el2_eff(env) & HCR_TGE);
 
-    if (el < 2 && mdcr_el2_tda && !arm_is_secure_below_el3(env)) {
+    if (el < 2 && mdcr_el2_tda) {
         return CP_ACCESS_TRAP_EL2;
     }
     if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tpm(CPUARMState *env, const ARMCPRegInfo *ri,
                                  bool isread)
 {
     int el = arm_current_el(env);
+    uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
 
-    if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TPM)
-        && !arm_is_secure_below_el3(env)) {
+    if (el < 2 && (mdcr_el2 & MDCR_TPM)) {
         return CP_ACCESS_TRAP_EL2;
     }
     if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TPM)) {
@@ -XXX,XX +XXX,XX @@ static CPAccessResult pmreg_access(CPUARMState *env, const ARMCPRegInfo *ri,
      * trapping to EL2 or EL3 for other accesses.
      */
     int el = arm_current_el(env);
+    uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
 
     if (el == 0 && !(env->cp15.c9_pmuserenr & 1)) {
         return CP_ACCESS_TRAP;
     }
-    if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TPM)
-        && !arm_is_secure_below_el3(env)) {
+    if (el < 2 && (mdcr_el2 & MDCR_TPM)) {
         return CP_ACCESS_TRAP_EL2;
     }
     if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TPM)) {
@@ -XXX,XX +XXX,XX @@ static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
     bool enabled, prohibited, filtered;
     bool secure = arm_is_secure(env);
     int el = arm_current_el(env);
-    uint8_t hpmn = env->cp15.mdcr_el2 & MDCR_HPMN;
+    uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
+    uint8_t hpmn = mdcr_el2 & MDCR_HPMN;
 
     if (!arm_feature(env, ARM_FEATURE_PMU)) {
         return false;
@@ -XXX,XX +XXX,XX @@ static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
         (counter < hpmn || counter == 31)) {
         e = env->cp15.c9_pmcr & PMCRE;
     } else {
-        e = env->cp15.mdcr_el2 & MDCR_HPME;
+        e = mdcr_el2 & MDCR_HPME;
     }
     enabled = e && (env->cp15.c9_pmcnten & (1 << counter));
 
     if (!secure) {
         if (el == 2 && (counter < hpmn || counter == 31)) {
-            prohibited = env->cp15.mdcr_el2 & MDCR_HPMD;
+            prohibited = mdcr_el2 & MDCR_HPMD;
         } else {
             prohibited = false;
         }
--
2.20.1
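The helper's contract, and the way it lets callers drop their explicit secure-state test, can be modelled in isolation. Bit positions and names below are illustrative stand-ins, not the architectural MDCR_EL2 layout or the QEMU functions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MDCR_TDE   (1u << 8)    /* illustrative positions */
#define MDCR_TDOSA (1u << 10)

/* Model of arm_mdcr_el2_eff(): MDCR_EL2 only has effect with EL2 enabled. */
static uint32_t mdcr_el2_eff(uint32_t mdcr_el2, bool el2_enabled)
{
    return el2_enabled ? mdcr_el2 : 0;
}

/*
 * With the effective value, a trap check like access_tdosa() no longer
 * needs its own "and we are not Secure-without-EL2" clause: when EL2 is
 * disabled the effective register is 0 and no trap bit can match.
 */
static bool traps_tdosa(uint32_t mdcr_el2, bool el2_enabled, int el)
{
    uint32_t eff = mdcr_el2_eff(mdcr_el2, el2_enabled);
    return el < 2 && (eff & (MDCR_TDOSA | MDCR_TDE)) != 0;
}
```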
Deleted patch
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-6-remi.denis.courmont@huawei.com
[PMM: tweaked commit message to match reduced scope of patch
 following rebase]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
 }
 
+static inline bool isar_feature_aa64_sel2(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SEL2) != 0;
+}
+
 static inline bool isar_feature_aa64_vh(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, VH) != 0;
--
2.20.1
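The feature-test idiom above reads a 4-bit ID-register field and checks it is nonzero. A standalone model of that FIELD_EX64-style extraction; the field position used in the test is illustrative, not the real ID_AA64PFR0 layout:

```c
#include <assert.h>
#include <stdint.h>

/* Model of a FIELD_EX64-style extract: take `length` bits at `shift`. */
static uint64_t field_ex64(uint64_t reg, unsigned shift, unsigned length)
{
    return (reg >> shift) & ((1ull << length) - 1);
}

/* An isar_feature_* style test: the 4-bit field is nonzero. */
static int feature_present(uint64_t id_reg, unsigned shift)
{
    return field_ex64(id_reg, shift, 4) != 0;
}
```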
Deleted patch
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
2
1
3
This adds the MMU indices for EL2 stage 1 in secure state.
4
5
To keep code contained, which is largelly identical between secure and
6
non-secure modes, the MMU indices are reassigned. The new assignments
7
provide a systematic pattern with a non-secure bit.
8
9
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-8-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-param.h | 2 +-
 target/arm/cpu.h | 35 ++++++----
 target/arm/internals.h | 12 ++++
 target/arm/helper.c | 127 ++++++++++++++++++++++++-------------
 target/arm/translate-a64.c | 4 ++
 5 files changed, 123 insertions(+), 57 deletions(-)

diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -XXX,XX +XXX,XX @@
 # define TARGET_PAGE_BITS_MIN 10
 #endif
 
-#define NB_MMU_MODES 11
+#define NB_MMU_MODES 15
 
 #endif
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
 #define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */
 #define ARM_MMU_IDX_M     0x40 /* M profile */
 
+/* Meanings of the bits for A profile mmu idx values */
+#define ARM_MMU_IDX_A_NS 0x8
+
 /* Meanings of the bits for M profile mmu idx values */
 #define ARM_MMU_IDX_M_PRIV   0x1
 #define ARM_MMU_IDX_M_NEGPRI 0x2
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
     /*
      * A-profile.
      */
-    ARMMMUIdx_E10_0      = 0 | ARM_MMU_IDX_A,
-    ARMMMUIdx_E20_0      = 1 | ARM_MMU_IDX_A,
+    ARMMMUIdx_SE10_0     = 0 | ARM_MMU_IDX_A,
+    ARMMMUIdx_SE20_0     = 1 | ARM_MMU_IDX_A,
+    ARMMMUIdx_SE10_1     = 2 | ARM_MMU_IDX_A,
+    ARMMMUIdx_SE20_2     = 3 | ARM_MMU_IDX_A,
+    ARMMMUIdx_SE10_1_PAN = 4 | ARM_MMU_IDX_A,
+    ARMMMUIdx_SE20_2_PAN = 5 | ARM_MMU_IDX_A,
+    ARMMMUIdx_SE2        = 6 | ARM_MMU_IDX_A,
+    ARMMMUIdx_SE3        = 7 | ARM_MMU_IDX_A,
 
-    ARMMMUIdx_E10_1      = 2 | ARM_MMU_IDX_A,
-    ARMMMUIdx_E10_1_PAN  = 3 | ARM_MMU_IDX_A,
-
-    ARMMMUIdx_E2         = 4 | ARM_MMU_IDX_A,
-    ARMMMUIdx_E20_2      = 5 | ARM_MMU_IDX_A,
-    ARMMMUIdx_E20_2_PAN  = 6 | ARM_MMU_IDX_A,
-
-    ARMMMUIdx_SE10_0     = 7 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE10_1     = 8 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE10_1_PAN = 9 | ARM_MMU_IDX_A,
-    ARMMMUIdx_SE3        = 10 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E10_0     = ARMMMUIdx_SE10_0 | ARM_MMU_IDX_A_NS,
+    ARMMMUIdx_E20_0     = ARMMMUIdx_SE20_0 | ARM_MMU_IDX_A_NS,
+    ARMMMUIdx_E10_1     = ARMMMUIdx_SE10_1 | ARM_MMU_IDX_A_NS,
+    ARMMMUIdx_E20_2     = ARMMMUIdx_SE20_2 | ARM_MMU_IDX_A_NS,
+    ARMMMUIdx_E10_1_PAN = ARMMMUIdx_SE10_1_PAN | ARM_MMU_IDX_A_NS,
+    ARMMMUIdx_E20_2_PAN = ARMMMUIdx_SE20_2_PAN | ARM_MMU_IDX_A_NS,
+    ARMMMUIdx_E2        = ARMMMUIdx_SE2 | ARM_MMU_IDX_A_NS,
 
     /*
      * These are not allocated TLBs and are used only for AT system
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
     TO_CORE_BIT(E20_2),
     TO_CORE_BIT(E20_2_PAN),
     TO_CORE_BIT(SE10_0),
+    TO_CORE_BIT(SE20_0),
     TO_CORE_BIT(SE10_1),
+    TO_CORE_BIT(SE20_2),
     TO_CORE_BIT(SE10_1_PAN),
+    TO_CORE_BIT(SE20_2_PAN),
+    TO_CORE_BIT(SE2),
     TO_CORE_BIT(SE3),
 
     TO_CORE_BIT(MUser),
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
     case ARMMMUIdx_SE10_0:
     case ARMMMUIdx_SE10_1:
     case ARMMMUIdx_SE10_1_PAN:
+    case ARMMMUIdx_SE20_0:
+    case ARMMMUIdx_SE20_2:
+    case ARMMMUIdx_SE20_2_PAN:
         return true;
     default:
         return false;
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
     case ARMMMUIdx_SE10_0:
     case ARMMMUIdx_SE10_1:
     case ARMMMUIdx_SE10_1_PAN:
+    case ARMMMUIdx_SE20_0:
+    case ARMMMUIdx_SE20_2:
+    case ARMMMUIdx_SE20_2_PAN:
+    case ARMMMUIdx_SE2:
     case ARMMMUIdx_MSPrivNegPri:
     case ARMMMUIdx_MSUserNegPri:
     case ARMMMUIdx_MSPriv:
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
     case ARMMMUIdx_E10_1_PAN:
    case ARMMMUIdx_E20_2_PAN:
     case ARMMMUIdx_SE10_1_PAN:
+    case ARMMMUIdx_SE20_2_PAN:
         return true;
     default:
         return false;
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
 static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
     switch (mmu_idx) {
+    case ARMMMUIdx_SE20_0:
+    case ARMMMUIdx_SE20_2:
+    case ARMMMUIdx_SE20_2_PAN:
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
     case ARMMMUIdx_Stage2:
+    case ARMMMUIdx_SE2:
     case ARMMMUIdx_E2:
         return 2;
     case ARMMMUIdx_SE3:
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static int gt_phys_redir_timeridx(CPUARMState *env)
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
+    case ARMMMUIdx_SE20_0:
+    case ARMMMUIdx_SE20_2:
+    case ARMMMUIdx_SE20_2_PAN:
         return GTIMER_HYP;
     default:
         return GTIMER_PHYS;
@@ -XXX,XX +XXX,XX @@ static int gt_virt_redir_timeridx(CPUARMState *env)
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
+    case ARMMMUIdx_SE20_0:
+    case ARMMMUIdx_SE20_2:
+    case ARMMMUIdx_SE20_2_PAN:
         return GTIMER_HYPVIRT;
     default:
         return GTIMER_VIRT;
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         mmu_idx = ARMMMUIdx_SE3;
         break;
     case 2:
-        g_assert(!secure);  /* TODO: ARMv8.4-SecEL2 */
+        g_assert(!secure);  /* ARMv8.4-SecEL2 is 64-bit only */
         /* fall through */
     case 1:
         if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) {
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
         }
         break;
     case 4: /* AT S1E2R, AT S1E2W */
-        mmu_idx = ARMMMUIdx_E2;
+        mmu_idx = secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2;
         break;
     case 6: /* AT S1E3R, AT S1E3W */
         mmu_idx = ARMMMUIdx_SE3;
@@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
      */
     if (extract64(raw_read(env, ri) ^ value, 48, 16) &&
         (arm_hcr_el2_eff(env) & HCR_E2H)) {
-        tlb_flush_by_mmuidx(env_cpu(env),
-                            ARMMMUIdxBit_E20_2 |
-                            ARMMMUIdxBit_E20_2_PAN |
-                            ARMMMUIdxBit_E20_0);
+        uint16_t mask = ARMMMUIdxBit_E20_2 |
+                        ARMMMUIdxBit_E20_2_PAN |
+                        ARMMMUIdxBit_E20_0;
+
+        if (arm_is_secure_below_el3(env)) {
+            mask >>= ARM_MMU_IDX_A_NS;
+        }
+
+        tlb_flush_by_mmuidx(env_cpu(env), mask);
     }
     raw_write(env, ri, value);
 }
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbmask(CPUARMState *env)
     uint64_t hcr = arm_hcr_el2_eff(env);
 
     if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
-        return ARMMMUIdxBit_E20_2 |
-               ARMMMUIdxBit_E20_2_PAN |
-               ARMMMUIdxBit_E20_0;
+        uint16_t mask = ARMMMUIdxBit_E20_2 |
+                        ARMMMUIdxBit_E20_2_PAN |
+                        ARMMMUIdxBit_E20_0;
+
+        if (arm_is_secure_below_el3(env)) {
+            mask >>= ARM_MMU_IDX_A_NS;
+        }
+
+        return mask;
     } else if (arm_is_secure_below_el3(env)) {
         return ARMMMUIdxBit_SE10_1 |
                ARMMMUIdxBit_SE10_1_PAN |
@@ -XXX,XX +XXX,XX @@ static int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
 
 static int vae1_tlbbits(CPUARMState *env, uint64_t addr)
 {
+    uint64_t hcr = arm_hcr_el2_eff(env);
     ARMMMUIdx mmu_idx;
 
     /* Only the regime of the mmu_idx below is significant. */
-    if (arm_is_secure_below_el3(env)) {
-        mmu_idx = ARMMMUIdx_SE10_0;
-    } else if ((env->cp15.hcr_el2 & (HCR_E2H | HCR_TGE))
-               == (HCR_E2H | HCR_TGE)) {
+    if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
         mmu_idx = ARMMMUIdx_E20_0;
     } else {
         mmu_idx = ARMMMUIdx_E10_0;
     }
+
+    if (arm_is_secure_below_el3(env)) {
+        mmu_idx &= ~ARM_MMU_IDX_A_NS;
+    }
+
     return tlbbits_for_regime(env, mmu_idx, addr);
 }
 
@@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env)
 
 static int e2_tlbmask(CPUARMState *env)
 {
-    /* TODO: ARMv8.4-SecEL2 */
-    return ARMMMUIdxBit_E20_0 |
-           ARMMMUIdxBit_E20_2 |
-           ARMMMUIdxBit_E20_2_PAN |
-           ARMMMUIdxBit_E2;
+    if (arm_is_secure_below_el3(env)) {
+        return ARMMMUIdxBit_SE20_0 |
+               ARMMMUIdxBit_SE20_2 |
+               ARMMMUIdxBit_SE20_2_PAN |
+               ARMMMUIdxBit_SE2;
+    } else {
+        return ARMMMUIdxBit_E20_0 |
+               ARMMMUIdxBit_E20_2 |
+               ARMMMUIdxBit_E20_2_PAN |
+               ARMMMUIdxBit_E2;
+    }
 }
 
 static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
 {
     CPUState *cs = env_cpu(env);
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
-    int bits = tlbbits_for_regime(env, ARMMMUIdx_E2, pageaddr);
+    bool secure = arm_is_secure_below_el3(env);
+    int mask = secure ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2;
+    int bits = tlbbits_for_regime(env, secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2,
+                                  pageaddr);
 
-    tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
-                                                  ARMMMUIdxBit_E2, bits);
+    tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
 }
 
 static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ uint64_t arm_sctlr(CPUARMState *env, int el)
     /* Only EL0 needs to be adjusted for EL1&0 or EL2&0. */
     if (el == 0) {
         ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0);
-        el = (mmu_idx == ARMMMUIdx_E20_0 ? 2 : 1);
+        el = (mmu_idx == ARMMMUIdx_E20_0 || mmu_idx == ARMMMUIdx_SE20_0)
+             ? 2 : 1;
     }
     return env->cp15.sctlr_el[el];
 }
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
     switch (mmu_idx) {
     case ARMMMUIdx_SE10_0:
     case ARMMMUIdx_E20_0:
+    case ARMMMUIdx_SE20_0:
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_MUser:
     case ARMMMUIdx_MSUser:
@@ -XXX,XX +XXX,XX @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_SE10_0:
+    case ARMMMUIdx_SE20_0:
         return 0;
     case ARMMMUIdx_E10_1:
     case ARMMMUIdx_E10_1_PAN:
@@ -XXX,XX +XXX,XX @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
     case ARMMMUIdx_E2:
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
+    case ARMMMUIdx_SE2:
+    case ARMMMUIdx_SE20_2:
+    case ARMMMUIdx_SE20_2_PAN:
         return 2;
     case ARMMMUIdx_SE3:
         return 3;
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
 
 ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
 {
+    ARMMMUIdx idx;
+    uint64_t hcr;
+
     if (arm_feature(env, ARM_FEATURE_M)) {
         return arm_v7m_mmu_idx_for_secstate(env, env->v7m.secure);
     }
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
     /* See ARM pseudo-function ELIsInHost. */
     switch (el) {
     case 0:
-        if (arm_is_secure_below_el3(env)) {
-            return ARMMMUIdx_SE10_0;
+        hcr = arm_hcr_el2_eff(env);
+        if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
+            idx = ARMMMUIdx_E20_0;
+        } else {
+            idx = ARMMMUIdx_E10_0;
         }
-        if ((env->cp15.hcr_el2 & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)
-            && arm_el_is_aa64(env, 2)) {
-            return ARMMMUIdx_E20_0;
-        }
-        return ARMMMUIdx_E10_0;
+        break;
     case 1:
-        if (arm_is_secure_below_el3(env)) {
-            if (env->pstate & PSTATE_PAN) {
-                return ARMMMUIdx_SE10_1_PAN;
-            }
-            return ARMMMUIdx_SE10_1;
-        }
         if (env->pstate & PSTATE_PAN) {
-            return ARMMMUIdx_E10_1_PAN;
+            idx = ARMMMUIdx_E10_1_PAN;
+        } else {
+            idx = ARMMMUIdx_E10_1;
         }
-        return ARMMMUIdx_E10_1;
+        break;
     case 2:
-        /* TODO: ARMv8.4-SecEL2 */
         /* Note that TGE does not apply at EL2.  */
-        if ((env->cp15.hcr_el2 & HCR_E2H) && arm_el_is_aa64(env, 2)) {
+        if (arm_hcr_el2_eff(env) & HCR_E2H) {
             if (env->pstate & PSTATE_PAN) {
-                return ARMMMUIdx_E20_2_PAN;
+                idx = ARMMMUIdx_E20_2_PAN;
+            } else {
+                idx = ARMMMUIdx_E20_2;
             }
-            return ARMMMUIdx_E20_2;
+        } else {
+            idx = ARMMMUIdx_E2;
         }
-        return ARMMMUIdx_E2;
+        break;
     case 3:
         return ARMMMUIdx_SE3;
     default:
         g_assert_not_reached();
     }
+
+    if (arm_is_secure_below_el3(env)) {
+        idx &= ~ARM_MMU_IDX_A_NS;
+    }
+
+    return idx;
 }
 
 ARMMMUIdx arm_mmu_idx(CPUARMState *env)
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
         break;
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
-        /* TODO: ARMv8.4-SecEL2 */
+    case ARMMMUIdx_SE20_2:
+    case ARMMMUIdx_SE20_2_PAN:
         /*
          * Note that EL20_2 is gated by HCR_EL2.E2H == 1, but EL20_0 is
          * gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR.
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static int get_a64_user_mem_index(DisasContext *s)
     case ARMMMUIdx_SE10_1_PAN:
         useridx = ARMMMUIdx_SE10_0;
         break;
+    case ARMMMUIdx_SE20_2:
+    case ARMMMUIdx_SE20_2_PAN:
+        useridx = ARMMMUIdx_SE20_0;
+        break;
     default:
         g_assert_not_reached();
     }
-- 
2.20.1

From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-9-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 7 +++++++
 target/arm/helper.c | 24 ++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct {
     uint32_t base_mask;
 } TCR;
 
+#define VTCR_NSW (1u << 29)
+#define VTCR_NSA (1u << 30)
+#define VSTCR_SW VTCR_NSW
+#define VSTCR_SA VTCR_NSA
+
 /* Define a maximum sized vector register.
  * For 32-bit, this is a 128-bit NEON/AdvSIMD register.
  * For 64-bit, this is a 2048-bit SVE register.
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
         uint64_t ttbr1_el[4];
     };
     uint64_t vttbr_el2; /* Virtualization Translation Table Base.  */
+    uint64_t vsttbr_el2; /* Secure Virtualization Translation Table. */
     /* MMU translation table base control. */
     TCR tcr_el[4];
     TCR vtcr_el2; /* Virtualization Translation Control.  */
+    TCR vstcr_el2; /* Secure Virtualization Translation Control. */
     uint32_t c2_data; /* MPU data cacheable bits.  */
     uint32_t c2_insn; /* MPU instruction cacheable bits.  */
     union { /* MMU domain access control register
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
     REGINFO_SENTINEL
 };
 
+static CPAccessResult sel2_access(CPUARMState *env, const ARMCPRegInfo *ri,
+                                  bool isread)
+{
+    if (arm_current_el(env) == 3 || arm_is_secure_below_el3(env)) {
+        return CP_ACCESS_OK;
+    }
+    return CP_ACCESS_TRAP_UNCATEGORIZED;
+}
+
+static const ARMCPRegInfo el2_sec_cp_reginfo[] = {
+    { .name = "VSTTBR_EL2", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 6, .opc2 = 0,
+      .access = PL2_RW, .accessfn = sel2_access,
+      .fieldoffset = offsetof(CPUARMState, cp15.vsttbr_el2) },
+    { .name = "VSTCR_EL2", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 6, .opc2 = 2,
+      .access = PL2_RW, .accessfn = sel2_access,
+      .fieldoffset = offsetof(CPUARMState, cp15.vstcr_el2) },
+    REGINFO_SENTINEL
+};
+
 static CPAccessResult nsacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
                                   bool isread)
 {
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (arm_feature(env, ARM_FEATURE_V8)) {
         define_arm_cp_regs(cpu, el2_v8_cp_reginfo);
     }
+    if (cpu_isar_feature(aa64_sel2, cpu)) {
+        define_arm_cp_regs(cpu, el2_sec_cp_reginfo);
+    }
     /* RVBAR_EL2 is only implemented if EL2 is the highest EL */
     if (!arm_feature(env, ARM_FEATURE_EL3)) {
         ARMCPRegInfo rvbar = {
-- 
2.20.1

From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

The VTTBR write callback has so far assumed that the underlying VM
lies in non-secure state. This patch handles the secure state
scenario as well.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-10-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
      * the combined stage 1&2 tlbs (EL10_1 and EL10_0).
      */
     if (raw_read(env, ri) != value) {
-        tlb_flush_by_mmuidx(cs,
-                            ARMMMUIdxBit_E10_1 |
-                            ARMMMUIdxBit_E10_1_PAN |
-                            ARMMMUIdxBit_E10_0);
+        uint16_t mask = ARMMMUIdxBit_E10_1 |
+                        ARMMMUIdxBit_E10_1_PAN |
+                        ARMMMUIdxBit_E10_0;
+
+        if (arm_is_secure_below_el3(env)) {
+            mask >>= ARM_MMU_IDX_A_NS;
+        }
+
+        tlb_flush_by_mmuidx(cs, mask);
         raw_write(env, ri, value);
     }
 }
-- 
2.20.1

From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

In the secure stage 2 translation regime, the VSTCR.SW and VTCR.NSW
bits can invert the secure flag for pagetable walks. This patch
allows S1_ptw_translate() to change the non-secure bit accordingly.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-11-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
 
 /* Translate a S1 pagetable walk through S2 if needed.  */
 static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
-                               hwaddr addr, MemTxAttrs txattrs,
+                               hwaddr addr, bool *is_secure,
                                ARMMMUFaultInfo *fi)
 {
     if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
         int s2prot;
         int ret;
         ARMCacheAttrs cacheattrs = {};
+        MemTxAttrs txattrs = {};
+
+        assert(!*is_secure); /* TODO: S-EL2 */
 
         ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, ARMMMUIdx_Stage2,
                                  false,
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
     AddressSpace *as;
     uint32_t data;
 
+    addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
     attrs.secure = is_secure;
     as = arm_addressspace(cs, attrs);
-    addr = S1_ptw_translate(env, mmu_idx, addr, attrs, fi);
     if (fi->s1ptw) {
         return 0;
     }
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
     AddressSpace *as;
     uint64_t data;
 
+    addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
     attrs.secure = is_secure;
     as = arm_addressspace(cs, attrs);
-    addr = S1_ptw_translate(env, mmu_idx, addr, attrs, fi);
     if (fi->s1ptw) {
         return 0;
     }
-- 
2.20.1

From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

stage_1_mmu_idx() already effectively keeps track of which translation
regimes have two stages, so don't hard-code another test for them.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-13-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                    target_ulong *page_size,
                    ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
 {
-    if (mmu_idx == ARMMMUIdx_E10_0 ||
-        mmu_idx == ARMMMUIdx_E10_1 ||
-        mmu_idx == ARMMMUIdx_E10_1_PAN) {
+    ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
+
+    if (mmu_idx != s1_mmu_idx) {
         /* Call ourselves recursively to do the stage 1 and then stage 2
-         * translations.
+         * translations if mmu_idx is a two-stage regime.
          */
         if (arm_feature(env, ARM_FEATURE_EL2)) {
             hwaddr ipa;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
             int ret;
             ARMCacheAttrs cacheattrs2 = {};
 
-            ret = get_phys_addr(env, address, access_type,
-                                stage_1_mmu_idx(mmu_idx), &ipa, attrs,
-                                prot, page_size, fi, cacheattrs);
+            ret = get_phys_addr(env, address, access_type, s1_mmu_idx, &ipa,
+                                attrs, prot, page_size, fi, cacheattrs);
 
             /* If S1 fails or S2 is disabled, return early.  */
             if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
-- 
2.20.1

From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-14-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 6 +++-
 target/arm/internals.h | 22 ++++++++++++
 target/arm/helper.c | 78 +++++++++++++++++++++++++++++-------------
 3 files changed, 81 insertions(+), 25 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
     ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
     ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
     ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
+    ARMMMUIdx_Stage1_SE0 = 3 | ARM_MMU_IDX_NOTLB,
+    ARMMMUIdx_Stage1_SE1 = 4 | ARM_MMU_IDX_NOTLB,
+    ARMMMUIdx_Stage1_SE1_PAN = 5 | ARM_MMU_IDX_NOTLB,
     /*
      * Not allocated a TLB: used only for second stage of an S12 page
      * table walk, or for descriptor loads during first stage of an S1
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
      * then various TLB flush insns which currently are no-ops or flush
      * only stage 1 MMU indexes will need to change to flush stage 2.
      */
-    ARMMMUIdx_Stage2 = 3 | ARM_MMU_IDX_NOTLB,
+    ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_NOTLB,
+    ARMMMUIdx_Stage2_S = 7 | ARM_MMU_IDX_NOTLB,
 
     /*
      * M-profile.
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_Stage1_E1:
     case ARMMMUIdx_Stage1_E1_PAN:
+    case ARMMMUIdx_Stage1_SE0:
+    case ARMMMUIdx_Stage1_SE1:
+    case ARMMMUIdx_Stage1_SE1_PAN:
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
     case ARMMMUIdx_E10_1_PAN:
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
     case ARMMMUIdx_SE20_0:
     case ARMMMUIdx_SE20_2:
     case ARMMMUIdx_SE20_2_PAN:
+    case ARMMMUIdx_Stage1_SE0:
+    case ARMMMUIdx_Stage1_SE1:
+    case ARMMMUIdx_Stage1_SE1_PAN:
     case ARMMMUIdx_SE2:
+    case ARMMMUIdx_Stage2_S:
     case ARMMMUIdx_MSPrivNegPri:
     case ARMMMUIdx_MSUserNegPri:
     case ARMMMUIdx_MSPriv:
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
     switch (mmu_idx) {
     case ARMMMUIdx_Stage1_E1_PAN:
+    case ARMMMUIdx_Stage1_SE1_PAN:
     case ARMMMUIdx_E10_1_PAN:
     case ARMMMUIdx_E20_2_PAN:
     case ARMMMUIdx_SE10_1_PAN:
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
     case ARMMMUIdx_Stage2:
+    case ARMMMUIdx_Stage2_S:
     case ARMMMUIdx_SE2:
     case ARMMMUIdx_E2:
         return 2;
     case ARMMMUIdx_SE3:
         return 3;
     case ARMMMUIdx_SE10_0:
+    case ARMMMUIdx_Stage1_SE0:
         return arm_el_is_aa64(env, 3) ? 1 : 3;
     case ARMMMUIdx_SE10_1:
     case ARMMMUIdx_SE10_1_PAN:
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_Stage1_E1:
     case ARMMMUIdx_Stage1_E1_PAN:
+    case ARMMMUIdx_Stage1_SE1:
+    case ARMMMUIdx_Stage1_SE1_PAN:
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
     case ARMMMUIdx_E10_1_PAN:
@@ -XXX,XX +XXX,XX @@ static inline TCR *regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
     if (mmu_idx == ARMMMUIdx_Stage2) {
         return &env->cp15.vtcr_el2;
     }
+    if (mmu_idx == ARMMMUIdx_Stage2_S) {
+        /*
+         * Note: Secure stage 2 nominally shares fields from VTCR_EL2, but
+         * those are not currently used by QEMU, so just return VSTCR_EL2.
+         */
+        return &env->cp15.vstcr_el2;
+    }
     return &env->cp15.tcr_el[regime_el(env, mmu_idx)];
 }
 
@@ -XXX,XX +XXX,XX @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
     case ARMMMUIdx_Stage1_E0:
     case ARMMMUIdx_Stage1_E1:
     case ARMMMUIdx_Stage1_E1_PAN:
+    case ARMMMUIdx_Stage1_SE0:
+    case ARMMMUIdx_Stage1_SE1:
+    case ARMMMUIdx_Stage1_SE1_PAN:
         return true;
     default:
         return false;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
         uint32_t syn, fsr, fsc;
         bool take_exc = false;
 
-        if (fi.s1ptw && current_el == 1 && !arm_is_secure(env)
+        if (fi.s1ptw && current_el == 1
             && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
             /*
              * Synchronous stage 2 fault on an access made as part of the
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         /* fall through */
     case 1:
         if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) {
-            mmu_idx = (secure ? ARMMMUIdx_SE10_1_PAN
+            mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN
                        : ARMMMUIdx_Stage1_E1_PAN);
         } else {
-            mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
+            mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1;
         }
         break;
     default:
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         mmu_idx = ARMMMUIdx_SE10_0;
         break;
     case 2:
+        g_assert(!secure);  /* ARMv8.4-SecEL2 is 64-bit only */
         mmu_idx = ARMMMUIdx_Stage1_E0;
         break;
     case 1:
-        mmu_idx = secure ? ARMMMUIdx_SE10_0 : ARMMMUIdx_Stage1_E0;
+        mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0;
         break;
     default:
         g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
     switch (ri->opc1) {
     case 0: /* AT S1E1R, AT S1E1W, AT S1E1RP, AT S1E1WP */
         if (ri->crm == 9 && (env->pstate & PSTATE_PAN)) {
-            mmu_idx = (secure ? ARMMMUIdx_SE10_1_PAN
+            mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN
                        : ARMMMUIdx_Stage1_E1_PAN);
         } else {
-            mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
+            mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1;
         }
         break;
     case 4: /* AT S1E2R, AT S1E2W */
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
         }
         break;
     case 2: /* AT S1E0R, AT S1E0W */
-        mmu_idx = secure ? ARMMMUIdx_SE10_0 : ARMMMUIdx_Stage1_E0;
+        mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0;
         break;
     case 4: /* AT S12E1R, AT S12E1W */
         mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_E10_1;
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
 
     hcr_el2 = arm_hcr_el2_eff(env);
 
-    if (mmu_idx == ARMMMUIdx_Stage2) {
+    if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
         /* HCR.DC means HCR.VM behaves as 1 */
         return (hcr_el2 & (HCR_DC | HCR_VM)) == 0;
     }
@@ -XXX,XX +XXX,XX @@ static inline uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx,
     if (mmu_idx == ARMMMUIdx_Stage2) {
         return env->cp15.vttbr_el2;
     }
+    if (mmu_idx == ARMMMUIdx_Stage2_S) {
+        return env->cp15.vsttbr_el2;
+    }
     if (ttbrn == 0) {
         return env->cp15.ttbr0_el[regime_el(env, mmu_idx)];
     } else {
@@ -XXX,XX +XXX,XX @@ static inline uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx,
 
 static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
 {
     switch (mmu_idx) {
+    case ARMMMUIdx_SE10_0:
+        return ARMMMUIdx_Stage1_SE0;
+    case ARMMMUIdx_SE10_1:
+        return ARMMMUIdx_Stage1_SE1;
+    case ARMMMUIdx_SE10_1_PAN:
+        return ARMMMUIdx_Stage1_SE1_PAN;
     case ARMMMUIdx_E10_0:
         return ARMMMUIdx_Stage1_E0;
     case ARMMMUIdx_E10_1:
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
     case ARMMMUIdx_E20_0:
     case ARMMMUIdx_SE20_0:
     case ARMMMUIdx_Stage1_E0:
+    case ARMMMUIdx_Stage1_SE0:
     case ARMMMUIdx_MUser:
     case ARMMMUIdx_MSUser:
     case ARMMMUIdx_MUserNegPri:
@@ -XXX,XX +XXX,XX @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
     int wxn = 0;
 
     assert(mmu_idx != ARMMMUIdx_Stage2);
+    assert(mmu_idx != ARMMMUIdx_Stage2_S);
 
     user_rw = simple_ap_to_rw_prot_is_user(ap, true);
     if (is_user) {
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
         hwaddr s2pa;
         int s2prot;
         int ret;
+        ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S
+                                          : ARMMMUIdx_Stage2;
         ARMCacheAttrs cacheattrs = {};
         MemTxAttrs txattrs = {};
 
-        assert(!*is_secure); /* TODO: S-EL2 */
-
-        ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, ARMMMUIdx_Stage2,
-                                 false,
+        ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, false,
                                  &s2pa, &txattrs, &s2prot, &s2size, fi,
                                  &cacheattrs);
         if (ret) {
@@ -XXX,XX +XXX,XX @@ static int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
 {
     if (regime_has_2_ranges(mmu_idx)) {
         return extract64(tcr, 37, 2);
-    } else if (mmu_idx == ARMMMUIdx_Stage2) {
+    } else if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
         return 0; /* VTCR_EL2 */
     } else {
         /* Replicate the single TBI bit so we always have 2 bits.  */
@@ -XXX,XX +XXX,XX @@ static int aa64_va_parameter_tbid(uint64_t tcr, ARMMMUIdx mmu_idx)
 {
     if (regime_has_2_ranges(mmu_idx)) {
         return extract64(tcr, 51, 2);
-    } else if (mmu_idx == ARMMMUIdx_Stage2) {
+    } else if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
         return 0; /* VTCR_EL2 */
     } else {
         /* Replicate the single TBID bit so we always have 2 bits.  */
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
     tsz = extract32(tcr, 0, 6);
     using64k = extract32(tcr, 14, 1);
     using16k = extract32(tcr, 15, 1);
-    if (mmu_idx == ARMMMUIdx_Stage2) {
+    if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
         /* VTCR_EL2 */
         hpd = false;
     } else {
@@ -XXX,XX +XXX,XX @@ static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
     int select, tsz;
     bool epd, hpd;
 
+    assert(mmu_idx != ARMMMUIdx_Stage2_S);
+
     if (mmu_idx == ARMMMUIdx_Stage2) {
         /* VTCR */
         bool sext = extract32(tcr, 4, 1);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
         goto do_fault;
     }
 
-    if (mmu_idx != ARMMMUIdx_Stage2) {
+    if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
         /* The starting level depends on the virtual address size (which can
          * be up to 48 bits) and the translation granule size. It indicates
          * the number of strides (stride bits at a time) needed to
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
         attrs = extract64(descriptor, 2, 10)
             | (extract64(descriptor, 52, 12) << 10);
 
-        if (mmu_idx == ARMMMUIdx_Stage2) {
+        if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
             /* Stage 2 table descriptors do not include any attribute fields */
             break;
         }
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
 
     ap = extract32(attrs, 4, 2);
 
-    if (mmu_idx == ARMMMUIdx_Stage2) {
-        ns = true;
+    if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
+        ns = mmu_idx == ARMMMUIdx_Stage2;
         xn = extract32(attrs, 11, 2);
         *prot = get_S2prot(env, ap, xn, s1_is_el0);
     } else {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
         arm_tlb_bti_gp(txattrs) = true;
     }
 
-    if (mmu_idx == ARMMMUIdx_Stage2) {
+    if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
         cacheattrs->attrs = convert_stage2_attrs(env, extract32(attrs, 0, 4));
     } else {
         /* Index into MAIR registers for cache attributes */
@@ -XXX,XX +XXX,XX @@ do_fault:
     fi->type = fault_type;
     fi->level = level;
     /* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2.  */
-    fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_Stage2);
+    fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_Stage2 ||
+                               mmu_idx == ARMMMUIdx_Stage2_S);
     return true;
 }
 
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
             int s2_prot;
             int ret;
331
ARMCacheAttrs cacheattrs2 = {};
332
+ ARMMMUIdx s2_mmu_idx;
333
+ bool is_el0;
334
335
ret = get_phys_addr(env, address, access_type, s1_mmu_idx, &ipa,
336
attrs, prot, page_size, fi, cacheattrs);
337
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
338
return ret;
339
}
340
341
+ s2_mmu_idx = attrs->secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
342
+ is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
343
+
344
/* S1 is done. Now do S2 translation. */
345
- ret = get_phys_addr_lpae(env, ipa, access_type, ARMMMUIdx_Stage2,
346
- mmu_idx == ARMMMUIdx_E10_0,
347
+ ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx, is_el0,
348
phys_ptr, attrs, &s2_prot,
349
page_size, fi, &cacheattrs2);
350
fi->s2addr = ipa;
351
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
352
cacheattrs->shareability = 0;
353
}
354
*cacheattrs = combine_cacheattrs(*cacheattrs, cacheattrs2);
355
+
356
+ /* Check if IPA translates to secure or non-secure PA space. */
357
+ if (arm_is_secure_below_el3(env)) {
358
+ if (attrs->secure) {
359
+ attrs->secure =
360
+ !(env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW));
361
+ } else {
362
+ attrs->secure =
363
+ !((env->cp15.vtcr_el2.raw_tcr & (VTCR_NSA | VTCR_NSW))
364
+ || (env->cp15.vstcr_el2.raw_tcr & VSTCR_SA));
365
+ }
366
+ }
367
return 0;
368
} else {
369
/*
370
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
371
* MMU disabled. S1 addresses within aa64 translation regimes are
372
* still checked for bounds -- see AArch64.TranslateAddressS1Off.
373
*/
374
- if (mmu_idx != ARMMMUIdx_Stage2) {
375
+ if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
376
int r_el = regime_el(env, mmu_idx);
377
if (arm_el_is_aa64(env, r_el)) {
378
int pamax = arm_pamax(env_archcpu(env));
379
--
380
2.20.1
381
382
diff view generated by jsdifflib
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

On ARMv8-A, accesses by 32-bit secure EL1 to monitor registers trap to
the upper (64-bit) EL. With Secure EL2 support, we can no longer assume
that that is always EL3, so make room for the value to be computed at
run-time.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-16-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 23 +++++++++++++++++++++--
1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void unallocated_encoding(DisasContext *s)
default_exception_el(s));
}

+static void gen_exception_el(DisasContext *s, int excp, uint32_t syn,
+ TCGv_i32 tcg_el)
+{
+ TCGv_i32 tcg_excp;
+ TCGv_i32 tcg_syn;
+
+ gen_set_condexec(s);
+ gen_set_pc_im(s, s->pc_curr);
+ tcg_excp = tcg_const_i32(excp);
+ tcg_syn = tcg_const_i32(syn);
+ gen_helper_exception_with_syndrome(cpu_env, tcg_excp, tcg_syn, tcg_el);
+ tcg_temp_free_i32(tcg_syn);
+ tcg_temp_free_i32(tcg_excp);
+ s->base.is_jmp = DISAS_NORETURN;
+}
+
/* Force a TB lookup after an instruction that changes the CPU state. */
static inline void gen_lookup_tb(DisasContext *s)
{
@@ -XXX,XX +XXX,XX @@ static bool msr_banked_access_decode(DisasContext *s, int r, int sysm, int rn,
/* If we're in Secure EL1 (which implies that EL3 is AArch64)
* then accesses to Mon registers trap to EL3
*/
- exc_target = 3;
- goto undef;
+ TCGv_i32 tcg_el = tcg_const_i32(3);
+
+ gen_exception_el(s, EXCP_UDEF, syn_uncategorized(), tcg_el);
+ tcg_temp_free_i32(tcg_el);
+ return false;
}
break;
case ARM_CPU_MODE_HYP:
--
2.20.1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-18-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu64.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
+ t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);
cpu->isar.id_aa64pfr0 = t;

t = cpu->isar.id_aa64pfr1;
--
2.20.1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-19-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 25 +++++++++++--------------
1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_pou_access(CPUARMState *env,
static int vae1_tlbmask(CPUARMState *env)
{
uint64_t hcr = arm_hcr_el2_eff(env);
+ uint16_t mask;

if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
- uint16_t mask = ARMMMUIdxBit_E20_2 |
- ARMMMUIdxBit_E20_2_PAN |
- ARMMMUIdxBit_E20_0;
-
- if (arm_is_secure_below_el3(env)) {
- mask >>= ARM_MMU_IDX_A_NS;
- }
-
- return mask;
- } else if (arm_is_secure_below_el3(env)) {
- return ARMMMUIdxBit_SE10_1 |
- ARMMMUIdxBit_SE10_1_PAN |
- ARMMMUIdxBit_SE10_0;
+ mask = ARMMMUIdxBit_E20_2 |
+ ARMMMUIdxBit_E20_2_PAN |
+ ARMMMUIdxBit_E20_0;
} else {
- return ARMMMUIdxBit_E10_1 |
+ mask = ARMMMUIdxBit_E10_1 |
ARMMMUIdxBit_E10_1_PAN |
ARMMMUIdxBit_E10_0;
}
+
+ if (arm_is_secure_below_el3(env)) {
+ mask >>= ARM_MMU_IDX_A_NS;
+ }
+
+ return mask;
}

/* Return 56 if TBI is enabled, 64 otherwise. */
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

SVE predicate operations cannot use the "usual" simd_desc
encoding, because the lengths are not a multiple of 8.
But we were abusing the SIMD_* fields to store values anyway.
This abuse broke when SIMD_OPRSZ_BITS was modified in e2e7168a214.

Introduce a new set of field definitions for exclusive use
of predicates, so that it is obvious what kind of predicate
we are manipulating. To be used in future patches.

Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210113062650.593824-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/internals.h | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ void arm_log_exception(int idx);
#define LOG2_TAG_GRANULE 4
#define TAG_GRANULE (1 << LOG2_TAG_GRANULE)

+/*
+ * SVE predicates are 1/8 the size of SVE vectors, and cannot use
+ * the same simd_desc() encoding due to restrictions on size.
+ * Use these instead.
+ */
+FIELD(PREDDESC, OPRSZ, 0, 6)
+FIELD(PREDDESC, ESZ, 6, 2)
+FIELD(PREDDESC, DATA, 8, 24)
+
/*
* The SVE simd_data field, for memory ops, contains either
* rd (5 bits) or a shift count (2 bits).
--
2.20.1
From: Gan Qixin <ganqixin@huawei.com>

The adc_qom_set function didn't free "response", which caused an indirect
memory leak. So use qobject_unref() to fix it.

ASAN shows memory leak stack:

Indirect leak of 593280 byte(s) in 144 object(s) allocated from:
#0 0x7f9a5e7e8d4e in __interceptor_calloc (/lib64/libasan.so.5+0x112d4e)
#1 0x7f9a5e607a50 in g_malloc0 (/lib64/libglib-2.0.so.0+0x55a50)
#2 0x55b1bebf636b in qdict_new ../qobject/qdict.c:30
#3 0x55b1bec09699 in parse_object ../qobject/json-parser.c:318
#4 0x55b1bec0b2df in parse_value ../qobject/json-parser.c:546
#5 0x55b1bec0b6a9 in json_parser_parse ../qobject/json-parser.c:580
#6 0x55b1bec060d1 in json_message_process_token ../qobject/json-streamer.c:92
#7 0x55b1bec16a12 in json_lexer_feed_char ../qobject/json-lexer.c:313
#8 0x55b1bec16fbd in json_lexer_feed ../qobject/json-lexer.c:350
#9 0x55b1bec06453 in json_message_parser_feed ../qobject/json-streamer.c:121
#10 0x55b1bebc2d51 in qmp_fd_receive ../tests/qtest/libqtest.c:614
#11 0x55b1bebc2f5e in qtest_qmp_receive_dict ../tests/qtest/libqtest.c:636
#12 0x55b1bebc2e6c in qtest_qmp_receive ../tests/qtest/libqtest.c:624
#13 0x55b1bebc3340 in qtest_vqmp ../tests/qtest/libqtest.c:715
#14 0x55b1bebc3942 in qtest_qmp ../tests/qtest/libqtest.c:756
#15 0x55b1bebbd64a in adc_qom_set ../tests/qtest/npcm7xx_adc-test.c:127
#16 0x55b1bebbd793 in adc_write_input ../tests/qtest/npcm7xx_adc-test.c:140
#17 0x55b1bebbdf92 in test_convert_external ../tests/qtest/npcm7xx_adc-test.c:246

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Gan Qixin <ganqixin@huawei.com>
Reviewed-by: Hao Wu <wuhaotsh@google.com>
Message-id: 20210118065627.79903-1-ganqixin@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
tests/qtest/npcm7xx_adc-test.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/tests/qtest/npcm7xx_adc-test.c b/tests/qtest/npcm7xx_adc-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/npcm7xx_adc-test.c
+++ b/tests/qtest/npcm7xx_adc-test.c
@@ -XXX,XX +XXX,XX @@ static void adc_qom_set(QTestState *qts, const ADC *adc,
path, name, value);
/* The qom set message returns successfully. */
g_assert_true(qdict_haskey(response, "return"));
+ qobject_unref(response);
}

static void adc_write_input(QTestState *qts, const ADC *adc,
--
2.20.1

From: Peter Maydell <peter.maydell@linaro.org>

I'm down as the only listed maintainer for quite a lot of Arm SoC and
board types. In some cases this is only as the "maintainer of last
resort" and I'm not in practice doing anything beyond patch review
and the odd bit of tidyup.

Move these entries in MAINTAINERS from "Maintained" to "Odd Fixes",
to better represent reality. Entries for other boards and SoCs where
I do more actively care (or where there is a listed co-maintainer)
remain as they are.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20250307152838.3226398-1-peter.maydell@linaro.org
---
MAINTAINERS | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: docs/system/arm/kzm.rst
Integrator CP
M: Peter Maydell <peter.maydell@linaro.org>
L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
F: hw/arm/integratorcp.c
F: hw/misc/arm_integrator_debug.c
F: include/hw/misc/arm_integrator_debug.h
@@ -XXX,XX +XXX,XX @@ F: docs/system/arm/mps2.rst
Musca
M: Peter Maydell <peter.maydell@linaro.org>
L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
F: hw/arm/musca.c
F: docs/system/arm/musca.rst

@@ -XXX,XX +XXX,XX @@ F: tests/functional/test_aarch64_raspi4.py
Real View
M: Peter Maydell <peter.maydell@linaro.org>
L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
F: hw/arm/realview*
F: hw/cpu/realview_mpcore.c
F: hw/intc/realview_gic.c
@@ -XXX,XX +XXX,XX @@ F: tests/functional/test_arm_collie.py
Stellaris
M: Peter Maydell <peter.maydell@linaro.org>
L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
F: hw/*/stellaris*
F: hw/display/ssd03*
F: include/hw/input/gamepad.h
@@ -XXX,XX +XXX,XX @@ F: docs/system/arm/stm32.rst
Versatile Express
M: Peter Maydell <peter.maydell@linaro.org>
L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
F: hw/arm/vexpress.c
F: hw/display/sii9022.c
F: docs/system/arm/vexpress.rst
@@ -XXX,XX +XXX,XX @@ F: tests/functional/test_arm_vexpress.py
Versatile PB
M: Peter Maydell <peter.maydell@linaro.org>
L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
F: hw/*/versatile*
F: hw/i2c/arm_sbcon_i2c.c
F: include/hw/i2c/arm_sbcon_i2c.h
@@ -XXX,XX +XXX,XX @@ F: include/hw/hyperv/vmbus*.h
OMAP
M: Peter Maydell <peter.maydell@linaro.org>
L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
F: hw/*/omap*
F: include/hw/arm/omap.h
F: docs/system/arm/sx1.rst
--
2.43.0
From: Mihai Carabas <mihai.carabas@oracle.com>

Add pvpanic PCI device support details in docs/specs/pvpanic.txt.

Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
[fixed s/device/bus/ error]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/specs/pvpanic.txt | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/docs/specs/pvpanic.txt b/docs/specs/pvpanic.txt
index XXXXXXX..XXXXXXX 100644
--- a/docs/specs/pvpanic.txt
+++ b/docs/specs/pvpanic.txt
@@ -XXX,XX +XXX,XX @@
PVPANIC DEVICE
==============

-pvpanic device is a simulated ISA device, through which a guest panic
+pvpanic device is a simulated device, through which a guest panic
event is sent to qemu, and a QMP event is generated. This allows
management apps (e.g. libvirt) to be notified and respond to the event.

@@ -XXX,XX +XXX,XX @@ The management app has the option of waiting for GUEST_PANICKED events,
and/or polling for guest-panicked RunState, to learn when the pvpanic
device has fired a panic event.

+The pvpanic device can be implemented as an ISA device (using IOPORT) or as a
+PCI device.
+
ISA Interface
-------------

@@ -XXX,XX +XXX,XX @@ bit 1: a guest panic has happened and will be handled by the guest;
the host should record it or report it, but should not affect
the execution of the guest.

+PCI Interface
+-------------
+
+The PCI interface is similar to the ISA interface except that it uses an MMIO
+address space provided by its BAR0, 1 byte long. Any machine with a PCI bus
+can enable a pvpanic device by adding '-device pvpanic-pci' to the command
+line.
+
ACPI Interface
--------------
--
2.20.1

From: Paolo Bonzini <pbonzini@redhat.com>

The guest does not control whether characters are sent on the UART.
Sending them before the guest happens to boot will now result in a
"guest error" log entry that is only because of timing, even if the
guest _would_ later setup the receiver correctly.

This reverts the bulk of commit abf2b6a028670bd2890bb3aee7e103fe53e4b0df,
and instead adds a comment about why we don't check the enable bits.

Cc: Philippe Mathieu-Daudé <philmd@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20250311153717.206129-1-pbonzini@redhat.com
[PMM: expanded comment]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/char/pl011.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/hw/char/pl011.c b/hw/char/pl011.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/pl011.c
+++ b/hw/char/pl011.c
@@ -XXX,XX +XXX,XX @@ static int pl011_can_receive(void *opaque)
unsigned fifo_depth = pl011_get_fifo_depth(s);
unsigned fifo_available = fifo_depth - s->read_count;

- if (!(s->cr & CR_UARTEN)) {
- qemu_log_mask(LOG_GUEST_ERROR,
- "PL011 receiving data on disabled UART\n");
- }
- if (!(s->cr & CR_RXE)) {
- qemu_log_mask(LOG_GUEST_ERROR,
- "PL011 receiving data on disabled RX UART\n");
- }
- trace_pl011_can_receive(s->lcr, s->read_count, fifo_depth, fifo_available);
+ /*
+ * In theory we should check the UART and RX enable bits here and
+ * return 0 if they are not set (so the guest can't receive data
+ * until you have enabled the UART). In practice we suspect there
+ * is at least some guest code out there which has been tested only
+ * on QEMU and which never bothers to enable the UART because we
+ * historically never enforced that. So we effectively keep the
+ * UART continuously enabled regardless of the enable bits.
+ */

+ trace_pl011_can_receive(s->lcr, s->read_count, fifo_depth, fifo_available);
return fifo_available;
}
--
2.43.0
From: Richard Henderson <richard.henderson@linaro.org>

Update all users of do_perm_pred2 for the new
predicate descriptor field definitions.

Cc: qemu-stable@nongnu.org
Buglink: https://bugs.launchpad.net/bugs/1908551
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210113062650.593824-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/sve_helper.c | 8 ++++----
target/arm/translate-sve.c | 13 ++++---------
2 files changed, 8 insertions(+), 13 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static uint8_t reverse_bits_8(uint8_t x, int n)

void HELPER(sve_rev_p)(void *vd, void *vn, uint32_t pred_desc)
{
- intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
- int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+ intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
+ int esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);
intptr_t i, oprsz_2 = oprsz / 2;

if (oprsz <= 8) {
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_rev_p)(void *vd, void *vn, uint32_t pred_desc)

void HELPER(sve_punpk_p)(void *vd, void *vn, uint32_t pred_desc)
{
- intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
- intptr_t high = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
+ intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
+ intptr_t high = FIELD_EX32(pred_desc, PREDDESC, DATA);
uint64_t *d = vd;
intptr_t i;

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool do_perm_pred2(DisasContext *s, arg_rr_esz *a, bool high_odd,
TCGv_ptr t_d = tcg_temp_new_ptr();
TCGv_ptr t_n = tcg_temp_new_ptr();
TCGv_i32 t_desc;
- int desc;
+ uint32_t desc = 0;

tcg_gen_addi_ptr(t_d, cpu_env, pred_full_reg_offset(s, a->rd));
tcg_gen_addi_ptr(t_n, cpu_env, pred_full_reg_offset(s, a->rn));

- /* Predicate sizes may be smaller and cannot use simd_desc.
- We cannot round up, as we do elsewhere, because we need
- the exact size for ZIP2 and REV. We retain the style for
- the other helpers for consistency. */
-
- desc = vsz - 2;
- desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
- desc = deposit32(desc, SIMD_DATA_SHIFT + 2, 2, high_odd);
+ desc = FIELD_DP32(desc, PREDDESC, OPRSZ, vsz);
+ desc = FIELD_DP32(desc, PREDDESC, ESZ, a->esz);
+ desc = FIELD_DP32(desc, PREDDESC, DATA, high_odd);
t_desc = tcg_const_i32(desc);

fn(t_d, t_n, t_desc);
--
2.20.1

From: Joe Komlodi <komlodi@google.com>

On ARM hosts with CTR_EL0.DIC and CTR_EL0.IDC set, this would only cause
an ISB to be executed during cache maintenance, which could lead to QEMU
executing TBs containing garbage instructions.

This seems to be because the ISB finishes executing instructions and
flushes the pipeline, but the ISB doesn't guarantee that writes from the
executed instructions are committed. If a small enough TB is created, it's
possible that the writes setting up the TB aren't committed by the time the
TB is executed.

This function is intended to be a port of the gcc implementation
(https://github.com/gcc-mirror/gcc/blob/85b46d0795ac76bc192cb8f88b646a647acf98c1/libgcc/config/aarch64/sync-cache.c#L67)
which makes the first DSB unconditional, so we can fix the synchronization
issue by doing that as well.

Cc: qemu-stable@nongnu.org
Fixes: 664a79735e4deb1 ("util: Specialize flush_idcache_range for aarch64")
Signed-off-by: Joe Komlodi <komlodi@google.com>
Message-id: 20250310203622.1827940-2-komlodi@google.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
util/cacheflush.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/util/cacheflush.c b/util/cacheflush.c
index XXXXXXX..XXXXXXX 100644
--- a/util/cacheflush.c
+++ b/util/cacheflush.c
@@ -XXX,XX +XXX,XX @@ void flush_idcache_range(uintptr_t rx, uintptr_t rw, size_t len)
for (p = rw & -dcache_lsize; p < rw + len; p += dcache_lsize) {
asm volatile("dc\tcvau, %0" : : "r" (p) : "memory");
}
- asm volatile("dsb\tish" : : : "memory");
}

+ /* DSB unconditionally to ensure any outstanding writes are committed. */
+ asm volatile("dsb\tish" : : : "memory");
+
/*
* If CTR_EL0.DIC is enabled, Instruction cache cleaning to the Point
* of Unification is not required for instruction to data coherence.
--
2.43.0
From: Richard Henderson <richard.henderson@linaro.org>

Update all users of do_perm_pred3 for the new
predicate descriptor field definitions.

Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210113062650.593824-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/sve_helper.c | 18 +++++++++---------
target/arm/translate-sve.c | 12 ++++--------
2 files changed, 13 insertions(+), 17 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t compress_bits(uint64_t x, int n)

void HELPER(sve_zip_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
{
- intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
- int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
- intptr_t high = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
+ intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
+ int esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);
+ intptr_t high = FIELD_EX32(pred_desc, PREDDESC, DATA);
uint64_t *d = vd;
intptr_t i;

@@ -XXX,XX +XXX,XX @@ void HELPER(sve_zip_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)

void HELPER(sve_uzp_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
{
- intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
- int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
- int odd = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1) << esz;
+ intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
+ int esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);
+ int odd = FIELD_EX32(pred_desc, PREDDESC, DATA) << esz;
uint64_t *d = vd, *n = vn, *m = vm;
uint64_t l, h;
intptr_t i;
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_uzp_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)

void HELPER(sve_trn_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
{
- intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
- uintptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
- bool odd = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
+ intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
+ int esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);
+ int odd = FIELD_EX32(pred_desc, PREDDESC, DATA);
uint64_t *d = vd, *n = vn, *m = vm;
uint64_t mask;
int shr, shl;
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool do_perm_pred3(DisasContext *s, arg_rrr_esz *a, bool high_odd,

unsigned vsz = pred_full_reg_size(s);

- /* Predicate sizes may be smaller and cannot use simd_desc.
- We cannot round up, as we do elsewhere, because we need
- the exact size for ZIP2 and REV. We retain the style for
- the other helpers for consistency. */
TCGv_ptr t_d = tcg_temp_new_ptr();
TCGv_ptr t_n = tcg_temp_new_ptr();
TCGv_ptr t_m = tcg_temp_new_ptr();
TCGv_i32 t_desc;
- int desc;
+ uint32_t desc = 0;

- desc = vsz - 2;
- desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
- desc = deposit32(desc, SIMD_DATA_SHIFT + 2, 2, high_odd);
+ desc = FIELD_DP32(desc, PREDDESC, OPRSZ, vsz);
+ desc = FIELD_DP32(desc, PREDDESC, ESZ, a->esz);
+ desc = FIELD_DP32(desc, PREDDESC, DATA, high_odd);

tcg_gen_addi_ptr(t_d, cpu_env, pred_full_reg_offset(s, a->rd));
tcg_gen_addi_ptr(t_n, cpu_env, pred_full_reg_offset(s, a->rn));
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

The check for fp_excp_el in assert_fp_access_checked is
incorrect. For SME, with StreamingMode enabled, the access
is really against the streaming mode vectors, and access
to the normal fp registers is allowed to be disabled.
C.f. sme_enabled_check.

Convert sve_access_checked to match, even though we don't
currently check the exception state.

Cc: qemu-stable@nongnu.org
Fixes: 3d74825f4d6 ("target/arm: Add SME enablement checks")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250307190415.982049-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-a64.h | 2 +-
target/arm/tcg/translate.h | 10 +++++++---
target/arm/tcg/translate-a64.c | 17 +++++++++--------
3 files changed, 17 insertions(+), 12 deletions(-)

diff --git a/target/arm/tcg/translate-a64.h b/target/arm/tcg/translate-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.h
+++ b/target/arm/tcg/translate-a64.h
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
static inline void assert_fp_access_checked(DisasContext *s)
{
#ifdef CONFIG_DEBUG_TCG
- if (unlikely(!s->fp_access_checked || s->fp_excp_el)) {
+ if (unlikely(s->fp_access_checked <= 0)) {
fprintf(stderr, "target-arm: FP access check missing for "
"instruction 0x%08x\n", s->insn);
abort();
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
bool aarch64;
bool thumb;
bool lse2;
- /* Because unallocated encodings generate different exception syndrome
+ /*
+ * Because unallocated encodings generate different exception syndrome
* information from traps due to FP being disabled, we can't do a single
* "is fp access disabled" check at a high level in the decode tree.
* To help in catching bugs where the access check was forgotten in some
* code path, we set this flag when the access check is done, and assert
* that it is set at the point where we actually touch the FP regs.
+ * 0: not checked,
+ * 1: checked, access ok
+ * -1: checked, access denied
*/
- bool fp_access_checked;
- bool sve_access_checked;
+ int8_t fp_access_checked;
+ int8_t sve_access_checked;
/* ARMv8 single-step state (this is distinct from the QEMU gdbstub
* single-step support).
*/
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool fp_access_check_only(DisasContext *s)
{
if (s->fp_excp_el) {
assert(!s->fp_access_checked);
- s->fp_access_checked = true;
+ s->fp_access_checked = -1;

gen_exception_insn_el(s, 0, EXCP_UDEF,
syn_fp_access_trap(1, 0xe, false, 0),
s->fp_excp_el);
return false;
}
- s->fp_access_checked = true;
+ s->fp_access_checked = 1;
return true;
}

@@ -XXX,XX +XXX,XX @@ bool sve_access_check(DisasContext *s)
syn_sve_access_trap(), s->sve_excp_el);
goto fail_exit;
}
- s->sve_access_checked = true;
+ s->sve_access_checked = 1;
return fp_access_check(s);

fail_exit:
/* Assert that we only raise one exception per instruction. */
assert(!s->sve_access_checked);
- s->sve_access_checked = true;
+ s->sve_access_checked = -1;
return false;
}

@@ -XXX,XX +XXX,XX @@ bool sme_enabled_check(DisasContext *s)
* sme_excp_el by itself for cpregs access checks.
*/
if (!s->fp_excp_el || s->sme_excp_el < s->fp_excp_el) {
- s->fp_access_checked = true;
- return sme_access_check(s);
+ bool ret = sme_access_check(s);
+ s->fp_access_checked = (ret ? 1 : -1);
+ return ret;
}
return fp_access_check_only(s);
}
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
s->insn = insn;
s->base.pc_next = pc + 4;

- s->fp_access_checked = false;
- s->sve_access_checked = false;
+ s->fp_access_checked = 0;
+ s->sve_access_checked = 0;

if (s->pstate_il) {
/*
--
2.43.0
From: Richard Henderson <richard.henderson@linaro.org>

These two were odd, in that do_pfirst_pnext passed the
count of 64-bit words rather than bytes.  Change to pass
the standard pred_full_reg_size to avoid confusion.

Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210113062650.593824-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c    | 7 ++++---
 target/arm/translate-sve.c | 6 +++---
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static intptr_t last_active_element(uint64_t *g, intptr_t words, intptr_t esz)
     return (intptr_t)-1 << esz;
 }

-uint32_t HELPER(sve_pfirst)(void *vd, void *vg, uint32_t words)
+uint32_t HELPER(sve_pfirst)(void *vd, void *vg, uint32_t pred_desc)
 {
+    intptr_t words = DIV_ROUND_UP(FIELD_EX32(pred_desc, PREDDESC, OPRSZ), 8);
     uint32_t flags = PREDTEST_INIT;
     uint64_t *d = vd, *g = vg;
     intptr_t i = 0;
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_pfirst)(void *vd, void *vg, uint32_t words)

 uint32_t HELPER(sve_pnext)(void *vd, void *vg, uint32_t pred_desc)
 {
-    intptr_t words = extract32(pred_desc, 0, SIMD_OPRSZ_BITS);
-    intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+    intptr_t words = DIV_ROUND_UP(FIELD_EX32(pred_desc, PREDDESC, OPRSZ), 8);
+    intptr_t esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);
     uint32_t flags = PREDTEST_INIT;
     uint64_t *d = vd, *g = vg, esz_mask;
     intptr_t i, next;
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool do_pfirst_pnext(DisasContext *s, arg_rr_esz *a,
     TCGv_ptr t_pd = tcg_temp_new_ptr();
     TCGv_ptr t_pg = tcg_temp_new_ptr();
     TCGv_i32 t;
-    unsigned desc;
+    unsigned desc = 0;

-    desc = DIV_ROUND_UP(pred_full_reg_size(s), 8);
-    desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
+    desc = FIELD_DP32(desc, PREDDESC, OPRSZ, pred_full_reg_size(s));
+    desc = FIELD_DP32(desc, PREDDESC, ESZ, a->esz);

     tcg_gen_addi_ptr(t_pd, cpu_env, pred_full_reg_offset(s, a->rd));
     tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->rn));
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

In StreamingMode, fp_access_checked is handled already.
We cannot fall through to fp_access_check lest we fall
foul of the double-check assertion.

Cc: qemu-stable@nongnu.org
Fixes: 285b1d5fcef ("target/arm: Handle SME in sve_access_check")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250307190415.982049-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: move declaration of 'ret' to top of block]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/translate-a64.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static int fp_access_check_vector_hsd(DisasContext *s, bool is_q, MemOp esz)
 bool sve_access_check(DisasContext *s)
 {
     if (s->pstate_sm || !dc_isar_feature(aa64_sve, s)) {
+        bool ret;
+
         assert(dc_isar_feature(aa64_sme, s));
-        if (!sme_sm_enabled_check(s)) {
-            goto fail_exit;
-        }
-    } else if (s->sve_excp_el) {
+        ret = sme_sm_enabled_check(s);
+        s->sve_access_checked = (ret ? 1 : -1);
+        return ret;
+    }
+    if (s->sve_excp_el) {
+        /* Assert that we only raise one exception per instruction. */
+        assert(!s->sve_access_checked);
         gen_exception_insn_el(s, 0, EXCP_UDEF,
                               syn_sve_access_trap(), s->sve_excp_el);
-        goto fail_exit;
+        s->sve_access_checked = -1;
+        return false;
     }
     s->sve_access_checked = 1;
     return fp_access_check(s);
-
- fail_exit:
-    /* Assert that we only raise one exception per instruction. */
-    assert(!s->sve_access_checked);
-    s->sve_access_checked = -1;
-    return false;
 }

 /*
--
2.43.0
Deleted patch
From: Mihai Carabas <mihai.carabas@oracle.com>

To ease the PCI device addition in next patches, split the code as follows:
- generic code (read/write/setup) is being kept in pvpanic.c
- ISA dependent code moved to pvpanic-isa.c

Also, rename:
- ISA_PVPANIC_DEVICE -> PVPANIC_ISA_DEVICE.
- TYPE_PVPANIC -> TYPE_PVPANIC_ISA.
- MemoryRegion io -> mr.
- pvpanic_ioport_* in pvpanic_*.

Update the build system with the new files and config structure.

Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/misc/pvpanic.h | 23 +++++++++-
 hw/misc/pvpanic-isa.c     | 94 +++++++++++++++++++++++++++++++++++++++
 hw/misc/pvpanic.c         | 85 +++--------------------------------
 hw/i386/Kconfig           |  2 +-
 hw/misc/Kconfig           |  6 ++-
 hw/misc/meson.build       |  3 +-
 tests/qtest/meson.build   |  2 +-
 7 files changed, 130 insertions(+), 85 deletions(-)
 create mode 100644 hw/misc/pvpanic-isa.c

diff --git a/include/hw/misc/pvpanic.h b/include/hw/misc/pvpanic.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/misc/pvpanic.h
+++ b/include/hw/misc/pvpanic.h
@@ -XXX,XX +XXX,XX @@

 #include "qom/object.h"

-#define TYPE_PVPANIC "pvpanic"
+#define TYPE_PVPANIC_ISA_DEVICE "pvpanic"

 #define PVPANIC_IOPORT_PROP "ioport"

+/* The bit of supported pv event, TODO: include uapi header and remove this */
+#define PVPANIC_F_PANICKED      0
+#define PVPANIC_F_CRASHLOADED   1
+
+/* The pv event value */
+#define PVPANIC_PANICKED        (1 << PVPANIC_F_PANICKED)
+#define PVPANIC_CRASHLOADED     (1 << PVPANIC_F_CRASHLOADED)
+
+/*
+ * PVPanicState for any device type
+ */
+typedef struct PVPanicState PVPanicState;
+struct PVPanicState {
+    MemoryRegion mr;
+    uint8_t events;
+};
+
+void pvpanic_setup_io(PVPanicState *s, DeviceState *dev, unsigned size);
+
 static inline uint16_t pvpanic_port(void)
 {
-    Object *o = object_resolve_path_type("", TYPE_PVPANIC, NULL);
+    Object *o = object_resolve_path_type("", TYPE_PVPANIC_ISA_DEVICE, NULL);
     if (!o) {
         return 0;
     }
diff --git a/hw/misc/pvpanic-isa.c b/hw/misc/pvpanic-isa.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/misc/pvpanic-isa.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * QEMU simulated pvpanic device.
+ *
+ * Copyright Fujitsu, Corp. 2013
+ *
+ * Authors:
+ *     Wen Congyang <wency@cn.fujitsu.com>
+ *     Hu Tao <hutao@cn.fujitsu.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/log.h"
+#include "qemu/module.h"
+#include "sysemu/runstate.h"
+
+#include "hw/nvram/fw_cfg.h"
+#include "hw/qdev-properties.h"
+#include "hw/misc/pvpanic.h"
+#include "qom/object.h"
+#include "hw/isa/isa.h"
+
+OBJECT_DECLARE_SIMPLE_TYPE(PVPanicISAState, PVPANIC_ISA_DEVICE)
+
+/*
+ * PVPanicISAState for ISA device and
+ * use ioport.
+ */
+struct PVPanicISAState {
+    ISADevice parent_obj;
+
+    uint16_t ioport;
+    PVPanicState pvpanic;
+};
+
+static void pvpanic_isa_initfn(Object *obj)
+{
+    PVPanicISAState *s = PVPANIC_ISA_DEVICE(obj);
+
+    pvpanic_setup_io(&s->pvpanic, DEVICE(s), 1);
+}
+
+static void pvpanic_isa_realizefn(DeviceState *dev, Error **errp)
+{
+    ISADevice *d = ISA_DEVICE(dev);
+    PVPanicISAState *s = PVPANIC_ISA_DEVICE(dev);
+    PVPanicState *ps = &s->pvpanic;
+    FWCfgState *fw_cfg = fw_cfg_find();
+    uint16_t *pvpanic_port;
+
+    if (!fw_cfg) {
+        return;
+    }
+
+    pvpanic_port = g_malloc(sizeof(*pvpanic_port));
+    *pvpanic_port = cpu_to_le16(s->ioport);
+    fw_cfg_add_file(fw_cfg, "etc/pvpanic-port", pvpanic_port,
+                    sizeof(*pvpanic_port));
+
+    isa_register_ioport(d, &ps->mr, s->ioport);
+}
+
+static Property pvpanic_isa_properties[] = {
+    DEFINE_PROP_UINT16(PVPANIC_IOPORT_PROP, PVPanicISAState, ioport, 0x505),
+    DEFINE_PROP_UINT8("events", PVPanicISAState, pvpanic.events, PVPANIC_PANICKED | PVPANIC_CRASHLOADED),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void pvpanic_isa_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    dc->realize = pvpanic_isa_realizefn;
+    device_class_set_props(dc, pvpanic_isa_properties);
+    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
+}
+
+static TypeInfo pvpanic_isa_info = {
+    .name          = TYPE_PVPANIC_ISA_DEVICE,
+    .parent        = TYPE_ISA_DEVICE,
+    .instance_size = sizeof(PVPanicISAState),
+    .instance_init = pvpanic_isa_initfn,
+    .class_init    = pvpanic_isa_class_init,
+};
+
+static void pvpanic_register_types(void)
+{
+    type_register_static(&pvpanic_isa_info);
+}
+
+type_init(pvpanic_register_types)
diff --git a/hw/misc/pvpanic.c b/hw/misc/pvpanic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/pvpanic.c
+++ b/hw/misc/pvpanic.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/misc/pvpanic.h"
 #include "qom/object.h"

-/* The bit of supported pv event, TODO: include uapi header and remove this */
-#define PVPANIC_F_PANICKED      0
-#define PVPANIC_F_CRASHLOADED   1
-
-/* The pv event value */
-#define PVPANIC_PANICKED        (1 << PVPANIC_F_PANICKED)
-#define PVPANIC_CRASHLOADED     (1 << PVPANIC_F_CRASHLOADED)
-
-typedef struct PVPanicState PVPanicState;
-DECLARE_INSTANCE_CHECKER(PVPanicState, ISA_PVPANIC_DEVICE,
-                         TYPE_PVPANIC)
-
 static void handle_event(int event)
 {
     static bool logged;
@@ -XXX,XX +XXX,XX @@ static void handle_event(int event)
     }
 }

-#include "hw/isa/isa.h"
-
-struct PVPanicState {
-    ISADevice parent_obj;
-
-    MemoryRegion io;
-    uint16_t ioport;
-    uint8_t events;
-};
-
 /* return supported events on read */
-static uint64_t pvpanic_ioport_read(void *opaque, hwaddr addr, unsigned size)
+static uint64_t pvpanic_read(void *opaque, hwaddr addr, unsigned size)
 {
     PVPanicState *pvp = opaque;
     return pvp->events;
 }

-static void pvpanic_ioport_write(void *opaque, hwaddr addr, uint64_t val,
+static void pvpanic_write(void *opaque, hwaddr addr, uint64_t val,
                                  unsigned size)
 {
     handle_event(val);
 }

 static const MemoryRegionOps pvpanic_ops = {
-    .read = pvpanic_ioport_read,
-    .write = pvpanic_ioport_write,
+    .read = pvpanic_read,
+    .write = pvpanic_write,
     .impl = {
         .min_access_size = 1,
         .max_access_size = 1,
     },
 };

-static void pvpanic_isa_initfn(Object *obj)
+void pvpanic_setup_io(PVPanicState *s, DeviceState *dev, unsigned size)
 {
-    PVPanicState *s = ISA_PVPANIC_DEVICE(obj);
-
-    memory_region_init_io(&s->io, OBJECT(s), &pvpanic_ops, s, "pvpanic", 1);
+    memory_region_init_io(&s->mr, OBJECT(dev), &pvpanic_ops, s, "pvpanic", size);
 }
-
-static void pvpanic_isa_realizefn(DeviceState *dev, Error **errp)
-{
-    ISADevice *d = ISA_DEVICE(dev);
-    PVPanicState *s = ISA_PVPANIC_DEVICE(dev);
-    FWCfgState *fw_cfg = fw_cfg_find();
-    uint16_t *pvpanic_port;
-
-    if (!fw_cfg) {
-        return;
-    }
-
-    pvpanic_port = g_malloc(sizeof(*pvpanic_port));
-    *pvpanic_port = cpu_to_le16(s->ioport);
-    fw_cfg_add_file(fw_cfg, "etc/pvpanic-port", pvpanic_port,
-                    sizeof(*pvpanic_port));
-
-    isa_register_ioport(d, &s->io, s->ioport);
-}
-
-static Property pvpanic_isa_properties[] = {
-    DEFINE_PROP_UINT16(PVPANIC_IOPORT_PROP, PVPanicState, ioport, 0x505),
-    DEFINE_PROP_UINT8("events", PVPanicState, events, PVPANIC_PANICKED | PVPANIC_CRASHLOADED),
-    DEFINE_PROP_END_OF_LIST(),
-};
-
-static void pvpanic_isa_class_init(ObjectClass *klass, void *data)
-{
-    DeviceClass *dc = DEVICE_CLASS(klass);
-
-    dc->realize = pvpanic_isa_realizefn;
-    device_class_set_props(dc, pvpanic_isa_properties);
-    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
-}
-
-static TypeInfo pvpanic_isa_info = {
-    .name          = TYPE_PVPANIC,
-    .parent        = TYPE_ISA_DEVICE,
-    .instance_size = sizeof(PVPanicState),
-    .instance_init = pvpanic_isa_initfn,
-    .class_init    = pvpanic_isa_class_init,
-};
-
-static void pvpanic_register_types(void)
-{
-    type_register_static(&pvpanic_isa_info);
-}
-
-type_init(pvpanic_register_types)
diff --git a/hw/i386/Kconfig b/hw/i386/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/Kconfig
+++ b/hw/i386/Kconfig
@@ -XXX,XX +XXX,XX @@ config PC
     imply ISA_DEBUG
     imply PARALLEL
     imply PCI_DEVICES
-    imply PVPANIC
+    imply PVPANIC_ISA
     imply QXL
     imply SEV
     imply SGA
diff --git a/hw/misc/Kconfig b/hw/misc/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/Kconfig
+++ b/hw/misc/Kconfig
@@ -XXX,XX +XXX,XX @@ config IOTKIT_SYSCTL
 config IOTKIT_SYSINFO
     bool

-config PVPANIC
+config PVPANIC_COMMON
+    bool
+
+config PVPANIC_ISA
     bool
     depends on ISA_BUS
+    select PVPANIC_COMMON

 config AUX
     bool
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/meson.build
+++ b/hw/misc/meson.build
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_EMC141X', if_true: files('emc141x.c'))
 softmmu_ss.add(when: 'CONFIG_UNIMP', if_true: files('unimp.c'))
 softmmu_ss.add(when: 'CONFIG_EMPTY_SLOT', if_true: files('empty_slot.c'))
 softmmu_ss.add(when: 'CONFIG_LED', if_true: files('led.c'))
+softmmu_ss.add(when: 'CONFIG_PVPANIC_COMMON', if_true: files('pvpanic.c'))

 # ARM devices
 softmmu_ss.add(when: 'CONFIG_PL310', if_true: files('arm_l2x0.c'))
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_IOTKIT_SYSINFO', if_true: files('iotkit-sysinfo.c')
 softmmu_ss.add(when: 'CONFIG_ARMSSE_CPUID', if_true: files('armsse-cpuid.c'))
 softmmu_ss.add(when: 'CONFIG_ARMSSE_MHU', if_true: files('armsse-mhu.c'))

-softmmu_ss.add(when: 'CONFIG_PVPANIC', if_true: files('pvpanic.c'))
+softmmu_ss.add(when: 'CONFIG_PVPANIC_ISA', if_true: files('pvpanic-isa.c'))
 softmmu_ss.add(when: 'CONFIG_AUX', if_true: files('auxbus.c'))
 softmmu_ss.add(when: 'CONFIG_ASPEED_SOC', if_true: files('aspeed_scu.c', 'aspeed_sdmc.c', 'aspeed_xdma.c'))
 softmmu_ss.add(when: 'CONFIG_MSF2', if_true: files('msf2-sysreg.c'))
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -XXX,XX +XXX,XX @@ qtests_i386 = \
   (config_host.has_key('CONFIG_LINUX') and \
    config_all_devices.has_key('CONFIG_ISA_IPMI_BT') ? ['ipmi-bt-test'] : []) + \
   (config_all_devices.has_key('CONFIG_WDT_IB700') ? ['wdt_ib700-test'] : []) + \
-  (config_all_devices.has_key('CONFIG_PVPANIC') ? ['pvpanic-test'] : []) + \
+  (config_all_devices.has_key('CONFIG_PVPANIC_ISA') ? ['pvpanic-test'] : []) + \
   (config_all_devices.has_key('CONFIG_HDA') ? ['intel-hda-test'] : []) + \
   (config_all_devices.has_key('CONFIG_I82801B11') ? ['i82801b11-test'] : []) + \
   (config_all_devices.has_key('CONFIG_IOH3420') ? ['ioh3420-test'] : []) + \
--
2.20.1
Deleted patch
From: Mihai Carabas <mihai.carabas@oracle.com>

Add PCI interface support for PVPANIC device. Create a new file pvpanic-pci.c
where the PCI specific routines reside and update the build system with the new
files and config structure.

Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: wrapped one long line]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/specs/pci-ids.txt    |  1 +
 include/hw/misc/pvpanic.h |  1 +
 include/hw/pci/pci.h      |  1 +
 hw/misc/pvpanic-pci.c     | 95 +++++++++++++++++++++++++++++++++++++++
 hw/misc/Kconfig           |  6 +++
 hw/misc/meson.build       |  1 +
 6 files changed, 105 insertions(+)
 create mode 100644 hw/misc/pvpanic-pci.c

diff --git a/docs/specs/pci-ids.txt b/docs/specs/pci-ids.txt
index XXXXXXX..XXXXXXX 100644
--- a/docs/specs/pci-ids.txt
+++ b/docs/specs/pci-ids.txt
@@ -XXX,XX +XXX,XX @@ PCI devices (other than virtio):
 1b36:000d  PCI xhci usb host adapter
 1b36:000f  mdpy (mdev sample device), linux/samples/vfio-mdev/mdpy.c
 1b36:0010  PCIe NVMe device (-device nvme)
+1b36:0011  PCI PVPanic device (-device pvpanic-pci)

 All these devices are documented in docs/specs.

diff --git a/include/hw/misc/pvpanic.h b/include/hw/misc/pvpanic.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/misc/pvpanic.h
+++ b/include/hw/misc/pvpanic.h
@@ -XXX,XX +XXX,XX @@
 #include "qom/object.h"

 #define TYPE_PVPANIC_ISA_DEVICE "pvpanic"
+#define TYPE_PVPANIC_PCI_DEVICE "pvpanic-pci"

 #define PVPANIC_IOPORT_PROP "ioport"

diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -XXX,XX +XXX,XX @@ extern bool pci_available;
 #define PCI_DEVICE_ID_REDHAT_PCIE_BRIDGE 0x000e
 #define PCI_DEVICE_ID_REDHAT_MDPY        0x000f
 #define PCI_DEVICE_ID_REDHAT_NVME        0x0010
+#define PCI_DEVICE_ID_REDHAT_PVPANIC     0x0011
 #define PCI_DEVICE_ID_REDHAT_QXL         0x0100

 #define FMT_PCIBUS                      PRIx64
diff --git a/hw/misc/pvpanic-pci.c b/hw/misc/pvpanic-pci.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/misc/pvpanic-pci.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * QEMU simulated PCI pvpanic device.
+ *
+ * Copyright (C) 2020 Oracle
+ *
+ * Authors:
+ *     Mihai Carabas <mihai.carabas@oracle.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/log.h"
+#include "qemu/module.h"
+#include "sysemu/runstate.h"
+
+#include "hw/nvram/fw_cfg.h"
+#include "hw/qdev-properties.h"
+#include "migration/vmstate.h"
+#include "hw/misc/pvpanic.h"
+#include "qom/object.h"
+#include "hw/pci/pci.h"
+
+OBJECT_DECLARE_SIMPLE_TYPE(PVPanicPCIState, PVPANIC_PCI_DEVICE)
+
+/*
+ * PVPanicPCIState for PCI device
+ */
+typedef struct PVPanicPCIState {
+    PCIDevice dev;
+    PVPanicState pvpanic;
+} PVPanicPCIState;
+
+static const VMStateDescription vmstate_pvpanic_pci = {
+    .name = "pvpanic-pci",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_PCI_DEVICE(dev, PVPanicPCIState),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static void pvpanic_pci_realizefn(PCIDevice *dev, Error **errp)
+{
+    PVPanicPCIState *s = PVPANIC_PCI_DEVICE(dev);
+    PVPanicState *ps = &s->pvpanic;
+
+    pvpanic_setup_io(&s->pvpanic, DEVICE(s), 2);
+
+    pci_register_bar(dev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY, &ps->mr);
+}
+
+static Property pvpanic_pci_properties[] = {
+    DEFINE_PROP_UINT8("events", PVPanicPCIState, pvpanic.events,
+                      PVPANIC_PANICKED | PVPANIC_CRASHLOADED),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void pvpanic_pci_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    PCIDeviceClass *pc = PCI_DEVICE_CLASS(klass);
+
+    device_class_set_props(dc, pvpanic_pci_properties);
+
+    pc->realize = pvpanic_pci_realizefn;
+    pc->vendor_id = PCI_VENDOR_ID_REDHAT;
+    pc->device_id = PCI_DEVICE_ID_REDHAT_PVPANIC;
+    pc->revision = 1;
+    pc->class_id = PCI_CLASS_SYSTEM_OTHER;
+    dc->vmsd = &vmstate_pvpanic_pci;
+
+    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
+}
+
+static TypeInfo pvpanic_pci_info = {
+    .name          = TYPE_PVPANIC_PCI_DEVICE,
+    .parent        = TYPE_PCI_DEVICE,
+    .instance_size = sizeof(PVPanicPCIState),
+    .class_init    = pvpanic_pci_class_init,
+    .interfaces = (InterfaceInfo[]) {
+        { INTERFACE_CONVENTIONAL_PCI_DEVICE },
+        { }
+    }
+};
+
+static void pvpanic_register_types(void)
+{
+    type_register_static(&pvpanic_pci_info);
+}
+
+type_init(pvpanic_register_types);
diff --git a/hw/misc/Kconfig b/hw/misc/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/Kconfig
+++ b/hw/misc/Kconfig
@@ -XXX,XX +XXX,XX @@ config IOTKIT_SYSINFO
 config PVPANIC_COMMON
     bool

+config PVPANIC_PCI
+    bool
+    default y if PCI_DEVICES
+    depends on PCI
+    select PVPANIC_COMMON
+
 config PVPANIC_ISA
     bool
     depends on ISA_BUS
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/meson.build
+++ b/hw/misc/meson.build
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_ARMSSE_CPUID', if_true: files('armsse-cpuid.c'))
 softmmu_ss.add(when: 'CONFIG_ARMSSE_MHU', if_true: files('armsse-mhu.c'))

 softmmu_ss.add(when: 'CONFIG_PVPANIC_ISA', if_true: files('pvpanic-isa.c'))
+softmmu_ss.add(when: 'CONFIG_PVPANIC_PCI', if_true: files('pvpanic-pci.c'))
 softmmu_ss.add(when: 'CONFIG_AUX', if_true: files('auxbus.c'))
 softmmu_ss.add(when: 'CONFIG_ASPEED_SOC', if_true: files('aspeed_scu.c', 'aspeed_sdmc.c', 'aspeed_xdma.c'))
 softmmu_ss.add(when: 'CONFIG_MSF2', if_true: files('msf2-sysreg.c'))
--
2.20.1
Deleted patch
From: Mihai Carabas <mihai.carabas@oracle.com>

Add a test case for the pvpanic-pci device. The scenario is the same as the
pvpanic ISA device, but using the PCI bus.

Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/pvpanic-pci-test.c | 62 ++++++++++++++++++++++++++++++++++
 tests/qtest/meson.build        |  1 +
 2 files changed, 63 insertions(+)
 create mode 100644 tests/qtest/pvpanic-pci-test.c

diff --git a/tests/qtest/pvpanic-pci-test.c b/tests/qtest/pvpanic-pci-test.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/qtest/pvpanic-pci-test.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * QTest testcase for PV Panic PCI device
+ *
+ * Copyright (C) 2020 Oracle
+ *
+ * Authors:
+ *     Mihai Carabas <mihai.carabas@oracle.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "libqos/libqtest.h"
+#include "qapi/qmp/qdict.h"
+#include "libqos/pci.h"
+#include "libqos/pci-pc.h"
+#include "hw/pci/pci_regs.h"
+
+static void test_panic(void)
+{
+    uint8_t val;
+    QDict *response, *data;
+    QTestState *qts;
+    QPCIBus *pcibus;
+    QPCIDevice *dev;
+    QPCIBar bar;
+
+    qts = qtest_init("-device pvpanic-pci");
+    pcibus = qpci_new_pc(qts, NULL);
+    dev = qpci_device_find(pcibus, QPCI_DEVFN(0x4, 0x0));
+    qpci_device_enable(dev);
+    bar = qpci_iomap(dev, 0, NULL);
+
+    qpci_memread(dev, bar, 0, &val, sizeof(val));
+    g_assert_cmpuint(val, ==, 3);
+
+    val = 1;
+    qpci_memwrite(dev, bar, 0, &val, sizeof(val));
+
+    response = qtest_qmp_eventwait_ref(qts, "GUEST_PANICKED");
+    g_assert(qdict_haskey(response, "data"));
+    data = qdict_get_qdict(response, "data");
+    g_assert(qdict_haskey(data, "action"));
+    g_assert_cmpstr(qdict_get_str(data, "action"), ==, "pause");
+    qobject_unref(response);
+
+    qtest_quit(qts);
+}
+
+int main(int argc, char **argv)
+{
+    int ret;
+
+    g_test_init(&argc, &argv, NULL);
+    qtest_add_func("/pvpanic-pci/panic", test_panic);
+
+    ret = g_test_run();
+
+    return ret;
+}
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -XXX,XX +XXX,XX @@ endif

 qtests_pci = \
   (config_all_devices.has_key('CONFIG_VGA') ? ['display-vga-test'] : []) + \
+  (config_all_devices.has_key('CONFIG_PVPANIC_PCI') ? ['pvpanic-pci-test'] : []) + \
   (config_all_devices.has_key('CONFIG_IVSHMEM_DEVICE') ? ['ivshmem-test'] : [])

 qtests_i386 = \
--
2.20.1
When we first converted our documentation to Sphinx, we split it into
multiple manuals (system, interop, tools, etc), which are all built
separately. The primary driver for this was wanting to be able to
avoid shipping the 'devel' manual to end-users. However, this is
working against the grain of the way Sphinx wants to be used and
causes some annoyances:
 * Cross-references between documents become much harder or
   possibly impossible
 * There is no single index to the whole documentation
 * Within one manual there's no links or table-of-contents info
   that lets you easily navigate to the others
 * The devel manual doesn't get published on the QEMU website
   (it would be nice to be able to refer to it there)

Merely hiding our developer documentation from end users seems like
it's not enough benefit for these costs. Combine all the
documentation into a single manual (the same way that the readthedocs
site builds it) and install the whole thing. The previous manual
divisions remain as the new top level sections in the manual.

 * The per-manual conf.py files are no longer needed
 * The man_pages[] specifications previously in each per-manual
   conf.py move to the top level conf.py
 * docs/meson.build logic is simplified as we now only need to run
   Sphinx once for the HTML and then once for the manpages
 * The old index.html.in that produced the top-level page with
   links to each manual is no longer needed

Unfortunately this means that we now have to build the HTML
documentation into docs/manual in the build tree rather than directly
into docs/; otherwise it is too awkward to ensure we install only the
built manual and not also the dependency info, stamp file, etc. The
manual still ends up in the same place in the final installed
directory, but anybody who was consulting documentation from within
the build tree will have to adjust where they're looking.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20210115154449.4801-1-peter.maydell@linaro.org
---
 docs/conf.py         | 46 ++++++++++++++++++++++++++++++-
 docs/devel/conf.py   | 15 -----------
 docs/index.html.in   | 17 ------------
 docs/interop/conf.py | 28 -------------------
 docs/meson.build     | 64 +++++++++++++++++---------------------------
 docs/specs/conf.py   | 16 -----------
 docs/system/conf.py  | 28 -------------------
 docs/tools/conf.py   | 37 -------------------------
 docs/user/conf.py    | 15 -----------
 .gitlab-ci.yml       |  4 +--
 10 files changed, 72 insertions(+), 198 deletions(-)
 delete mode 100644 docs/devel/conf.py
 delete mode 100644 docs/index.html.in
 delete mode 100644 docs/interop/conf.py
 delete mode 100644 docs/specs/conf.py
 delete mode 100644 docs/system/conf.py
 delete mode 100644 docs/tools/conf.py
 delete mode 100644 docs/user/conf.py

diff --git a/docs/conf.py b/docs/conf.py
index XXXXXXX..XXXXXXX 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -XXX,XX +XXX,XX @@ latex_documents = [

 # -- Options for manual page output ---------------------------------------
 # Individual manual/conf.py can override this to create man pages
-man_pages = []
+man_pages = [
+    ('interop/qemu-ga', 'qemu-ga',
+     'QEMU Guest Agent',
+     ['Michael Roth <mdroth@linux.vnet.ibm.com>'], 8),
+    ('interop/qemu-ga-ref', 'qemu-ga-ref',
+     'QEMU Guest Agent Protocol Reference',
+     [], 7),
+    ('interop/qemu-qmp-ref', 'qemu-qmp-ref',
+     'QEMU QMP Reference Manual',
+     [], 7),
+    ('interop/qemu-storage-daemon-qmp-ref', 'qemu-storage-daemon-qmp-ref',
+     'QEMU Storage Daemon QMP Reference Manual',
+     [], 7),
+    ('system/qemu-manpage', 'qemu',
+     'QEMU User Documentation',
+     ['Fabrice Bellard'], 1),
+    ('system/qemu-block-drivers', 'qemu-block-drivers',
+     'QEMU block drivers reference',
+     ['Fabrice Bellard and the QEMU Project developers'], 7),
+    ('system/qemu-cpu-models', 'qemu-cpu-models',
+     'QEMU CPU Models',
+     ['The QEMU Project developers'], 7),
+    ('tools/qemu-img', 'qemu-img',
+     'QEMU disk image utility',
+     ['Fabrice Bellard'], 1),
+    ('tools/qemu-nbd', 'qemu-nbd',
+     'QEMU Disk Network Block Device Server',
+     ['Anthony Liguori <anthony@codemonkey.ws>'], 8),
+    ('tools/qemu-pr-helper', 'qemu-pr-helper',
+     'QEMU persistent reservation helper',
+     [], 8),
+    ('tools/qemu-storage-daemon', 'qemu-storage-daemon',
+     'QEMU storage daemon',
+     [], 1),
+    ('tools/qemu-trace-stap', 'qemu-trace-stap',
+     'QEMU SystemTap trace tool',
+     [], 1),
+    ('tools/virtfs-proxy-helper', 'virtfs-proxy-helper',
+     'QEMU 9p virtfs proxy filesystem helper',
+     ['M. Mohan Kumar'], 1),
+    ('tools/virtiofsd', 'virtiofsd',
+     'QEMU virtio-fs shared file system daemon',
+     ['Stefan Hajnoczi <stefanha@redhat.com>',
+      'Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>'], 1),
+]

 # -- Options for Texinfo output -------------------------------------------

diff --git a/docs/devel/conf.py b/docs/devel/conf.py
deleted file mode 100644
index XXXXXXX..XXXXXXX
--- a/docs/devel/conf.py
+++ /dev/null
@@ -XXX,XX +XXX,XX @@
-# -*- coding: utf-8 -*-
-#
-# QEMU documentation build configuration file for the 'devel' manual.
-#
-# This includes the top level conf file and then makes any necessary tweaks.
-import sys
-import os
-
-qemu_docdir = os.path.abspath("..")

We want to capture potential Rust backtraces on panics in our test
logs, which isn't Rust's default behaviour. Set RUST_BACKTRACE=1 in
the add_test_setup environments, so that all our tests get run with
this environment variable set.

This makes the setting of that variable in the gitlab CI template
redundant, so we can remove it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20250310102950.3752908-1-peter.maydell@linaro.org
---
 meson.build                         | 9 ++++++---
 .gitlab-ci.d/buildtest-template.yml | 1 -
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/meson.build b/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/meson.build
+++ b/meson.build
@@ -XXX,XX +XXX,XX @@ project('qemu', ['c'], meson_version: '>=1.5.0',

 meson.add_devenv({ 'MESON_BUILD_ROOT' : meson.project_build_root() })

-add_test_setup('quick', exclude_suites: ['slow', 'thorough'], is_default: true)
-add_test_setup('slow', exclude_suites: ['thorough'], env: ['G_TEST_SLOW=1', 'SPEED=slow'])
-add_test_setup('thorough', env: ['G_TEST_SLOW=1', 'SPEED=thorough'])
+add_test_setup('quick', exclude_suites: ['slow', 'thorough'], is_default: true,
+               env: ['RUST_BACKTRACE=1'])
+add_test_setup('slow', exclude_suites: ['thorough'],
+               env: ['G_TEST_SLOW=1', 'SPEED=slow', 'RUST_BACKTRACE=1'])
+add_test_setup('thorough',
+               env: ['G_TEST_SLOW=1', 'SPEED=thorough', 'RUST_BACKTRACE=1'])

 meson.add_postconf_script(find_program('scripts/symlink-install-tree.py'))

diff --git a/.gitlab-ci.d/buildtest-template.yml b/.gitlab-ci.d/buildtest-template.yml
index XXXXXXX..XXXXXXX 100644
--- a/.gitlab-ci.d/buildtest-template.yml
+++ b/.gitlab-ci.d/buildtest-template.yml
@@ -XXX,XX +XXX,XX @@
   stage: test
   image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
   script:
-    - export RUST_BACKTRACE=1
     - source scripts/ci/gitlab-ci-section
     - section_start buildenv "Setting up to run tests"
     - scripts/git-submodule.sh update roms/SLOF
132
-parent_config = os.path.join(qemu_docdir, "conf.py")
133
-exec(compile(open(parent_config, "rb").read(), parent_config, 'exec'))
134
-
135
-# This slightly misuses the 'description', but is the best way to get
136
-# the manual title to appear in the sidebar.
137
-html_theme_options['description'] = u'Developer''s Guide'
138
diff --git a/docs/index.html.in b/docs/index.html.in
139
deleted file mode 100644
140
index XXXXXXX..XXXXXXX
141
--- a/docs/index.html.in
142
+++ /dev/null
143
@@ -XXX,XX +XXX,XX @@
144
-<!DOCTYPE html>
145
-<html lang="en">
146
- <head>
147
- <meta charset="UTF-8">
148
- <title>QEMU @VERSION@ Documentation</title>
149
- </head>
150
- <body>
151
- <h1>QEMU @VERSION@ Documentation</h1>
152
- <ul>
153
- <li><a href="system/index.html">System Emulation User's Guide</a></li>
154
- <li><a href="user/index.html">User Mode Emulation User's Guide</a></li>
155
- <li><a href="tools/index.html">Tools Guide</a></li>
156
- <li><a href="interop/index.html">System Emulation Management and Interoperability Guide</a></li>
157
- <li><a href="specs/index.html">System Emulation Guest Hardware Specifications</a></li>
158
- </ul>
159
- </body>
160
-</html>
161
diff --git a/docs/interop/conf.py b/docs/interop/conf.py
162
deleted file mode 100644
163
index XXXXXXX..XXXXXXX
164
--- a/docs/interop/conf.py
165
+++ /dev/null
166
@@ -XXX,XX +XXX,XX @@
167
-# -*- coding: utf-8 -*-
168
-#
169
-# QEMU documentation build configuration file for the 'interop' manual.
170
-#
171
-# This includes the top level conf file and then makes any necessary tweaks.
172
-import sys
173
-import os
174
-
175
-qemu_docdir = os.path.abspath("..")
176
-parent_config = os.path.join(qemu_docdir, "conf.py")
177
-exec(compile(open(parent_config, "rb").read(), parent_config, 'exec'))
178
-
179
-# This slightly misuses the 'description', but is the best way to get
180
-# the manual title to appear in the sidebar.
181
-html_theme_options['description'] = u'System Emulation Management and Interoperability Guide'
182
-
183
-# One entry per manual page. List of tuples
184
-# (source start file, name, description, authors, manual section).
185
-man_pages = [
186
- ('qemu-ga', 'qemu-ga', u'QEMU Guest Agent',
187
- ['Michael Roth <mdroth@linux.vnet.ibm.com>'], 8),
188
- ('qemu-ga-ref', 'qemu-ga-ref', 'QEMU Guest Agent Protocol Reference',
189
- [], 7),
190
- ('qemu-qmp-ref', 'qemu-qmp-ref', 'QEMU QMP Reference Manual',
191
- [], 7),
192
- ('qemu-storage-daemon-qmp-ref', 'qemu-storage-daemon-qmp-ref',
193
- 'QEMU Storage Daemon QMP Reference Manual', [], 7),
194
-]
195
diff --git a/docs/meson.build b/docs/meson.build
196
index XXXXXXX..XXXXXXX 100644
197
--- a/docs/meson.build
198
+++ b/docs/meson.build
199
@@ -XXX,XX +XXX,XX @@ if build_docs
200
meson.source_root() / 'docs/sphinx/qmp_lexer.py',
201
qapi_gen_depends ]
202
203
- configure_file(output: 'index.html',
204
- input: files('index.html.in'),
205
- configuration: {'VERSION': meson.project_version()},
206
- install_dir: qemu_docdir)
207
- manuals = [ 'devel', 'interop', 'tools', 'specs', 'system', 'user' ]
208
man_pages = {
209
- 'interop' : {
210
'qemu-ga.8': (have_tools ? 'man8' : ''),
211
'qemu-ga-ref.7': 'man7',
212
'qemu-qmp-ref.7': 'man7',
213
'qemu-storage-daemon-qmp-ref.7': (have_tools ? 'man7' : ''),
214
- },
215
- 'tools': {
216
'qemu-img.1': (have_tools ? 'man1' : ''),
217
'qemu-nbd.8': (have_tools ? 'man8' : ''),
218
'qemu-pr-helper.8': (have_tools ? 'man8' : ''),
219
@@ -XXX,XX +XXX,XX @@ if build_docs
220
'qemu-trace-stap.1': (config_host.has_key('CONFIG_TRACE_SYSTEMTAP') ? 'man1' : ''),
221
'virtfs-proxy-helper.1': (have_virtfs_proxy_helper ? 'man1' : ''),
222
'virtiofsd.1': (have_virtiofsd ? 'man1' : ''),
223
- },
224
- 'system': {
225
'qemu.1': 'man1',
226
'qemu-block-drivers.7': 'man7',
227
'qemu-cpu-models.7': 'man7'
228
- },
229
}
230
231
sphinxdocs = []
232
sphinxmans = []
233
- foreach manual : manuals
234
- private_dir = meson.current_build_dir() / (manual + '.p')
235
- output_dir = meson.current_build_dir() / manual
236
- input_dir = meson.current_source_dir() / manual
237
238
- this_manual = custom_target(manual + ' manual',
239
+ private_dir = meson.current_build_dir() / 'manual.p'
240
+ output_dir = meson.current_build_dir() / 'manual'
241
+ input_dir = meson.current_source_dir()
242
+
243
+ this_manual = custom_target('QEMU manual',
244
build_by_default: build_docs,
245
- output: [manual + '.stamp'],
246
- input: [files('conf.py'), files(manual / 'conf.py')],
247
- depfile: manual + '.d',
248
+ output: 'docs.stamp',
249
+ input: files('conf.py'),
250
+ depfile: 'docs.d',
251
depend_files: sphinx_extn_depends,
252
command: [SPHINX_ARGS, '-Ddepfile=@DEPFILE@',
253
'-Ddepfile_stamp=@OUTPUT0@',
254
'-b', 'html', '-d', private_dir,
255
input_dir, output_dir])
256
- sphinxdocs += this_manual
257
- if build_docs and manual != 'devel'
258
- install_subdir(output_dir, install_dir: qemu_docdir)
259
- endif
260
+ sphinxdocs += this_manual
261
+ install_subdir(output_dir, install_dir: qemu_docdir, strip_directory: true)
262
263
- these_man_pages = []
264
- install_dirs = []
265
- foreach page, section : man_pages.get(manual, {})
266
- these_man_pages += page
267
- install_dirs += section == '' ? false : get_option('mandir') / section
268
- endforeach
269
- if these_man_pages.length() > 0
270
- sphinxmans += custom_target(manual + ' man pages',
271
- build_by_default: build_docs,
272
- output: these_man_pages,
273
- input: this_manual,
274
- install: build_docs,
275
- install_dir: install_dirs,
276
- command: [SPHINX_ARGS, '-b', 'man', '-d', private_dir,
277
- input_dir, meson.current_build_dir()])
278
- endif
279
+ these_man_pages = []
280
+ install_dirs = []
281
+ foreach page, section : man_pages
282
+ these_man_pages += page
283
+ install_dirs += section == '' ? false : get_option('mandir') / section
284
endforeach
285
+
286
+ sphinxmans += custom_target('QEMU man pages',
287
+ build_by_default: build_docs,
288
+ output: these_man_pages,
289
+ input: this_manual,
290
+ install: build_docs,
291
+ install_dir: install_dirs,
292
+ command: [SPHINX_ARGS, '-b', 'man', '-d', private_dir,
293
+ input_dir, meson.current_build_dir()])
294
+
295
alias_target('sphinxdocs', sphinxdocs)
296
alias_target('html', sphinxdocs)
297
alias_target('man', sphinxmans)
298
diff --git a/docs/specs/conf.py b/docs/specs/conf.py
299
deleted file mode 100644
300
index XXXXXXX..XXXXXXX
301
--- a/docs/specs/conf.py
302
+++ /dev/null
303
@@ -XXX,XX +XXX,XX @@
304
-# -*- coding: utf-8 -*-
305
-#
306
-# QEMU documentation build configuration file for the 'specs' manual.
307
-#
308
-# This includes the top level conf file and then makes any necessary tweaks.
309
-import sys
310
-import os
311
-
312
-qemu_docdir = os.path.abspath("..")
313
-parent_config = os.path.join(qemu_docdir, "conf.py")
314
-exec(compile(open(parent_config, "rb").read(), parent_config, 'exec'))
315
-
316
-# This slightly misuses the 'description', but is the best way to get
317
-# the manual title to appear in the sidebar.
318
-html_theme_options['description'] = \
319
- u'System Emulation Guest Hardware Specifications'
320
diff --git a/docs/system/conf.py b/docs/system/conf.py
321
deleted file mode 100644
322
index XXXXXXX..XXXXXXX
323
--- a/docs/system/conf.py
324
+++ /dev/null
325
@@ -XXX,XX +XXX,XX @@
326
-# -*- coding: utf-8 -*-
327
-#
328
-# QEMU documentation build configuration file for the 'system' manual.
329
-#
330
-# This includes the top level conf file and then makes any necessary tweaks.
331
-import sys
332
-import os
333
-
334
-qemu_docdir = os.path.abspath("..")
335
-parent_config = os.path.join(qemu_docdir, "conf.py")
336
-exec(compile(open(parent_config, "rb").read(), parent_config, 'exec'))
337
-
338
-# This slightly misuses the 'description', but is the best way to get
339
-# the manual title to appear in the sidebar.
340
-html_theme_options['description'] = u'System Emulation User''s Guide'
341
-
342
-# One entry per manual page. List of tuples
343
-# (source start file, name, description, authors, manual section).
344
-man_pages = [
345
- ('qemu-manpage', 'qemu', u'QEMU User Documentation',
346
- ['Fabrice Bellard'], 1),
347
- ('qemu-block-drivers', 'qemu-block-drivers',
348
- u'QEMU block drivers reference',
349
- ['Fabrice Bellard and the QEMU Project developers'], 7),
350
- ('qemu-cpu-models', 'qemu-cpu-models',
351
- u'QEMU CPU Models',
352
- ['The QEMU Project developers'], 7)
353
-]
354
diff --git a/docs/tools/conf.py b/docs/tools/conf.py
355
deleted file mode 100644
356
index XXXXXXX..XXXXXXX
357
--- a/docs/tools/conf.py
358
+++ /dev/null
359
@@ -XXX,XX +XXX,XX @@
360
-# -*- coding: utf-8 -*-
361
-#
362
-# QEMU documentation build configuration file for the 'tools' manual.
363
-#
364
-# This includes the top level conf file and then makes any necessary tweaks.
365
-import sys
366
-import os
367
-
368
-qemu_docdir = os.path.abspath("..")
369
-parent_config = os.path.join(qemu_docdir, "conf.py")
370
-exec(compile(open(parent_config, "rb").read(), parent_config, 'exec'))
371
-
372
-# This slightly misuses the 'description', but is the best way to get
373
-# the manual title to appear in the sidebar.
374
-html_theme_options['description'] = \
375
- u'Tools Guide'
376
-
377
-# One entry per manual page. List of tuples
378
-# (source start file, name, description, authors, manual section).
379
-man_pages = [
380
- ('qemu-img', 'qemu-img', u'QEMU disk image utility',
381
- ['Fabrice Bellard'], 1),
382
- ('qemu-storage-daemon', 'qemu-storage-daemon', u'QEMU storage daemon',
383
- [], 1),
384
- ('qemu-nbd', 'qemu-nbd', u'QEMU Disk Network Block Device Server',
385
- ['Anthony Liguori <anthony@codemonkey.ws>'], 8),
386
- ('qemu-pr-helper', 'qemu-pr-helper', 'QEMU persistent reservation helper',
387
- [], 8),
388
- ('qemu-trace-stap', 'qemu-trace-stap', u'QEMU SystemTap trace tool',
389
- [], 1),
390
- ('virtfs-proxy-helper', 'virtfs-proxy-helper',
391
- u'QEMU 9p virtfs proxy filesystem helper',
392
- ['M. Mohan Kumar'], 1),
393
- ('virtiofsd', 'virtiofsd', u'QEMU virtio-fs shared file system daemon',
394
- ['Stefan Hajnoczi <stefanha@redhat.com>',
395
- 'Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>'], 1),
396
-]
397
diff --git a/docs/user/conf.py b/docs/user/conf.py
398
deleted file mode 100644
399
index XXXXXXX..XXXXXXX
400
--- a/docs/user/conf.py
401
+++ /dev/null
402
@@ -XXX,XX +XXX,XX @@
403
-# -*- coding: utf-8 -*-
404
-#
405
-# QEMU documentation build configuration file for the 'user' manual.
406
-#
407
-# This includes the top level conf file and then makes any necessary tweaks.
408
-import sys
409
-import os
410
-
411
-qemu_docdir = os.path.abspath("..")
412
-parent_config = os.path.join(qemu_docdir, "conf.py")
413
-exec(compile(open(parent_config, "rb").read(), parent_config, 'exec'))
414
-
415
-# This slightly misuses the 'description', but is the best way to get
416
-# the manual title to appear in the sidebar.
417
-html_theme_options['description'] = u'User Mode Emulation User''s Guide'
418
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
419
index XXXXXXX..XXXXXXX 100644
420
--- a/.gitlab-ci.yml
421
+++ b/.gitlab-ci.yml
422
@@ -XXX,XX +XXX,XX @@ pages:
423
-t "Welcome to the QEMU sourcecode"
424
- mv HTML public/src
425
# Project documentation
426
- - mv build/docs/index.html public/
427
- - for i in devel interop specs system tools user ; do mv build/docs/$i public/ ; done
428
+ - make -C build install DESTDIR=$(pwd)/temp-install
429
+ - mv temp-install/usr/local/share/doc/qemu/* public/
430
artifacts:
431
paths:
432
- public
433
--
50
--
434
2.20.1
51
2.43.0
435
52
436