A last collection of patches to squeeze in before rc0.

thanks
-- PMM

The following changes since commit 8f6330a807f2642dc2a3cdf33347aa28a4c00a87:

  Merge tag 'pull-maintainer-updates-060324-1' of https://gitlab.com/stsquad/qemu into staging (2024-03-06 16:56:20 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20240308

for you to fetch changes up to bbf6c6dbead82292a20951eb1204442a6b838de9:

  target/arm: Move v7m-related code from cpu32.c into a separate file (2024-03-08 14:45:03 +0000)

----------------------------------------------------------------
target-arm queue:
 * Implement FEAT_ECV
 * STM32L4x5: Implement GPIO device
 * Fix 32-bit SMOPA
 * Refactor v7m related code from cpu32.c into its own file
 * hw/rtc/sun4v-rtc: Relicense to GPLv2-or-later

----------------------------------------------------------------
Inès Varhol (3):
      hw/gpio: Implement STM32L4x5 GPIO
      hw/arm: Connect STM32L4x5 GPIO to STM32L4x5 SoC
      tests/qtest: Add STM32L4x5 GPIO QTest testcase

Peter Maydell (9):
      target/arm: Move some register related defines to internals.h
      target/arm: Timer _EL02 registers UNDEF for E2H == 0
      target/arm: use FIELD macro for CNTHCTL bit definitions
      target/arm: Don't allow RES0 CNTHCTL_EL2 bits to be written
      target/arm: Implement new FEAT_ECV trap bits
      target/arm: Define CNTPCTSS_EL0 and CNTVCTSS_EL0
      target/arm: Implement FEAT_ECV CNTPOFF_EL2 handling
      target/arm: Enable FEAT_ECV for 'max' CPU
      hw/rtc/sun4v-rtc: Relicense to GPLv2-or-later

Richard Henderson (1):
      target/arm: Fix 32-bit SMOPA

Thomas Huth (1):
      target/arm: Move v7m-related code from cpu32.c into a separate file

 MAINTAINERS | 1 +
 docs/system/arm/b-l475e-iot01a.rst | 2 +-
 docs/system/arm/emulation.rst | 1 +
 include/hw/arm/stm32l4x5_soc.h | 2 +
 include/hw/gpio/stm32l4x5_gpio.h | 71 +++++
 include/hw/misc/stm32l4x5_syscfg.h | 3 +-
 include/hw/rtc/sun4v-rtc.h | 2 +-
 target/arm/cpu-features.h | 10 +
 target/arm/cpu.h | 129 +--------
 target/arm/internals.h | 151 ++++++++++
 hw/arm/stm32l4x5_soc.c | 71 ++++-
 hw/gpio/stm32l4x5_gpio.c | 477 ++++++++++++++++++++++++++++++++
 hw/misc/stm32l4x5_syscfg.c | 1 +
 hw/rtc/sun4v-rtc.c | 2 +-
 target/arm/helper.c | 189 ++++++++++++-
 target/arm/tcg/cpu-v7m.c | 290 +++++++++++++++++++
 target/arm/tcg/cpu32.c | 261 ------------------
 target/arm/tcg/cpu64.c | 1 +
 target/arm/tcg/sme_helper.c | 77 +++---
 tests/qtest/stm32l4x5_gpio-test.c | 551 +++++++++++++++++++++++++++++++++++++
 tests/tcg/aarch64/sme-smopa-1.c | 47 ++++
 tests/tcg/aarch64/sme-smopa-2.c | 54 ++++
 hw/arm/Kconfig | 3 +-
 hw/gpio/Kconfig | 3 +
 hw/gpio/meson.build | 1 +
 hw/gpio/trace-events | 6 +
 target/arm/meson.build | 3 +
 target/arm/tcg/meson.build | 3 +
 target/arm/trace-events | 1 +
 tests/qtest/meson.build | 3 +-
 tests/tcg/aarch64/Makefile.target | 2 +-
 31 files changed, 1962 insertions(+), 456 deletions(-)
 create mode 100644 include/hw/gpio/stm32l4x5_gpio.h
 create mode 100644 hw/gpio/stm32l4x5_gpio.c
 create mode 100644 target/arm/tcg/cpu-v7m.c
 create mode 100644 tests/qtest/stm32l4x5_gpio-test.c
 create mode 100644 tests/tcg/aarch64/sme-smopa-1.c
 create mode 100644 tests/tcg/aarch64/sme-smopa-2.c
cpu.h has a lot of #defines relating to CPU register fields.
Most of these aren't actually used outside target/arm code,
so there's no point in cluttering up the cpu.h file with them.
Move some easy ones to internals.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301183219.2424889-2-peter.maydell@linaro.org
---
 target/arm/cpu.h | 128 -----------------------------------------
 target/arm/internals.h | 128 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 128 insertions(+), 128 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct ARMGenericTimer {
 uint64_t ctl; /* Timer Control register */
 } ARMGenericTimer;

-#define VTCR_NSW (1u << 29)
-#define VTCR_NSA (1u << 30)
-#define VSTCR_SW VTCR_NSW
-#define VSTCR_SA VTCR_NSA
-
 /* Define a maximum sized vector register.
 * For 32-bit, this is a 128-bit NEON/AdvSIMD register.
 * For 64-bit, this is a 2048-bit SVE register.
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
 #define SCTLR_SPINTMASK (1ULL << 62) /* FEAT_NMI */
 #define SCTLR_TIDCP (1ULL << 63) /* FEAT_TIDCP1 */

-/* Bit definitions for CPACR (AArch32 only) */
-FIELD(CPACR, CP10, 20, 2)
-FIELD(CPACR, CP11, 22, 2)
-FIELD(CPACR, TRCDIS, 28, 1) /* matches CPACR_EL1.TTA */
-FIELD(CPACR, D32DIS, 30, 1) /* up to v7; RAZ in v8 */
-FIELD(CPACR, ASEDIS, 31, 1)
-
-/* Bit definitions for CPACR_EL1 (AArch64 only) */
-FIELD(CPACR_EL1, ZEN, 16, 2)
-FIELD(CPACR_EL1, FPEN, 20, 2)
-FIELD(CPACR_EL1, SMEN, 24, 2)
-FIELD(CPACR_EL1, TTA, 28, 1) /* matches CPACR.TRCDIS */
-
-/* Bit definitions for HCPTR (AArch32 only) */
-FIELD(HCPTR, TCP10, 10, 1)
-FIELD(HCPTR, TCP11, 11, 1)
-FIELD(HCPTR, TASE, 15, 1)
-FIELD(HCPTR, TTA, 20, 1)
-FIELD(HCPTR, TAM, 30, 1) /* matches CPTR_EL2.TAM */
-FIELD(HCPTR, TCPAC, 31, 1) /* matches CPTR_EL2.TCPAC */
-
-/* Bit definitions for CPTR_EL2 (AArch64 only) */
-FIELD(CPTR_EL2, TZ, 8, 1) /* !E2H */
-FIELD(CPTR_EL2, TFP, 10, 1) /* !E2H, matches HCPTR.TCP10 */
-FIELD(CPTR_EL2, TSM, 12, 1) /* !E2H */
-FIELD(CPTR_EL2, ZEN, 16, 2) /* E2H */
-FIELD(CPTR_EL2, FPEN, 20, 2) /* E2H */
-FIELD(CPTR_EL2, SMEN, 24, 2) /* E2H */
-FIELD(CPTR_EL2, TTA, 28, 1)
-FIELD(CPTR_EL2, TAM, 30, 1) /* matches HCPTR.TAM */
-FIELD(CPTR_EL2, TCPAC, 31, 1) /* matches HCPTR.TCPAC */
-
-/* Bit definitions for CPTR_EL3 (AArch64 only) */
-FIELD(CPTR_EL3, EZ, 8, 1)
-FIELD(CPTR_EL3, TFP, 10, 1)
-FIELD(CPTR_EL3, ESM, 12, 1)
-FIELD(CPTR_EL3, TTA, 20, 1)
-FIELD(CPTR_EL3, TAM, 30, 1)
-FIELD(CPTR_EL3, TCPAC, 31, 1)
-
-#define MDCR_MTPME (1U << 28)
-#define MDCR_TDCC (1U << 27)
-#define MDCR_HLP (1U << 26) /* MDCR_EL2 */
-#define MDCR_SCCD (1U << 23) /* MDCR_EL3 */
-#define MDCR_HCCD (1U << 23) /* MDCR_EL2 */
-#define MDCR_EPMAD (1U << 21)
-#define MDCR_EDAD (1U << 20)
-#define MDCR_TTRF (1U << 19)
-#define MDCR_STE (1U << 18) /* MDCR_EL3 */
-#define MDCR_SPME (1U << 17) /* MDCR_EL3 */
-#define MDCR_HPMD (1U << 17) /* MDCR_EL2 */
-#define MDCR_SDD (1U << 16)
-#define MDCR_SPD (3U << 14)
-#define MDCR_TDRA (1U << 11)
-#define MDCR_TDOSA (1U << 10)
-#define MDCR_TDA (1U << 9)
-#define MDCR_TDE (1U << 8)
-#define MDCR_HPME (1U << 7)
-#define MDCR_TPM (1U << 6)
-#define MDCR_TPMCR (1U << 5)
-#define MDCR_HPMN (0x1fU)
-
-/* Not all of the MDCR_EL3 bits are present in the 32-bit SDCR */
-#define SDCR_VALID_MASK (MDCR_MTPME | MDCR_TDCC | MDCR_SCCD | \
- MDCR_EPMAD | MDCR_EDAD | MDCR_TTRF | \
- MDCR_STE | MDCR_SPME | MDCR_SPD)
-
 #define CPSR_M (0x1fU)
 #define CPSR_T (1U << 5)
 #define CPSR_F (1U << 6)
@@ -XXX,XX +XXX,XX @@ FIELD(CPTR_EL3, TCPAC, 31, 1)
 #define XPSR_NZCV CPSR_NZCV
 #define XPSR_IT CPSR_IT

-#define TTBCR_N (7U << 0) /* TTBCR.EAE==0 */
-#define TTBCR_T0SZ (7U << 0) /* TTBCR.EAE==1 */
-#define TTBCR_PD0 (1U << 4)
-#define TTBCR_PD1 (1U << 5)
-#define TTBCR_EPD0 (1U << 7)
-#define TTBCR_IRGN0 (3U << 8)
-#define TTBCR_ORGN0 (3U << 10)
-#define TTBCR_SH0 (3U << 12)
-#define TTBCR_T1SZ (3U << 16)
-#define TTBCR_A1 (1U << 22)
-#define TTBCR_EPD1 (1U << 23)
-#define TTBCR_IRGN1 (3U << 24)
-#define TTBCR_ORGN1 (3U << 26)
-#define TTBCR_SH1 (1U << 28)
-#define TTBCR_EAE (1U << 31)
-
-FIELD(VTCR, T0SZ, 0, 6)
-FIELD(VTCR, SL0, 6, 2)
-FIELD(VTCR, IRGN0, 8, 2)
-FIELD(VTCR, ORGN0, 10, 2)
-FIELD(VTCR, SH0, 12, 2)
-FIELD(VTCR, TG0, 14, 2)
-FIELD(VTCR, PS, 16, 3)
-FIELD(VTCR, VS, 19, 1)
-FIELD(VTCR, HA, 21, 1)
-FIELD(VTCR, HD, 22, 1)
-FIELD(VTCR, HWU59, 25, 1)
-FIELD(VTCR, HWU60, 26, 1)
-FIELD(VTCR, HWU61, 27, 1)
-FIELD(VTCR, HWU62, 28, 1)
-FIELD(VTCR, NSW, 29, 1)
-FIELD(VTCR, NSA, 30, 1)
-FIELD(VTCR, DS, 32, 1)
-FIELD(VTCR, SL2, 33, 1)
-
 /* Bit definitions for ARMv8 SPSR (PSTATE) format.
 * Only these are valid when in AArch64 mode; in
 * AArch32 mode SPSRs are basically CPSR-format.
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
 #define HCR_TWEDEN (1ULL << 59)
 #define HCR_TWEDEL MAKE_64BIT_MASK(60, 4)

-#define HCRX_ENAS0 (1ULL << 0)
-#define HCRX_ENALS (1ULL << 1)
-#define HCRX_ENASR (1ULL << 2)
-#define HCRX_FNXS (1ULL << 3)
-#define HCRX_FGTNXS (1ULL << 4)
-#define HCRX_SMPME (1ULL << 5)
-#define HCRX_TALLINT (1ULL << 6)
-#define HCRX_VINMI (1ULL << 7)
-#define HCRX_VFNMI (1ULL << 8)
-#define HCRX_CMOW (1ULL << 9)
-#define HCRX_MCE2 (1ULL << 10)
-#define HCRX_MSCEN (1ULL << 11)
-
-#define HPFAR_NS (1ULL << 63)
-
 #define SCR_NS (1ULL << 0)
 #define SCR_IRQ (1ULL << 1)
 #define SCR_FIQ (1ULL << 2)
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
 #define SCR_GPF (1ULL << 48)
 #define SCR_NSE (1ULL << 62)

-#define HSTR_TTEE (1 << 16)
-#define HSTR_TJDBX (1 << 17)
-
-#define CNTHCTL_CNTVMASK (1 << 18)
-#define CNTHCTL_CNTPMASK (1 << 19)
-
 /* Return the current FPSCR value. */
 uint32_t vfp_get_fpscr(CPUARMState *env);
 void vfp_set_fpscr(CPUARMState *env, uint32_t val);
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ FIELD(DBGWCR, WT, 20, 1)
 FIELD(DBGWCR, MASK, 24, 5)
 FIELD(DBGWCR, SSCE, 29, 1)

+#define VTCR_NSW (1u << 29)
+#define VTCR_NSA (1u << 30)
+#define VSTCR_SW VTCR_NSW
+#define VSTCR_SA VTCR_NSA
+
+/* Bit definitions for CPACR (AArch32 only) */
+FIELD(CPACR, CP10, 20, 2)
+FIELD(CPACR, CP11, 22, 2)
+FIELD(CPACR, TRCDIS, 28, 1) /* matches CPACR_EL1.TTA */
+FIELD(CPACR, D32DIS, 30, 1) /* up to v7; RAZ in v8 */
+FIELD(CPACR, ASEDIS, 31, 1)
+
+/* Bit definitions for CPACR_EL1 (AArch64 only) */
+FIELD(CPACR_EL1, ZEN, 16, 2)
+FIELD(CPACR_EL1, FPEN, 20, 2)
+FIELD(CPACR_EL1, SMEN, 24, 2)
+FIELD(CPACR_EL1, TTA, 28, 1) /* matches CPACR.TRCDIS */
+
+/* Bit definitions for HCPTR (AArch32 only) */
+FIELD(HCPTR, TCP10, 10, 1)
+FIELD(HCPTR, TCP11, 11, 1)
+FIELD(HCPTR, TASE, 15, 1)
+FIELD(HCPTR, TTA, 20, 1)
+FIELD(HCPTR, TAM, 30, 1) /* matches CPTR_EL2.TAM */
+FIELD(HCPTR, TCPAC, 31, 1) /* matches CPTR_EL2.TCPAC */
+
+/* Bit definitions for CPTR_EL2 (AArch64 only) */
+FIELD(CPTR_EL2, TZ, 8, 1) /* !E2H */
+FIELD(CPTR_EL2, TFP, 10, 1) /* !E2H, matches HCPTR.TCP10 */
+FIELD(CPTR_EL2, TSM, 12, 1) /* !E2H */
+FIELD(CPTR_EL2, ZEN, 16, 2) /* E2H */
+FIELD(CPTR_EL2, FPEN, 20, 2) /* E2H */
+FIELD(CPTR_EL2, SMEN, 24, 2) /* E2H */
+FIELD(CPTR_EL2, TTA, 28, 1)
+FIELD(CPTR_EL2, TAM, 30, 1) /* matches HCPTR.TAM */
+FIELD(CPTR_EL2, TCPAC, 31, 1) /* matches HCPTR.TCPAC */
+
+/* Bit definitions for CPTR_EL3 (AArch64 only) */
+FIELD(CPTR_EL3, EZ, 8, 1)
+FIELD(CPTR_EL3, TFP, 10, 1)
+FIELD(CPTR_EL3, ESM, 12, 1)
+FIELD(CPTR_EL3, TTA, 20, 1)
+FIELD(CPTR_EL3, TAM, 30, 1)
+FIELD(CPTR_EL3, TCPAC, 31, 1)
+
+#define MDCR_MTPME (1U << 28)
+#define MDCR_TDCC (1U << 27)
+#define MDCR_HLP (1U << 26) /* MDCR_EL2 */
+#define MDCR_SCCD (1U << 23) /* MDCR_EL3 */
+#define MDCR_HCCD (1U << 23) /* MDCR_EL2 */
+#define MDCR_EPMAD (1U << 21)
+#define MDCR_EDAD (1U << 20)
+#define MDCR_TTRF (1U << 19)
+#define MDCR_STE (1U << 18) /* MDCR_EL3 */
+#define MDCR_SPME (1U << 17) /* MDCR_EL3 */
+#define MDCR_HPMD (1U << 17) /* MDCR_EL2 */
+#define MDCR_SDD (1U << 16)
+#define MDCR_SPD (3U << 14)
+#define MDCR_TDRA (1U << 11)
+#define MDCR_TDOSA (1U << 10)
+#define MDCR_TDA (1U << 9)
+#define MDCR_TDE (1U << 8)
+#define MDCR_HPME (1U << 7)
+#define MDCR_TPM (1U << 6)
+#define MDCR_TPMCR (1U << 5)
+#define MDCR_HPMN (0x1fU)
+
+/* Not all of the MDCR_EL3 bits are present in the 32-bit SDCR */
+#define SDCR_VALID_MASK (MDCR_MTPME | MDCR_TDCC | MDCR_SCCD | \
+ MDCR_EPMAD | MDCR_EDAD | MDCR_TTRF | \
+ MDCR_STE | MDCR_SPME | MDCR_SPD)
+
+#define TTBCR_N (7U << 0) /* TTBCR.EAE==0 */
+#define TTBCR_T0SZ (7U << 0) /* TTBCR.EAE==1 */
+#define TTBCR_PD0 (1U << 4)
+#define TTBCR_PD1 (1U << 5)
+#define TTBCR_EPD0 (1U << 7)
+#define TTBCR_IRGN0 (3U << 8)
+#define TTBCR_ORGN0 (3U << 10)
+#define TTBCR_SH0 (3U << 12)
+#define TTBCR_T1SZ (3U << 16)
+#define TTBCR_A1 (1U << 22)
+#define TTBCR_EPD1 (1U << 23)
+#define TTBCR_IRGN1 (3U << 24)
+#define TTBCR_ORGN1 (3U << 26)
+#define TTBCR_SH1 (1U << 28)
+#define TTBCR_EAE (1U << 31)
+
+FIELD(VTCR, T0SZ, 0, 6)
+FIELD(VTCR, SL0, 6, 2)
+FIELD(VTCR, IRGN0, 8, 2)
+FIELD(VTCR, ORGN0, 10, 2)
+FIELD(VTCR, SH0, 12, 2)
+FIELD(VTCR, TG0, 14, 2)
+FIELD(VTCR, PS, 16, 3)
+FIELD(VTCR, VS, 19, 1)
+FIELD(VTCR, HA, 21, 1)
+FIELD(VTCR, HD, 22, 1)
+FIELD(VTCR, HWU59, 25, 1)
+FIELD(VTCR, HWU60, 26, 1)
+FIELD(VTCR, HWU61, 27, 1)
+FIELD(VTCR, HWU62, 28, 1)
+FIELD(VTCR, NSW, 29, 1)
+FIELD(VTCR, NSA, 30, 1)
+FIELD(VTCR, DS, 32, 1)
+FIELD(VTCR, SL2, 33, 1)
+
+#define HCRX_ENAS0 (1ULL << 0)
+#define HCRX_ENALS (1ULL << 1)
+#define HCRX_ENASR (1ULL << 2)
+#define HCRX_FNXS (1ULL << 3)
+#define HCRX_FGTNXS (1ULL << 4)
+#define HCRX_SMPME (1ULL << 5)
+#define HCRX_TALLINT (1ULL << 6)
+#define HCRX_VINMI (1ULL << 7)
+#define HCRX_VFNMI (1ULL << 8)
+#define HCRX_CMOW (1ULL << 9)
+#define HCRX_MCE2 (1ULL << 10)
+#define HCRX_MSCEN (1ULL << 11)
+
+#define HPFAR_NS (1ULL << 63)
+
+#define HSTR_TTEE (1 << 16)
+#define HSTR_TJDBX (1 << 17)
+
+#define CNTHCTL_CNTVMASK (1 << 18)
+#define CNTHCTL_CNTPMASK (1 << 19)
+
 /* We use a few fake FSR values for internal purposes in M profile.
 * M profile cores don't have A/R format FSRs, but currently our
 * get_phys_addr() code assumes A/R profile and reports failures via
--
2.34.1
The timer _EL02 registers should UNDEF for invalid accesses from EL2
or EL3 when HCR_EL2.E2H == 0, not take a cp access trap. We were
delivering the exception to EL2 with the wrong syndrome.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301183219.2424889-3-peter.maydell@linaro.org
---
 target/arm/helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult e2h_access(CPUARMState *env, const ARMCPRegInfo *ri,
 return CP_ACCESS_OK;
 }
 if (!(arm_hcr_el2_eff(env) & HCR_E2H)) {
- return CP_ACCESS_TRAP;
+ return CP_ACCESS_TRAP_UNCATEGORIZED;
 }
 return CP_ACCESS_OK;
 }
--
2.34.1
We prefer the FIELD macro over ad-hoc #defines for register bits;
switch CNTHCTL to that style before we add any more bits.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301183219.2424889-4-peter.maydell@linaro.org
---
 target/arm/internals.h | 27 +++++++++++++++++++++++++--
 target/arm/helper.c | 9 ++++-----
 2 files changed, 29 insertions(+), 7 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ FIELD(VTCR, SL2, 33, 1)
 #define HSTR_TTEE (1 << 16)
 #define HSTR_TJDBX (1 << 17)

-#define CNTHCTL_CNTVMASK (1 << 18)
-#define CNTHCTL_CNTPMASK (1 << 19)
+/*
+ * Depending on the value of HCR_EL2.E2H, bits 0 and 1
+ * have different bit definitions, and EL1PCTEN might be
+ * bit 0 or bit 10. We use _E2H1 and _E2H0 suffixes to
+ * disambiguate if necessary.
+ */
+FIELD(CNTHCTL, EL0PCTEN_E2H1, 0, 1)
+FIELD(CNTHCTL, EL0VCTEN_E2H1, 1, 1)
+FIELD(CNTHCTL, EL1PCTEN_E2H0, 0, 1)
+FIELD(CNTHCTL, EL1PCEN_E2H0, 1, 1)
+FIELD(CNTHCTL, EVNTEN, 2, 1)
+FIELD(CNTHCTL, EVNTDIR, 3, 1)
+FIELD(CNTHCTL, EVNTI, 4, 4)
+FIELD(CNTHCTL, EL0VTEN, 8, 1)
+FIELD(CNTHCTL, EL0PTEN, 9, 1)
+FIELD(CNTHCTL, EL1PCTEN_E2H1, 10, 1)
+FIELD(CNTHCTL, EL1PTEN, 11, 1)
+FIELD(CNTHCTL, ECV, 12, 1)
+FIELD(CNTHCTL, EL1TVT, 13, 1)
+FIELD(CNTHCTL, EL1TVCT, 14, 1)
+FIELD(CNTHCTL, EL1NVPCT, 15, 1)
+FIELD(CNTHCTL, EL1NVVCT, 16, 1)
+FIELD(CNTHCTL, EVNTIS, 17, 1)
+FIELD(CNTHCTL, CNTVMASK, 18, 1)
+FIELD(CNTHCTL, CNTPMASK, 19, 1)

 /* We use a few fake FSR values for internal purposes in M profile.
 * M profile cores don't have A/R format FSRs, but currently our
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void gt_update_irq(ARMCPU *cpu, int timeridx)
 * It is RES0 in Secure and NonSecure state.
 */
 if ((ss == ARMSS_Root || ss == ARMSS_Realm) &&
- ((timeridx == GTIMER_VIRT && (cnthctl & CNTHCTL_CNTVMASK)) ||
- (timeridx == GTIMER_PHYS && (cnthctl & CNTHCTL_CNTPMASK)))) {
+ ((timeridx == GTIMER_VIRT && (cnthctl & R_CNTHCTL_CNTVMASK_MASK)) ||
+ (timeridx == GTIMER_PHYS && (cnthctl & R_CNTHCTL_CNTPMASK_MASK)))) {
 irqstate = 0;
 }

@@ -XXX,XX +XXX,XX @@ static void gt_cnthctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
 {
 ARMCPU *cpu = env_archcpu(env);
 uint32_t oldval = env->cp15.cnthctl_el2;
-
 raw_write(env, ri, value);

- if ((oldval ^ value) & CNTHCTL_CNTVMASK) {
+ if ((oldval ^ value) & R_CNTHCTL_CNTVMASK_MASK) {
 gt_update_irq(cpu, GTIMER_VIRT);
- } else if ((oldval ^ value) & CNTHCTL_CNTPMASK) {
+ } else if ((oldval ^ value) & R_CNTHCTL_CNTPMASK_MASK) {
 gt_update_irq(cpu, GTIMER_PHYS);
 }
 }
--
2.34.1
Don't allow the guest to write CNTHCTL_EL2 bits which don't exist.
This is not strictly architecturally required, but it is how we've
tended to implement registers more recently.

In particular, bits [19:18] are only present with FEAT_RME,
and bits [17:12] will only be present with FEAT_ECV.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301183219.2424889-5-peter.maydell@linaro.org
---
 target/arm/helper.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void gt_cnthctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
 {
 ARMCPU *cpu = env_archcpu(env);
 uint32_t oldval = env->cp15.cnthctl_el2;
+ uint32_t valid_mask =
+ R_CNTHCTL_EL0PCTEN_E2H1_MASK |
+ R_CNTHCTL_EL0VCTEN_E2H1_MASK |
+ R_CNTHCTL_EVNTEN_MASK |
+ R_CNTHCTL_EVNTDIR_MASK |
+ R_CNTHCTL_EVNTI_MASK |
+ R_CNTHCTL_EL0VTEN_MASK |
+ R_CNTHCTL_EL0PTEN_MASK |
+ R_CNTHCTL_EL1PCTEN_E2H1_MASK |
+ R_CNTHCTL_EL1PTEN_MASK;
+
+ if (cpu_isar_feature(aa64_rme, cpu)) {
+ valid_mask |= R_CNTHCTL_CNTVMASK_MASK | R_CNTHCTL_CNTPMASK_MASK;
+ }
+
+ /* Clear RES0 bits */
+ value &= valid_mask;
+
 raw_write(env, ri, value);

 if ((oldval ^ value) & R_CNTHCTL_CNTVMASK_MASK) {
--
2.34.1
The functionality defined by ID_AA64MMFR0_EL1.ECV == 1 is:
 * four new trap bits for various counter and timer registers
 * the CNTHCTL_EL2.EVNTIS and CNTKCTL_EL1.EVNTIS bits which control
   scaling of the event stream. This is a no-op for us, because we don't
   implement the event stream (our WFE is a NOP): all we need to do is
   allow CNTHCTL_EL2.EVNTIS to be read and written.
 * extensions to PMSCR_EL1.PCT, PMSCR_EL2.PCT, TRFCR_EL1.TS and
   TRFCR_EL2.TS: these are all no-ops for us, because we don't implement
   FEAT_SPE or FEAT_TRF.
 * new registers CNTPCTSS_EL0 and CNTVCTSS_EL0 which are
   "self-synchronizing" views of the CNTPCT_EL0 and CNTVCT_EL0, meaning
   that no barriers are needed around their accesses. For us these
   are just the same as the normal views, because all our sysregs are
   inherently self-synchronizing.

In this commit we implement the trap handling and permit the new
CNTHCTL_EL2 bits to be written.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301183219.2424889-6-peter.maydell@linaro.org
---
 target/arm/cpu-features.h | 5 ++++
 target/arm/helper.c | 51 +++++++++++++++++++++++++++++++++++----
 2 files changed, 51 insertions(+), 5 deletions(-)

diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-features.h
+++ b/target/arm/cpu-features.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fgt(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, FGT) != 0;
 }

+static inline bool isar_feature_aa64_ecv_traps(const ARMISARegisters *id)
+{
+ return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, ECV) > 0;
+}
+
 static inline bool isar_feature_aa64_vh(const ARMISARegisters *id)
 {
 return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, VH) != 0;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_counter_access(CPUARMState *env, int timeridx,
 : !extract32(env->cp15.cnthctl_el2, 0, 1))) {
 return CP_ACCESS_TRAP_EL2;
 }
+ if (has_el2 && timeridx == GTIMER_VIRT) {
+ if (FIELD_EX64(env->cp15.cnthctl_el2, CNTHCTL, EL1TVCT)) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ }
 break;
 }
 return CP_ACCESS_OK;
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_timer_access(CPUARMState *env, int timeridx,
 }
 }
 }
+ if (has_el2 && timeridx == GTIMER_VIRT) {
+ if (FIELD_EX64(env->cp15.cnthctl_el2, CNTHCTL, EL1TVT)) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ }
 break;
 }
 return CP_ACCESS_OK;
@@ -XXX,XX +XXX,XX @@ static void gt_cnthctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
 if (cpu_isar_feature(aa64_rme, cpu)) {
 valid_mask |= R_CNTHCTL_CNTVMASK_MASK | R_CNTHCTL_CNTPMASK_MASK;
 }
+ if (cpu_isar_feature(aa64_ecv_traps, cpu)) {
+ valid_mask |=
+ R_CNTHCTL_EL1TVT_MASK |
+ R_CNTHCTL_EL1TVCT_MASK |
+ R_CNTHCTL_EL1NVPCT_MASK |
+ R_CNTHCTL_EL1NVVCT_MASK |
+ R_CNTHCTL_EVNTIS_MASK;
+ }

 /* Clear RES0 bits */
 value &= valid_mask;
@@ -XXX,XX +XXX,XX @@ static CPAccessResult e2h_access(CPUARMState *env, const ARMCPRegInfo *ri,
 {
 if (arm_current_el(env) == 1) {
 /* This must be a FEAT_NV access */
- /* TODO: FEAT_ECV will need to check CNTHCTL_EL2 here */
 return CP_ACCESS_OK;
 }
 if (!(arm_hcr_el2_eff(env) & HCR_E2H)) {
@@ -XXX,XX +XXX,XX @@ static CPAccessResult e2h_access(CPUARMState *env, const ARMCPRegInfo *ri,
 return CP_ACCESS_OK;
 }

+static CPAccessResult access_el1nvpct(CPUARMState *env, const ARMCPRegInfo *ri,
+ bool isread)
+{
+ if (arm_current_el(env) == 1) {
+ /* This must be a FEAT_NV access with NVx == 101 */
+ if (FIELD_EX64(env->cp15.cnthctl_el2, CNTHCTL, EL1NVPCT)) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ }
+ return e2h_access(env, ri, isread);
+}
+
+static CPAccessResult access_el1nvvct(CPUARMState *env, const ARMCPRegInfo *ri,
+ bool isread)
+{
+ if (arm_current_el(env) == 1) {
+ /* This must be a FEAT_NV access with NVx == 101 */
+ if (FIELD_EX64(env->cp15.cnthctl_el2, CNTHCTL, EL1NVVCT)) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ }
+ return e2h_access(env, ri, isread);
+}
+
 /* Test if system register redirection is to occur in the current state. */
 static bool redirect_for_e2h(CPUARMState *env)
 {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vhe_reginfo[] = {
 { .name = "CNTP_CTL_EL02", .state = ARM_CP_STATE_AA64,
 .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 2, .opc2 = 1,
 .type = ARM_CP_IO | ARM_CP_ALIAS,
- .access = PL2_RW, .accessfn = e2h_access,
+ .access = PL2_RW, .accessfn = access_el1nvpct,
 .nv2_redirect_offset = 0x180 | NV2_REDIR_NO_NV1,
 .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].ctl),
 .writefn = gt_phys_ctl_write, .raw_writefn = raw_write },
 { .name = "CNTV_CTL_EL02", .state = ARM_CP_STATE_AA64,
 .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 3, .opc2 = 1,
 .type = ARM_CP_IO | ARM_CP_ALIAS,
- .access = PL2_RW, .accessfn = e2h_access,
+ .access = PL2_RW, .accessfn = access_el1nvvct,
 .nv2_redirect_offset = 0x170 | NV2_REDIR_NO_NV1,
 .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].ctl),
 .writefn = gt_virt_ctl_write, .raw_writefn = raw_write },
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vhe_reginfo[] = {
 .type = ARM_CP_IO | ARM_CP_ALIAS,
 .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].cval),
 .nv2_redirect_offset = 0x178 | NV2_REDIR_NO_NV1,
- .access = PL2_RW, .accessfn = e2h_access,
+ .access = PL2_RW, .accessfn = access_el1nvpct,
 .writefn = gt_phys_cval_write, .raw_writefn = raw_write },
 { .name = "CNTV_CVAL_EL02", .state = ARM_CP_STATE_AA64,
 .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 3, .opc2 = 2,
 .type = ARM_CP_IO | ARM_CP_ALIAS,
 .nv2_redirect_offset = 0x168 | NV2_REDIR_NO_NV1,
 .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].cval),
- .access = PL2_RW, .accessfn = e2h_access,
+ .access = PL2_RW, .accessfn = access_el1nvvct,
 .writefn = gt_virt_cval_write, .raw_writefn = raw_write },
 #endif
 };
--
2.34.1
For FEAT_ECV, new registers CNTPCTSS_EL0 and CNTVCTSS_EL0 are
defined, which are "self-synchronized" views of the physical and
virtual counts as seen in the CNTPCT_EL0 and CNTVCT_EL0 registers
(meaning that no barriers are needed around accesses to them to
ensure that reads of them do not occur speculatively and out-of-order
with other instructions).

For QEMU, all our system registers are self-synchronized, so we can
simply copy the existing implementation of CNTPCT_EL0 and CNTVCT_EL0
to the new register encodings.

This means we now implement all the functionality required for
ID_AA64MMFR0_EL1.ECV == 0b0001.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301183219.2424889-7-peter.maydell@linaro.org
---
 target/arm/helper.c | 43 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
     },
 };
 
+/*
+ * FEAT_ECV adds extra views of CNTVCT_EL0 and CNTPCT_EL0 which
+ * are "self-synchronizing". For QEMU all sysregs are self-synchronizing,
+ * so our implementations here are identical to the normal registers.
+ */
+static const ARMCPRegInfo gen_timer_ecv_cp_reginfo[] = {
+    { .name = "CNTVCTSS", .cp = 15, .crm = 14, .opc1 = 9,
+      .access = PL0_R, .type = ARM_CP_64BIT | ARM_CP_NO_RAW | ARM_CP_IO,
+      .accessfn = gt_vct_access,
+      .readfn = gt_virt_cnt_read, .resetfn = arm_cp_reset_ignore,
+    },
+    { .name = "CNTVCTSS_EL0", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 0, .opc2 = 6,
+      .access = PL0_R, .type = ARM_CP_NO_RAW | ARM_CP_IO,
+      .accessfn = gt_vct_access, .readfn = gt_virt_cnt_read,
+    },
+    { .name = "CNTPCTSS", .cp = 15, .crm = 14, .opc1 = 8,
+      .access = PL0_R, .type = ARM_CP_64BIT | ARM_CP_NO_RAW | ARM_CP_IO,
+      .accessfn = gt_pct_access,
+      .readfn = gt_cnt_read, .resetfn = arm_cp_reset_ignore,
+    },
+    { .name = "CNTPCTSS_EL0", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 0, .opc2 = 5,
+      .access = PL0_R, .type = ARM_CP_NO_RAW | ARM_CP_IO,
+      .accessfn = gt_pct_access, .readfn = gt_cnt_read,
+    },
+};
+
 #else
 
 /*
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
     },
 };
 
+/*
+ * CNTVCTSS_EL0 has the same trap conditions as CNTVCT_EL0, so it also
+ * is exposed to userspace by Linux.
+ */
+static const ARMCPRegInfo gen_timer_ecv_cp_reginfo[] = {
+    { .name = "CNTVCTSS_EL0", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 0, .opc2 = 6,
+      .access = PL0_R, .type = ARM_CP_NO_RAW | ARM_CP_IO,
+      .readfn = gt_virt_cnt_read,
+    },
+};
+
 #endif
 
 static void par_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) {
         define_arm_cp_regs(cpu, generic_timer_cp_reginfo);
     }
+    if (cpu_isar_feature(aa64_ecv_traps, cpu)) {
+        define_arm_cp_regs(cpu, gen_timer_ecv_cp_reginfo);
+    }
     if (arm_feature(env, ARM_FEATURE_VAPA)) {
         ARMCPRegInfo vapa_cp_reginfo[] = {
             { .name = "PAR", .cp = 15, .crn = 7, .crm = 4, .opc1 = 0, .opc2 = 0,
-- 
2.34.1
When ID_AA64MMFR0_EL1.ECV is 0b0010, a new register CNTPOFF_EL2 is
implemented. This is similar to the existing CNTVOFF_EL2, except
that it controls a hypervisor-adjustable offset made to the physical
counter and timer.

Implement the handling for this register, which includes control/trap
bits in SCR_EL3 and CNTHCTL_EL2.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301183219.2424889-8-peter.maydell@linaro.org
---
 target/arm/cpu-features.h |  5 +++
 target/arm/cpu.h          |  1 +
 target/arm/helper.c       | 68 +++++++++++++++++++++++++++++++++++++--
 target/arm/trace-events   |  1 +
 4 files changed, 73 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-features.h
+++ b/target/arm/cpu-features.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_ecv_traps(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, ECV) > 0;
 }
 
+static inline bool isar_feature_aa64_ecv(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, ECV) > 1;
+}
+
 static inline bool isar_feature_aa64_vh(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, VH) != 0;
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
         uint64_t c14_cntkctl; /* Timer Control register */
         uint64_t cnthctl_el2; /* Counter/Timer Hyp Control register */
         uint64_t cntvoff_el2; /* Counter Virtual Offset register */
+        uint64_t cntpoff_el2; /* Counter Physical Offset register */
         ARMGenericTimer c14_timer[NUM_GTIMERS];
         uint32_t c15_cpar; /* XScale Coprocessor Access Register */
         uint32_t c15_ticonfig; /* TI925T configuration byte.  */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         if (cpu_isar_feature(aa64_rme, cpu)) {
             valid_mask |= SCR_NSE | SCR_GPF;
         }
+        if (cpu_isar_feature(aa64_ecv, cpu)) {
+            valid_mask |= SCR_ECVEN;
+        }
     } else {
         valid_mask &= ~(SCR_RW | SCR_ST);
         if (cpu_isar_feature(aa32_ras, cpu)) {
@@ -XXX,XX +XXX,XX @@ void gt_rme_post_el_change(ARMCPU *cpu, void *ignored)
     gt_update_irq(cpu, GTIMER_PHYS);
 }
 
+static uint64_t gt_phys_raw_cnt_offset(CPUARMState *env)
+{
+    if ((env->cp15.scr_el3 & SCR_ECVEN) &&
+        FIELD_EX64(env->cp15.cnthctl_el2, CNTHCTL, ECV) &&
+        arm_is_el2_enabled(env) &&
+        (arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
+        return env->cp15.cntpoff_el2;
+    }
+    return 0;
+}
+
+static uint64_t gt_phys_cnt_offset(CPUARMState *env)
+{
+    if (arm_current_el(env) >= 2) {
+        return 0;
+    }
+    return gt_phys_raw_cnt_offset(env);
+}
+
 static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
 {
     ARMGenericTimer *gt = &cpu->env.cp15.c14_timer[timeridx];
@@ -XXX,XX +XXX,XX @@ static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
          * reset timer to when ISTATUS next has to change
          */
         uint64_t offset = timeridx == GTIMER_VIRT ?
-            cpu->env.cp15.cntvoff_el2 : 0;
+            cpu->env.cp15.cntvoff_el2 : gt_phys_raw_cnt_offset(&cpu->env);
         uint64_t count = gt_get_countervalue(&cpu->env);
         /* Note that this must be unsigned 64 bit arithmetic: */
         int istatus = count - offset >= gt->cval;
@@ -XXX,XX +XXX,XX @@ static void gt_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri,
 
 static uint64_t gt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
-    return gt_get_countervalue(env);
+    return gt_get_countervalue(env) - gt_phys_cnt_offset(env);
 }
 
 static uint64_t gt_virt_cnt_offset(CPUARMState *env)
@@ -XXX,XX +XXX,XX @@ static uint64_t gt_tval_read(CPUARMState *env, const ARMCPRegInfo *ri,
     case GTIMER_HYPVIRT:
         offset = gt_virt_cnt_offset(env);
         break;
+    case GTIMER_PHYS:
+        offset = gt_phys_cnt_offset(env);
+        break;
     }
 
     return (uint32_t)(env->cp15.c14_timer[timeridx].cval -
@@ -XXX,XX +XXX,XX @@ static void gt_tval_write(CPUARMState *env, const ARMCPRegInfo *ri,
     case GTIMER_HYPVIRT:
         offset = gt_virt_cnt_offset(env);
         break;
+    case GTIMER_PHYS:
+        offset = gt_phys_cnt_offset(env);
+        break;
     }
 
     trace_arm_gt_tval_write(timeridx, value);
@@ -XXX,XX +XXX,XX @@ static void gt_cnthctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
             R_CNTHCTL_EL1NVVCT_MASK |
             R_CNTHCTL_EVNTIS_MASK;
     }
+    if (cpu_isar_feature(aa64_ecv, cpu)) {
+        valid_mask |= R_CNTHCTL_ECV_MASK;
+    }
 
     /* Clear RES0 bits */
     value &= valid_mask;
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gen_timer_ecv_cp_reginfo[] = {
     },
 };
 
+static CPAccessResult gt_cntpoff_access(CPUARMState *env,
+                                        const ARMCPRegInfo *ri,
+                                        bool isread)
+{
+    if (arm_current_el(env) == 2 && !(env->cp15.scr_el3 & SCR_ECVEN)) {
+        return CP_ACCESS_TRAP_EL3;
+    }
+    return CP_ACCESS_OK;
+}
+
+static void gt_cntpoff_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                             uint64_t value)
+{
+    ARMCPU *cpu = env_archcpu(env);
+
+    trace_arm_gt_cntpoff_write(value);
+    raw_write(env, ri, value);
+    gt_recalc_timer(cpu, GTIMER_PHYS);
+}
+
+static const ARMCPRegInfo gen_timer_cntpoff_reginfo = {
+    .name = "CNTPOFF_EL2", .state = ARM_CP_STATE_AA64,
+    .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 0, .opc2 = 6,
+    .access = PL2_RW, .type = ARM_CP_IO, .resetvalue = 0,
+    .accessfn = gt_cntpoff_access, .writefn = gt_cntpoff_write,
+    .nv2_redirect_offset = 0x1a8,
+    .fieldoffset = offsetof(CPUARMState, cp15.cntpoff_el2),
+};
 #else
 
 /*
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (cpu_isar_feature(aa64_ecv_traps, cpu)) {
         define_arm_cp_regs(cpu, gen_timer_ecv_cp_reginfo);
     }
+#ifndef CONFIG_USER_ONLY
+    if (cpu_isar_feature(aa64_ecv, cpu)) {
+        define_one_arm_cp_reg(cpu, &gen_timer_cntpoff_reginfo);
+    }
+#endif
     if (arm_feature(env, ARM_FEATURE_VAPA)) {
         ARMCPRegInfo vapa_cp_reginfo[] = {
             { .name = "PAR", .cp = 15, .crn = 7, .crm = 4, .opc1 = 0, .opc2 = 0,
diff --git a/target/arm/trace-events b/target/arm/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/trace-events
+++ b/target/arm/trace-events
@@ -XXX,XX +XXX,XX @@ arm_gt_tval_write(int timer, uint64_t value) "gt_tval_write: timer %d value 0x%"
 arm_gt_ctl_write(int timer, uint64_t value) "gt_ctl_write: timer %d value 0x%" PRIx64
 arm_gt_imask_toggle(int timer) "gt_ctl_write: timer %d IMASK toggle"
 arm_gt_cntvoff_write(uint64_t value) "gt_cntvoff_write: value 0x%" PRIx64
+arm_gt_cntpoff_write(uint64_t value) "gt_cntpoff_write: value 0x%" PRIx64
 arm_gt_update_irq(int timer, int irqstate) "gt_update_irq: timer %d irqstate %d"
 
 # kvm.c
-- 
2.34.1
Enable all FEAT_ECV features on the 'max' CPU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301183219.2424889-9-peter.maydell@linaro.org
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/tcg/cpu64.c        | 1 +
 2 files changed, 2 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_DotProd (Advanced SIMD dot product instructions)
 - FEAT_DoubleFault (Double Fault Extension)
 - FEAT_E0PD (Preventing EL0 access to halves of address maps)
+- FEAT_ECV (Enhanced Counter Virtualization)
 - FEAT_EPAC (Enhanced pointer authentication)
 - FEAT_ETS (Enhanced Translation Synchronization)
 - FEAT_EVT (Enhanced Virtualization Traps)
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/cpu64.c
+++ b/target/arm/tcg/cpu64.c
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64MMFR0, TGRAN64_2, 2); /* 64k stage2 supported */
     t = FIELD_DP64(t, ID_AA64MMFR0, TGRAN4_2, 2); /* 4k stage2 supported */
     t = FIELD_DP64(t, ID_AA64MMFR0, FGT, 1); /* FEAT_FGT */
+    t = FIELD_DP64(t, ID_AA64MMFR0, ECV, 2); /* FEAT_ECV */
     cpu->isar.id_aa64mmfr0 = t;
 
     t = cpu->isar.id_aa64mmfr1;
-- 
2.34.1
From: Inès Varhol <ines.varhol@telecom-paris.fr>

Features supported:
- the 8 STM32L4x5 GPIOs are initialized with their reset values
  (except IDR, see below)
- input mode: setting a pin in input mode "externally" (using input
  irqs) results in an out irq (transmitted to SYSCFG)
- output mode: setting a bit in ODR sets the corresponding out irq
  (if this line is configured in output mode)
- pull-up, pull-down
- push-pull, open-drain

Differences from the real GPIOs:
- Alternate Function and Analog mode aren't implemented:
  pins in AF/Analog behave like pins in input mode
- floating pins stay at their last value
- register IDR reset values differ from the real ones:
  values are coherent with the other registers' reset values
  and the fact that AF/Analog modes aren't implemented
- setting I/O output speed isn't supported
- locking port bits isn't supported
- ADC function isn't supported
- GPIOH has 16 pins instead of 2 pins
- writing to registers LCKR, AFRL, AFRH and ASCR is ineffective

Signed-off-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20240305210444.310665-2-ines.varhol@telecom-paris.fr
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS                        |   1 +
 docs/system/arm/b-l475e-iot01a.rst |   2 +-
 include/hw/gpio/stm32l4x5_gpio.h   |  70 +++++
 hw/gpio/stm32l4x5_gpio.c           | 477 +++++++++++++++++++++++++++++
 hw/gpio/Kconfig                    |   3 +
 hw/gpio/meson.build                |   1 +
 hw/gpio/trace-events               |   6 +
 7 files changed, 559 insertions(+), 1 deletion(-)
 create mode 100644 include/hw/gpio/stm32l4x5_gpio.h
 create mode 100644 hw/gpio/stm32l4x5_gpio.c

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/arm/stm32l4x5_soc.c
 F: hw/misc/stm32l4x5_exti.c
 F: hw/misc/stm32l4x5_syscfg.c
 F: hw/misc/stm32l4x5_rcc.c
+F: hw/gpio/stm32l4x5_gpio.c
 F: include/hw/*/stm32l4x5_*.h
 
 B-L475E-IOT01A IoT Node
diff --git a/docs/system/arm/b-l475e-iot01a.rst b/docs/system/arm/b-l475e-iot01a.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/b-l475e-iot01a.rst
+++ b/docs/system/arm/b-l475e-iot01a.rst
@@ -XXX,XX +XXX,XX @@ Currently B-L475E-IOT01A machine's only supports the following devices:
 - STM32L4x5 EXTI (Extended interrupts and events controller)
 - STM32L4x5 SYSCFG (System configuration controller)
 - STM32L4x5 RCC (Reset and clock control)
+- STM32L4x5 GPIOs (General-purpose I/Os)
 
 Missing devices
 """""""""""""""
@@ -XXX,XX +XXX,XX @@ Missing devices
 The B-L475E-IOT01A does *not* support the following devices:
 
 - Serial ports (UART)
-- General-purpose I/Os (GPIO)
 - Analog to Digital Converter (ADC)
 - SPI controller
 - Timer controller (TIMER)
diff --git a/include/hw/gpio/stm32l4x5_gpio.h b/include/hw/gpio/stm32l4x5_gpio.h
328
- armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
329
- env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
330
- stacked_ok = false;
331
- }
332
-
333
- if (!splimviol && stacked_ok) {
334
- /* We only stack if the stack limit wasn't violated */
335
- int i;
336
- ARMMMUIdx mmu_idx;
337
-
338
- mmu_idx = arm_v7m_mmu_idx_all(env, is_secure, is_priv, negpri);
339
- for (i = 0; i < (ts ? 32 : 16); i += 2) {
340
- uint64_t dn = *aa32_vfp_dreg(env, i / 2);
341
- uint32_t faddr = fpcar + 4 * i;
342
- uint32_t slo = extract64(dn, 0, 32);
343
- uint32_t shi = extract64(dn, 32, 32);
344
-
345
- if (i >= 16) {
346
- faddr += 8; /* skip the slot for the FPSCR */
347
- }
348
- stacked_ok = stacked_ok &&
349
- v7m_stack_write(cpu, faddr, slo, mmu_idx, STACK_LAZYFP) &&
350
- v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, STACK_LAZYFP);
351
- }
352
-
353
- stacked_ok = stacked_ok &&
354
- v7m_stack_write(cpu, fpcar + 0x40,
355
- vfp_get_fpscr(env), mmu_idx, STACK_LAZYFP);
356
- }
357
-
358
- /*
359
- * We definitely pended an exception, but it's possible that it
360
- * might not be able to be taken now. If its priority permits us
361
- * to take it now, then we must not update the LSPACT or FP regs,
362
- * but instead jump out to take the exception immediately.
363
- * If it's just pending and won't be taken until the current
364
- * handler exits, then we do update LSPACT and the FP regs.
365
- */
366
- take_exception = !stacked_ok &&
367
- armv7m_nvic_can_take_pending_exception(env->nvic);
368
-
369
- qemu_mutex_unlock_iothread();
370
-
371
- if (take_exception) {
372
- raise_exception_ra(env, EXCP_LAZYFP, 0, 1, GETPC());
373
- }
374
-
375
- env->v7m.fpccr[is_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
376
-
377
- if (ts) {
378
- /* Clear s0 to s31 and the FPSCR */
379
- int i;
380
-
381
- for (i = 0; i < 32; i += 2) {
382
- *aa32_vfp_dreg(env, i / 2) = 0;
383
- }
384
- vfp_set_fpscr(env, 0);
385
- }
386
- /*
387
- * Otherwise s0 to s15 and FPSCR are UNKNOWN; we choose to leave them
388
- * unchanged.
389
- */
390
-}
391
-
-/*
- * Write to v7M CONTROL.SPSEL bit for the specified security bank.
- * This may change the current stack pointer between Main and Process
- * stack pointers if it is done for the CONTROL register for the current
- * security state.
- */
-static void write_v7m_control_spsel_for_secstate(CPUARMState *env,
-                                                 bool new_spsel,
-                                                 bool secstate)
-{
-    bool old_is_psp = v7m_using_psp(env);
-
-    env->v7m.control[secstate] =
-        deposit32(env->v7m.control[secstate],
-                  R_V7M_CONTROL_SPSEL_SHIFT,
-                  R_V7M_CONTROL_SPSEL_LENGTH, new_spsel);
-
-    if (secstate == env->v7m.secure) {
-        bool new_is_psp = v7m_using_psp(env);
-        uint32_t tmp;
-
-        if (old_is_psp != new_is_psp) {
-            tmp = env->v7m.other_sp;
-            env->v7m.other_sp = env->regs[13];
-            env->regs[13] = tmp;
-        }
-    }
-}
-
-/*
- * Write to v7M CONTROL.SPSEL bit. This may change the current
- * stack pointer between Main and Process stack pointers.
- */
-static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel)
-{
-    write_v7m_control_spsel_for_secstate(env, new_spsel, env->v7m.secure);
-}
-
-void write_v7m_exception(CPUARMState *env, uint32_t new_exc)
-{
-    /*
-     * Write a new value to v7m.exception, thus transitioning into or out
-     * of Handler mode; this may result in a change of active stack pointer.
-     */
-    bool new_is_psp, old_is_psp = v7m_using_psp(env);
-    uint32_t tmp;
-
-    env->v7m.exception = new_exc;
-
-    new_is_psp = v7m_using_psp(env);
-
-    if (old_is_psp != new_is_psp) {
-        tmp = env->v7m.other_sp;
-        env->v7m.other_sp = env->regs[13];
-        env->regs[13] = tmp;
-    }
-}
-
-/* Switch M profile security state between NS and S */
-static void switch_v7m_security_state(CPUARMState *env, bool new_secstate)
-{
-    uint32_t new_ss_msp, new_ss_psp;
-
-    if (env->v7m.secure == new_secstate) {
-        return;
-    }
-
-    /*
-     * All the banked state is accessed by looking at env->v7m.secure
-     * except for the stack pointer; rearrange the SP appropriately.
-     */
-    new_ss_msp = env->v7m.other_ss_msp;
-    new_ss_psp = env->v7m.other_ss_psp;
-
-    if (v7m_using_psp(env)) {
-        env->v7m.other_ss_psp = env->regs[13];
-        env->v7m.other_ss_msp = env->v7m.other_sp;
-    } else {
-        env->v7m.other_ss_msp = env->regs[13];
-        env->v7m.other_ss_psp = env->v7m.other_sp;
-    }
-
-    env->v7m.secure = new_secstate;
-
-    if (v7m_using_psp(env)) {
-        env->regs[13] = new_ss_psp;
-        env->v7m.other_sp = new_ss_msp;
-    } else {
-        env->regs[13] = new_ss_msp;
-        env->v7m.other_sp = new_ss_psp;
-    }
-}
-
-void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
-{
-    /*
-     * Handle v7M BXNS:
-     *  - if the return value is a magic value, do exception return (like BX)
-     *  - otherwise bit 0 of the return value is the target security state
-     */
-    uint32_t min_magic;
-
-    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
-        /* Covers FNC_RETURN and EXC_RETURN magic */
-        min_magic = FNC_RETURN_MIN_MAGIC;
-    } else {
-        /* EXC_RETURN magic only */
-        min_magic = EXC_RETURN_MIN_MAGIC;
-    }
-
-    if (dest >= min_magic) {
-        /*
-         * This is an exception return magic value; put it where
-         * do_v7m_exception_exit() expects and raise EXCEPTION_EXIT.
-         * Note that if we ever add gen_ss_advance() singlestep support to
-         * M profile this should count as an "instruction execution complete"
-         * event (compare gen_bx_excret_final_code()).
-         */
-        env->regs[15] = dest & ~1;
-        env->thumb = dest & 1;
-        HELPER(exception_internal)(env, EXCP_EXCEPTION_EXIT);
-        /* notreached */
-    }
-
-    /* translate.c should have made BXNS UNDEF unless we're secure */
-    assert(env->v7m.secure);
-
-    if (!(dest & 1)) {
-        env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
-    }
-    switch_v7m_security_state(env, dest & 1);
-    env->thumb = 1;
-    env->regs[15] = dest & ~1;
-}
-
-void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
-{
-    /*
-     * Handle v7M BLXNS:
-     *  - bit 0 of the destination address is the target security state
-     */
-
-    /* At this point regs[15] is the address just after the BLXNS */
-    uint32_t nextinst = env->regs[15] | 1;
-    uint32_t sp = env->regs[13] - 8;
-    uint32_t saved_psr;
-
-    /* translate.c will have made BLXNS UNDEF unless we're secure */
-    assert(env->v7m.secure);
-
-    if (dest & 1) {
-        /*
-         * Target is Secure, so this is just a normal BLX,
-         * except that the low bit doesn't indicate Thumb/not.
-         */
-        env->regs[14] = nextinst;
-        env->thumb = 1;
-        env->regs[15] = dest & ~1;
-        return;
-    }
-
-    /* Target is non-secure: first push a stack frame */
-    if (!QEMU_IS_ALIGNED(sp, 8)) {
-        qemu_log_mask(LOG_GUEST_ERROR,
-                      "BLXNS with misaligned SP is UNPREDICTABLE\n");
-    }
-
-    if (sp < v7m_sp_limit(env)) {
-        raise_exception(env, EXCP_STKOF, 0, 1);
-    }
-
-    saved_psr = env->v7m.exception;
-    if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK) {
-        saved_psr |= XPSR_SFPA;
-    }
-
-    /* Note that these stores can throw exceptions on MPU faults */
-    cpu_stl_data(env, sp, nextinst);
-    cpu_stl_data(env, sp + 4, saved_psr);
-
-    env->regs[13] = sp;
-    env->regs[14] = 0xfeffffff;
-    if (arm_v7m_is_handler_mode(env)) {
-        /*
-         * Write a dummy value to IPSR, to avoid leaking the current secure
-         * exception number to non-secure code. This is guaranteed not
-         * to cause write_v7m_exception() to actually change stacks.
-         */
-        write_v7m_exception(env, 1);
-    }
-    env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
-    switch_v7m_security_state(env, 0);
-    env->thumb = 1;
-    env->regs[15] = dest;
-}
-
-static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode,
-                                bool spsel)
-{
-    /*
-     * Return a pointer to the location where we currently store the
-     * stack pointer for the requested security state and thread mode.
-     * This pointer will become invalid if the CPU state is updated
-     * such that the stack pointers are switched around (eg changing
-     * the SPSEL control bit).
-     * Compare the v8M ARM ARM pseudocode LookUpSP_with_security_mode().
-     * Unlike that pseudocode, we require the caller to pass us in the
-     * SPSEL control bit value; this is because we also use this
-     * function in handling of pushing of the callee-saves registers
-     * part of the v8M stack frame (pseudocode PushCalleeStack()),
-     * and in the tailchain codepath the SPSEL bit comes from the exception
-     * return magic LR value from the previous exception. The pseudocode
-     * opencodes the stack-selection in PushCalleeStack(), but we prefer
-     * to make this utility function generic enough to do the job.
-     */
-    bool want_psp = threadmode && spsel;
-
-    if (secure == env->v7m.secure) {
-        if (want_psp == v7m_using_psp(env)) {
-            return &env->regs[13];
-        } else {
-            return &env->v7m.other_sp;
-        }
-    } else {
-        if (want_psp) {
-            return &env->v7m.other_ss_psp;
-        } else {
-            return &env->v7m.other_ss_msp;
-        }
-    }
-}
-
-static bool arm_v7m_load_vector(ARMCPU *cpu, int exc, bool targets_secure,
-                                uint32_t *pvec)
-{
-    CPUState *cs = CPU(cpu);
-    CPUARMState *env = &cpu->env;
-    MemTxResult result;
-    uint32_t addr = env->v7m.vecbase[targets_secure] + exc * 4;
-    uint32_t vector_entry;
-    MemTxAttrs attrs = {};
-    ARMMMUIdx mmu_idx;
-    bool exc_secure;
-
-    mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targets_secure, true);
-
-    /*
-     * We don't do a get_phys_addr() here because the rules for vector
-     * loads are special: they always use the default memory map, and
-     * the default memory map permits reads from all addresses.
-     * Since there's no easy way to pass through to pmsav8_mpu_lookup()
-     * that we want this special case which would always say "yes",
-     * we just do the SAU lookup here followed by a direct physical load.
-     */
-    attrs.secure = targets_secure;
-    attrs.user = false;
-
-    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
-        V8M_SAttributes sattrs = {};
-
-        v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs);
-        if (sattrs.ns) {
-            attrs.secure = false;
-        } else if (!targets_secure) {
-            /* NS access to S memory */
-            goto load_fail;
-        }
-    }
-
-    vector_entry = address_space_ldl(arm_addressspace(cs, attrs), addr,
-                                     attrs, &result);
-    if (result != MEMTX_OK) {
-        goto load_fail;
-    }
-    *pvec = vector_entry;
-    return true;
-
-load_fail:
-    /*
-     * All vector table fetch fails are reported as HardFault, with
-     * HFSR.VECTTBL and .FORCED set. (FORCED is set because
-     * technically the underlying exception is a MemManage or BusFault
-     * that is escalated to HardFault.) This is a terminal exception,
-     * so we will either take the HardFault immediately or else enter
-     * lockup (the latter case is handled in armv7m_nvic_set_pending_derived()).
-     */
-    exc_secure = targets_secure ||
-        !(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK);
-    env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK | R_V7M_HFSR_FORCED_MASK;
-    armv7m_nvic_set_pending_derived(env->nvic, ARMV7M_EXCP_HARD, exc_secure);
-    return false;
-}
-
-static uint32_t v7m_integrity_sig(CPUARMState *env, uint32_t lr)
-{
-    /*
-     * Return the integrity signature value for the callee-saves
-     * stack frame section. @lr is the exception return payload/LR value
-     * whose FType bit forms bit 0 of the signature if FP is present.
-     */
-    uint32_t sig = 0xfefa125a;
-
-    if (!arm_feature(env, ARM_FEATURE_VFP) || (lr & R_V7M_EXCRET_FTYPE_MASK)) {
-        sig |= 1;
-    }
-    return sig;
-}
-
-static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
-                                  bool ignore_faults)
-{
-    /*
-     * For v8M, push the callee-saves register part of the stack frame.
-     * Compare the v8M pseudocode PushCalleeStack().
-     * In the tailchaining case this may not be the current stack.
-     */
-    CPUARMState *env = &cpu->env;
-    uint32_t *frame_sp_p;
-    uint32_t frameptr;
-    ARMMMUIdx mmu_idx;
-    bool stacked_ok;
-    uint32_t limit;
-    bool want_psp;
-    uint32_t sig;
-    StackingMode smode = ignore_faults ? STACK_IGNFAULTS : STACK_NORMAL;
-
-    if (dotailchain) {
-        bool mode = lr & R_V7M_EXCRET_MODE_MASK;
-        bool priv = !(env->v7m.control[M_REG_S] & R_V7M_CONTROL_NPRIV_MASK) ||
-            !mode;
-
-        mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, M_REG_S, priv);
-        frame_sp_p = get_v7m_sp_ptr(env, M_REG_S, mode,
-                                    lr & R_V7M_EXCRET_SPSEL_MASK);
-        want_psp = mode && (lr & R_V7M_EXCRET_SPSEL_MASK);
-        if (want_psp) {
-            limit = env->v7m.psplim[M_REG_S];
-        } else {
-            limit = env->v7m.msplim[M_REG_S];
-        }
-    } else {
-        mmu_idx = arm_mmu_idx(env);
-        frame_sp_p = &env->regs[13];
-        limit = v7m_sp_limit(env);
-    }
-
-    frameptr = *frame_sp_p - 0x28;
-    if (frameptr < limit) {
-        /*
-         * Stack limit failure: set SP to the limit value, and generate
-         * STKOF UsageFault. Stack pushes below the limit must not be
-         * performed. It is IMPDEF whether pushes above the limit are
-         * performed; we choose not to.
-         */
-        qemu_log_mask(CPU_LOG_INT,
-                      "...STKOF during callee-saves register stacking\n");
-        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
-                                env->v7m.secure);
-        *frame_sp_p = limit;
-        return true;
-    }
-
-    /*
-     * Write as much of the stack frame as we can. A write failure may
-     * cause us to pend a derived exception.
-     */
-    sig = v7m_integrity_sig(env, lr);
-    stacked_ok =
-        v7m_stack_write(cpu, frameptr, sig, mmu_idx, smode) &&
-        v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx, smode) &&
-        v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx, smode) &&
-        v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx, smode) &&
-        v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx, smode) &&
-        v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx, smode) &&
-        v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx, smode) &&
-        v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx, smode) &&
-        v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx, smode);
-
-    /* Update SP regardless of whether any of the stack accesses failed. */
-    *frame_sp_p = frameptr;
-
-    return !stacked_ok;
-}
-
-static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
-                                bool ignore_stackfaults)
-{
-    /*
-     * Do the "take the exception" parts of exception entry,
-     * but not the pushing of state to the stack. This is
-     * similar to the pseudocode ExceptionTaken() function.
-     */
-    CPUARMState *env = &cpu->env;
-    uint32_t addr;
-    bool targets_secure;
-    int exc;
-    bool push_failed = false;
-
-    armv7m_nvic_get_pending_irq_info(env->nvic, &exc, &targets_secure);
-    qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n",
-                  targets_secure ? "secure" : "nonsecure", exc);
-
-    if (dotailchain) {
-        /* Sanitize LR FType and PREFIX bits */
-        if (!arm_feature(env, ARM_FEATURE_VFP)) {
-            lr |= R_V7M_EXCRET_FTYPE_MASK;
-        }
-        lr = deposit32(lr, 24, 8, 0xff);
-    }
-
-    if (arm_feature(env, ARM_FEATURE_V8)) {
-        if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
-            (lr & R_V7M_EXCRET_S_MASK)) {
-            /*
-             * The background code (the owner of the registers in the
-             * exception frame) is Secure. This means it may either already
-             * have or now needs to push callee-saves registers.
-             */
-            if (targets_secure) {
-                if (dotailchain && !(lr & R_V7M_EXCRET_ES_MASK)) {
-                    /*
-                     * We took an exception from Secure to NonSecure
-                     * (which means the callee-saved registers got stacked)
-                     * and are now tailchaining to a Secure exception.
-                     * Clear DCRS so eventual return from this Secure
-                     * exception unstacks the callee-saved registers.
-                     */
-                    lr &= ~R_V7M_EXCRET_DCRS_MASK;
-                }
-            } else {
-                /*
-                 * We're going to a non-secure exception; push the
-                 * callee-saves registers to the stack now, if they're
-                 * not already saved.
-                 */
-                if (lr & R_V7M_EXCRET_DCRS_MASK &&
-                    !(dotailchain && !(lr & R_V7M_EXCRET_ES_MASK))) {
-                    push_failed = v7m_push_callee_stack(cpu, lr, dotailchain,
-                                                        ignore_stackfaults);
-                }
-                lr |= R_V7M_EXCRET_DCRS_MASK;
-            }
-        }
-
-        lr &= ~R_V7M_EXCRET_ES_MASK;
-        if (targets_secure || !arm_feature(env, ARM_FEATURE_M_SECURITY)) {
-            lr |= R_V7M_EXCRET_ES_MASK;
-        }
-        lr &= ~R_V7M_EXCRET_SPSEL_MASK;
-        if (env->v7m.control[targets_secure] & R_V7M_CONTROL_SPSEL_MASK) {
-            lr |= R_V7M_EXCRET_SPSEL_MASK;
-        }
-
-        /*
-         * Clear registers if necessary to prevent non-secure exception
-         * code being able to see register values from secure code.
-         * Where register values become architecturally UNKNOWN we leave
-         * them with their previous values.
-         */
-        if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
-            if (!targets_secure) {
-                /*
-                 * Always clear the caller-saved registers (they have been
-                 * pushed to the stack earlier in v7m_push_stack()).
-                 * Clear callee-saved registers if the background code is
-                 * Secure (in which case these regs were saved in
-                 * v7m_push_callee_stack()).
-                 */
-                int i;
-
-                for (i = 0; i < 13; i++) {
-                    /* r4..r11 are callee-saves, zero only if EXCRET.S == 1 */
-                    if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) {
-                        env->regs[i] = 0;
-                    }
-                }
-                /* Clear EAPSR */
-                xpsr_write(env, 0, XPSR_NZCV | XPSR_Q | XPSR_GE | XPSR_IT);
-            }
-        }
-    }
-
-    if (push_failed && !ignore_stackfaults) {
-        /*
-         * Derived exception on callee-saves register stacking:
-         * we might now want to take a different exception which
-         * targets a different security state, so try again from the top.
-         */
-        qemu_log_mask(CPU_LOG_INT,
-                      "...derived exception on callee-saves register stacking");
-        v7m_exception_taken(cpu, lr, true, true);
-        return;
-    }
-
-    if (!arm_v7m_load_vector(cpu, exc, targets_secure, &addr)) {
-        /* Vector load failed: derived exception */
-        qemu_log_mask(CPU_LOG_INT, "...derived exception on vector table load");
-        v7m_exception_taken(cpu, lr, true, true);
-        return;
-    }
-
-    /*
-     * Now we've done everything that might cause a derived exception
-     * we can go ahead and activate whichever exception we're going to
-     * take (which might now be the derived exception).
-     */
-    armv7m_nvic_acknowledge_irq(env->nvic);
-
-    /* Switch to target security state -- must do this before writing SPSEL */
-    switch_v7m_security_state(env, targets_secure);
-    write_v7m_control_spsel(env, 0);
-    arm_clear_exclusive(env);
-    /* Clear SFPA and FPCA (has no effect if no FPU) */
-    env->v7m.control[M_REG_S] &=
-        ~(R_V7M_CONTROL_FPCA_MASK | R_V7M_CONTROL_SFPA_MASK);
-    /* Clear IT bits */
-    env->condexec_bits = 0;
-    env->regs[14] = lr;
-    env->regs[15] = addr & 0xfffffffe;
-    env->thumb = addr & 1;
-}
-
-static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
-                             bool apply_splim)
-{
-    /*
-     * Like the pseudocode UpdateFPCCR: save state in FPCAR and FPCCR
-     * that we will need later in order to do lazy FP reg stacking.
-     */
-    bool is_secure = env->v7m.secure;
-    void *nvic = env->nvic;
-    /*
-     * Some bits are unbanked and live always in fpccr[M_REG_S]; some bits
-     * are banked and we want to update the bit in the bank for the
-     * current security state; and in one case we want to specifically
-     * update the NS banked version of a bit even if we are secure.
-     */
-    uint32_t *fpccr_s = &env->v7m.fpccr[M_REG_S];
-    uint32_t *fpccr_ns = &env->v7m.fpccr[M_REG_NS];
-    uint32_t *fpccr = &env->v7m.fpccr[is_secure];
-    bool hfrdy, bfrdy, mmrdy, ns_ufrdy, s_ufrdy, sfrdy, monrdy;
-
-    env->v7m.fpcar[is_secure] = frameptr & ~0x7;
-
-    if (apply_splim && arm_feature(env, ARM_FEATURE_V8)) {
-        bool splimviol;
-        uint32_t splim = v7m_sp_limit(env);
-        bool ign = armv7m_nvic_neg_prio_requested(nvic, is_secure) &&
-            (env->v7m.ccr[is_secure] & R_V7M_CCR_STKOFHFNMIGN_MASK);
-
-        splimviol = !ign && frameptr < splim;
-        *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, SPLIMVIOL, splimviol);
-    }
-
-    *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, LSPACT, 1);
-
-    *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, S, is_secure);
-
-    *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, USER, arm_current_el(env) == 0);
-
-    *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, THREAD,
-                        !arm_v7m_is_handler_mode(env));
-
-    hfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_HARD, false);
-    *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, HFRDY, hfrdy);
-
-    bfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_BUS, false);
-    *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, BFRDY, bfrdy);
-
-    mmrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_MEM, is_secure);
-    *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, MMRDY, mmrdy);
-
-    ns_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, false);
-    *fpccr_ns = FIELD_DP32(*fpccr_ns, V7M_FPCCR, UFRDY, ns_ufrdy);
-
-    monrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_DEBUG, false);
-    *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, MONRDY, monrdy);
-
-    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
-        s_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, true);
-        *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, UFRDY, s_ufrdy);
-
-        sfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_SECURE, false);
-        *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, SFRDY, sfrdy);
-    }
-}
-
-void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
-{
-    /* fptr is the value of Rn, the frame pointer we store the FP regs to */
-    bool s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
-    bool lspact = env->v7m.fpccr[s] & R_V7M_FPCCR_LSPACT_MASK;
-
-    assert(env->v7m.secure);
-
-    if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
-        return;
-    }
-
-    /* Check access to the coprocessor is permitted */
-    if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
-        raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
-    }
-
-    if (lspact) {
-        /* LSPACT should not be active when there is active FP state */
-        raise_exception_ra(env, EXCP_LSERR, 0, 1, GETPC());
-    }
-
-    if (fptr & 7) {
-        raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
-    }
-
-    /*
-     * Note that we do not use v7m_stack_write() here, because the
-     * accesses should not set the FSR bits for stacking errors if they
-     * fail. (In pseudocode terms, they are AccType_NORMAL, not AccType_STACK
-     * or AccType_LAZYFP). Faults in cpu_stl_data() will throw exceptions
-     * and longjmp out.
-     */
-    if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
-        bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
-        int i;
-
-        for (i = 0; i < (ts ? 32 : 16); i += 2) {
-            uint64_t dn = *aa32_vfp_dreg(env, i / 2);
-            uint32_t faddr = fptr + 4 * i;
-            uint32_t slo = extract64(dn, 0, 32);
-            uint32_t shi = extract64(dn, 32, 32);
-
-            if (i >= 16) {
-                faddr += 8; /* skip the slot for the FPSCR */
-            }
-            cpu_stl_data(env, faddr, slo);
-            cpu_stl_data(env, faddr + 4, shi);
-        }
-        cpu_stl_data(env, fptr + 0x40, vfp_get_fpscr(env));
-
-        /*
-         * If TS is 0 then s0 to s15 and FPSCR are UNKNOWN; we choose to
-         * leave them unchanged, matching our choice in v7m_preserve_fp_state.
-         */
-        if (ts) {
-            for (i = 0; i < 32; i += 2) {
-                *aa32_vfp_dreg(env, i / 2) = 0;
-            }
-            vfp_set_fpscr(env, 0);
-        }
-    } else {
-        v7m_update_fpccr(env, fptr, false);
-    }
-
-    env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
-}
-
-void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
-{
-    /* fptr is the value of Rn, the frame pointer we load the FP regs from */
-    assert(env->v7m.secure);
-
-    if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
-        return;
-    }
-
-    /* Check access to the coprocessor is permitted */
-    if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
-        raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
-    }
-
-    if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
-        /* State in FP is still valid */
-        env->v7m.fpccr[M_REG_S] &= ~R_V7M_FPCCR_LSPACT_MASK;
-    } else {
-        bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
-        int i;
-        uint32_t fpscr;
-
-        if (fptr & 7) {
-            raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
-        }
-
-        for (i = 0; i < (ts ? 32 : 16); i += 2) {
-            uint32_t slo, shi;
-            uint64_t dn;
-            uint32_t faddr = fptr + 4 * i;
-
-            if (i >= 16) {
-                faddr += 8; /* skip the slot for the FPSCR */
-            }
-
-            slo = cpu_ldl_data(env, faddr);
-            shi = cpu_ldl_data(env, faddr + 4);
-
-            dn = (uint64_t) shi << 32 | slo;
-            *aa32_vfp_dreg(env, i / 2) = dn;
-        }
-        fpscr = cpu_ldl_data(env, fptr + 0x40);
-        vfp_set_fpscr(env, fpscr);
-    }
-
-    env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
-}
-
-static bool v7m_push_stack(ARMCPU *cpu)
-{
-    /*
-     * Do the "set up stack frame" part of exception entry,
-     * similar to pseudocode PushStack().
-     * Return true if we generate a derived exception (and so
-     * should ignore further stack faults trying to process
-     * that derived exception.)
-     */
-    bool stacked_ok = true, limitviol = false;
-    CPUARMState *env = &cpu->env;
-    uint32_t xpsr = xpsr_read(env);
-    uint32_t frameptr = env->regs[13];
-    ARMMMUIdx mmu_idx = arm_mmu_idx(env);
-    uint32_t framesize;
-    bool nsacr_cp10 = extract32(env->v7m.nsacr, 10, 1);
-
-    if ((env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) &&
-        (env->v7m.secure || nsacr_cp10)) {
-        if (env->v7m.secure &&
-            env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK) {
-            framesize = 0xa8;
-        } else {
-            framesize = 0x68;
-        }
-    } else {
-        framesize = 0x20;
-    }
-
-    /* Align stack pointer if the guest wants that */
-    if ((frameptr & 4) &&
-        (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKALIGN_MASK)) {
-        frameptr -= 4;
-        xpsr |= XPSR_SPREALIGN;
-    }
-
-    xpsr &= ~XPSR_SFPA;
-    if (env->v7m.secure &&
-        (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
-        xpsr |= XPSR_SFPA;
-    }
-
-    frameptr -= framesize;
-
-    if (arm_feature(env, ARM_FEATURE_V8)) {
-        uint32_t limit = v7m_sp_limit(env);
-
-        if (frameptr < limit) {
-            /*
-             * Stack limit failure: set SP to the limit value, and generate
-             * STKOF UsageFault. Stack pushes below the limit must not be
-             * performed. It is IMPDEF whether pushes above the limit are
-             * performed; we choose not to.
-             */
-            qemu_log_mask(CPU_LOG_INT,
-                          "...STKOF during stacking\n");
-            env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
-            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
-                                    env->v7m.secure);
-            env->regs[13] = limit;
-            /*
-             * We won't try to perform any further memory accesses but
-             * we must continue through the following code to check for
-             * permission faults during FPU state preservation, and we
-             * must update FPCCR if lazy stacking is enabled.
-             */
-            limitviol = true;
-            stacked_ok = false;
-        }
-    }
-
-    /*
-     * Write as much of the stack frame as we can. If we fail a stack
-     * write this will result in a derived exception being pended
-     * (which may be taken in preference to the one we started with
-     * if it has higher priority).
-     */
-    stacked_ok = stacked_ok &&
-        v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, STACK_NORMAL) &&
-        v7m_stack_write(cpu, frameptr + 4, env->regs[1],
-                        mmu_idx, STACK_NORMAL) &&
-        v7m_stack_write(cpu, frameptr + 8, env->regs[2],
-                        mmu_idx, STACK_NORMAL) &&
-        v7m_stack_write(cpu, frameptr + 12, env->regs[3],
-                        mmu_idx, STACK_NORMAL) &&
-        v7m_stack_write(cpu, frameptr + 16, env->regs[12],
-                        mmu_idx, STACK_NORMAL) &&
-        v7m_stack_write(cpu, frameptr + 20, env->regs[14],
-                        mmu_idx, STACK_NORMAL) &&
-        v7m_stack_write(cpu, frameptr + 24, env->regs[15],
-                        mmu_idx, STACK_NORMAL) &&
-        v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, STACK_NORMAL);
-
-    if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) {
-        /* FPU is active, try to save its registers */
-        bool fpccr_s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
-        bool lspact = env->v7m.fpccr[fpccr_s] & R_V7M_FPCCR_LSPACT_MASK;
-
-        if (lspact && arm_feature(env, ARM_FEATURE_M_SECURITY)) {
-            qemu_log_mask(CPU_LOG_INT,
-                          "...SecureFault because LSPACT and FPCA both set\n");
-            env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
-            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
-        } else if (!env->v7m.secure && !nsacr_cp10) {
-            qemu_log_mask(CPU_LOG_INT,
-                          "...Secure UsageFault with CFSR.NOCP because "
-                          "NSACR.CP10 prevents stacking FP regs\n");
-            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
-            env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
-        } else {
-            if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
-                /* Lazy stacking disabled, save registers now */
-                int i;
-                bool cpacr_pass = v7m_cpacr_pass(env, env->v7m.secure,
-                                                 arm_current_el(env) != 0);
-
-                if (stacked_ok && !cpacr_pass) {
-                    /*
-                     * Take UsageFault if CPACR forbids access. The pseudocode
-                     * here does a full CheckCPEnabled() but we know the NSACR
-                     * check can never fail as we have already handled that.
-                     */
-                    qemu_log_mask(CPU_LOG_INT,
-                                  "...UsageFault with CFSR.NOCP because "
-                                  "CPACR.CP10 prevents stacking FP regs\n");
-                    armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
-                                            env->v7m.secure);
-                    env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
-                    stacked_ok = false;
-                }
-
-                for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
-                    uint64_t dn = *aa32_vfp_dreg(env, i / 2);
-                    uint32_t faddr = frameptr + 0x20 + 4 * i;
-                    uint32_t slo = extract64(dn, 0, 32);
-                    uint32_t shi = extract64(dn, 32, 32);
-
-                    if (i >= 16) {
-                        faddr += 8; /* skip the slot for the FPSCR */
1235
- }
1236
- stacked_ok = stacked_ok &&
1237
- v7m_stack_write(cpu, faddr, slo,
1238
- mmu_idx, STACK_NORMAL) &&
1239
- v7m_stack_write(cpu, faddr + 4, shi,
1240
- mmu_idx, STACK_NORMAL);
1241
- }
1242
- stacked_ok = stacked_ok &&
1243
- v7m_stack_write(cpu, frameptr + 0x60,
1244
- vfp_get_fpscr(env), mmu_idx, STACK_NORMAL);
1245
- if (cpacr_pass) {
1246
- for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
1247
- *aa32_vfp_dreg(env, i / 2) = 0;
1248
- }
1249
- vfp_set_fpscr(env, 0);
1250
- }
1251
- } else {
1252
- /* Lazy stacking enabled, save necessary info to stack later */
1253
- v7m_update_fpccr(env, frameptr + 0x20, true);
1254
- }
1255
- }
1256
- }
1257
-
1258
- /*
1259
- * If we broke a stack limit then SP was already updated earlier;
1260
- * otherwise we update SP regardless of whether any of the stack
1261
- * accesses failed or we took some other kind of fault.
1262
- */
1263
- if (!limitviol) {
1264
- env->regs[13] = frameptr;
1265
- }
1266
-
1267
- return !stacked_ok;
1268
-}
1269
-
1270
-static void do_v7m_exception_exit(ARMCPU *cpu)
1271
-{
1272
- CPUARMState *env = &cpu->env;
1273
- uint32_t excret;
1274
- uint32_t xpsr, xpsr_mask;
1275
- bool ufault = false;
1276
- bool sfault = false;
1277
- bool return_to_sp_process;
1278
- bool return_to_handler;
1279
- bool rettobase = false;
1280
- bool exc_secure = false;
1281
- bool return_to_secure;
1282
- bool ftype;
1283
- bool restore_s16_s31;
1284
-
1285
- /*
1286
- * If we're not in Handler mode then jumps to magic exception-exit
1287
- * addresses don't have magic behaviour. However for the v8M
1288
- * security extensions the magic secure-function-return has to
1289
- * work in thread mode too, so to avoid doing an extra check in
1290
- * the generated code we allow exception-exit magic to also cause the
1291
- * internal exception and bring us here in thread mode. Correct code
1292
- * will never try to do this (the following insn fetch will always
1293
- * fault) so we the overhead of having taken an unnecessary exception
1294
- * doesn't matter.
1295
- */
1296
- if (!arm_v7m_is_handler_mode(env)) {
1297
- return;
1298
- }
1299
-
1300
- /*
1301
- * In the spec pseudocode ExceptionReturn() is called directly
1302
- * from BXWritePC() and gets the full target PC value including
1303
- * bit zero. In QEMU's implementation we treat it as a normal
1304
- * jump-to-register (which is then caught later on), and so split
1305
- * the target value up between env->regs[15] and env->thumb in
1306
- * gen_bx(). Reconstitute it.
1307
- */
1308
- excret = env->regs[15];
1309
- if (env->thumb) {
1310
- excret |= 1;
1311
- }
1312
-
1313
- qemu_log_mask(CPU_LOG_INT, "Exception return: magic PC %" PRIx32
1314
- " previous exception %d\n",
1315
- excret, env->v7m.exception);
1316
-
1317
- if ((excret & R_V7M_EXCRET_RES1_MASK) != R_V7M_EXCRET_RES1_MASK) {
1318
- qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero high bits in exception "
1319
- "exit PC value 0x%" PRIx32 " are UNPREDICTABLE\n",
1320
- excret);
1321
- }
1322
-
1323
- ftype = excret & R_V7M_EXCRET_FTYPE_MASK;
1324
-
1325
- if (!arm_feature(env, ARM_FEATURE_VFP) && !ftype) {
1326
- qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero FTYPE in exception "
1327
- "exit PC value 0x%" PRIx32 " is UNPREDICTABLE "
1328
- "if FPU not present\n",
1329
- excret);
1330
- ftype = true;
1331
- }
1332
-
1333
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
1334
- /*
1335
- * EXC_RETURN.ES validation check (R_SMFL). We must do this before
1336
- * we pick which FAULTMASK to clear.
1337
- */
1338
- if (!env->v7m.secure &&
1339
- ((excret & R_V7M_EXCRET_ES_MASK) ||
1340
- !(excret & R_V7M_EXCRET_DCRS_MASK))) {
1341
- sfault = 1;
1342
- /* For all other purposes, treat ES as 0 (R_HXSR) */
1343
- excret &= ~R_V7M_EXCRET_ES_MASK;
1344
- }
1345
- exc_secure = excret & R_V7M_EXCRET_ES_MASK;
1346
- }
1347
-
1348
- if (env->v7m.exception != ARMV7M_EXCP_NMI) {
1349
- /*
1350
- * Auto-clear FAULTMASK on return from other than NMI.
1351
- * If the security extension is implemented then this only
1352
- * happens if the raw execution priority is >= 0; the
1353
- * value of the ES bit in the exception return value indicates
1354
- * which security state's faultmask to clear. (v8M ARM ARM R_KBNF.)
1355
- */
1356
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
1357
- if (armv7m_nvic_raw_execution_priority(env->nvic) >= 0) {
1358
- env->v7m.faultmask[exc_secure] = 0;
1359
- }
1360
- } else {
1361
- env->v7m.faultmask[M_REG_NS] = 0;
1362
- }
1363
- }
1364
-
1365
- switch (armv7m_nvic_complete_irq(env->nvic, env->v7m.exception,
1366
- exc_secure)) {
1367
- case -1:
1368
- /* attempt to exit an exception that isn't active */
1369
- ufault = true;
1370
- break;
1371
- case 0:
1372
- /* still an irq active now */
1373
- break;
1374
- case 1:
1375
- /*
1376
- * We returned to base exception level, no nesting.
1377
- * (In the pseudocode this is written using "NestedActivation != 1"
1378
- * where we have 'rettobase == false'.)
1379
- */
1380
- rettobase = true;
1381
- break;
1382
- default:
1383
- g_assert_not_reached();
1384
- }
1385
-
1386
- return_to_handler = !(excret & R_V7M_EXCRET_MODE_MASK);
1387
- return_to_sp_process = excret & R_V7M_EXCRET_SPSEL_MASK;
1388
- return_to_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
1389
- (excret & R_V7M_EXCRET_S_MASK);
1390
-
1391
- if (arm_feature(env, ARM_FEATURE_V8)) {
1392
- if (!arm_feature(env, ARM_FEATURE_M_SECURITY)) {
1393
- /*
1394
- * UNPREDICTABLE if S == 1 or DCRS == 0 or ES == 1 (R_XLCP);
1395
- * we choose to take the UsageFault.
1396
- */
1397
- if ((excret & R_V7M_EXCRET_S_MASK) ||
1398
- (excret & R_V7M_EXCRET_ES_MASK) ||
1399
- !(excret & R_V7M_EXCRET_DCRS_MASK)) {
1400
- ufault = true;
1401
- }
1402
- }
1403
- if (excret & R_V7M_EXCRET_RES0_MASK) {
1404
- ufault = true;
1405
- }
1406
- } else {
1407
- /* For v7M we only recognize certain combinations of the low bits */
1408
- switch (excret & 0xf) {
1409
- case 1: /* Return to Handler */
1410
- break;
1411
- case 13: /* Return to Thread using Process stack */
1412
- case 9: /* Return to Thread using Main stack */
1413
- /*
1414
- * We only need to check NONBASETHRDENA for v7M, because in
1415
- * v8M this bit does not exist (it is RES1).
1416
- */
1417
- if (!rettobase &&
1418
- !(env->v7m.ccr[env->v7m.secure] &
1419
- R_V7M_CCR_NONBASETHRDENA_MASK)) {
1420
- ufault = true;
1421
- }
1422
- break;
1423
- default:
1424
- ufault = true;
1425
- }
1426
- }
1427
-
1428
- /*
1429
- * Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
1430
- * Handler mode (and will be until we write the new XPSR.Interrupt
1431
- * field) this does not switch around the current stack pointer.
1432
- * We must do this before we do any kind of tailchaining, including
1433
- * for the derived exceptions on integrity check failures, or we will
1434
- * give the guest an incorrect EXCRET.SPSEL value on exception entry.
1435
- */
1436
- write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
1437
-
1438
- /*
1439
- * Clear scratch FP values left in caller saved registers; this
1440
- * must happen before any kind of tail chaining.
1441
- */
1442
- if ((env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_CLRONRET_MASK) &&
1443
- (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
1444
- if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
1445
- env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
1446
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1447
- qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
1448
- "stackframe: error during lazy state deactivation\n");
1449
- v7m_exception_taken(cpu, excret, true, false);
1450
- return;
1451
- } else {
1452
- /* Clear s0..s15 and FPSCR */
1453
- int i;
1454
-
1455
- for (i = 0; i < 16; i += 2) {
1456
- *aa32_vfp_dreg(env, i / 2) = 0;
1457
- }
1458
- vfp_set_fpscr(env, 0);
1459
- }
1460
- }
1461
-
1462
- if (sfault) {
1463
- env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
1464
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1465
- qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
1466
- "stackframe: failed EXC_RETURN.ES validity check\n");
1467
- v7m_exception_taken(cpu, excret, true, false);
1468
- return;
1469
- }
1470
-
1471
- if (ufault) {
1472
- /*
1473
- * Bad exception return: instead of popping the exception
1474
- * stack, directly take a usage fault on the current stack.
1475
- */
1476
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
1477
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
1478
- qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
1479
- "stackframe: failed exception return integrity check\n");
1480
- v7m_exception_taken(cpu, excret, true, false);
1481
- return;
1482
- }
1483
-
1484
- /*
1485
- * Tailchaining: if there is currently a pending exception that
1486
- * is high enough priority to preempt execution at the level we're
1487
- * about to return to, then just directly take that exception now,
1488
- * avoiding an unstack-and-then-stack. Note that now we have
1489
- * deactivated the previous exception by calling armv7m_nvic_complete_irq()
1490
- * our current execution priority is already the execution priority we are
1491
- * returning to -- none of the state we would unstack or set based on
1492
- * the EXCRET value affects it.
1493
- */
1494
- if (armv7m_nvic_can_take_pending_exception(env->nvic)) {
1495
- qemu_log_mask(CPU_LOG_INT, "...tailchaining to pending exception\n");
1496
- v7m_exception_taken(cpu, excret, true, false);
1497
- return;
1498
- }
1499
-
1500
- switch_v7m_security_state(env, return_to_secure);
1501
-
1502
- {
1503
- /*
1504
- * The stack pointer we should be reading the exception frame from
1505
- * depends on bits in the magic exception return type value (and
1506
- * for v8M isn't necessarily the stack pointer we will eventually
1507
- * end up resuming execution with). Get a pointer to the location
1508
- * in the CPU state struct where the SP we need is currently being
1509
- * stored; we will use and modify it in place.
1510
- * We use this limited C variable scope so we don't accidentally
1511
- * use 'frame_sp_p' after we do something that makes it invalid.
1512
- */
1513
- uint32_t *frame_sp_p = get_v7m_sp_ptr(env,
1514
- return_to_secure,
1515
- !return_to_handler,
1516
- return_to_sp_process);
1517
- uint32_t frameptr = *frame_sp_p;
1518
- bool pop_ok = true;
1519
- ARMMMUIdx mmu_idx;
1520
- bool return_to_priv = return_to_handler ||
1521
- !(env->v7m.control[return_to_secure] & R_V7M_CONTROL_NPRIV_MASK);
1522
-
1523
- mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, return_to_secure,
1524
- return_to_priv);
1525
-
1526
- if (!QEMU_IS_ALIGNED(frameptr, 8) &&
1527
- arm_feature(env, ARM_FEATURE_V8)) {
1528
- qemu_log_mask(LOG_GUEST_ERROR,
1529
- "M profile exception return with non-8-aligned SP "
1530
- "for destination state is UNPREDICTABLE\n");
1531
- }
1532
-
1533
- /* Do we need to pop callee-saved registers? */
1534
- if (return_to_secure &&
1535
- ((excret & R_V7M_EXCRET_ES_MASK) == 0 ||
1536
- (excret & R_V7M_EXCRET_DCRS_MASK) == 0)) {
1537
- uint32_t actual_sig;
1538
-
1539
- pop_ok = v7m_stack_read(cpu, &actual_sig, frameptr, mmu_idx);
1540
-
1541
- if (pop_ok && v7m_integrity_sig(env, excret) != actual_sig) {
1542
- /* Take a SecureFault on the current stack */
1543
- env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
1544
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1545
- qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
1546
- "stackframe: failed exception return integrity "
1547
- "signature check\n");
1548
- v7m_exception_taken(cpu, excret, true, false);
1549
- return;
1550
- }
1551
-
1552
- pop_ok = pop_ok &&
1553
- v7m_stack_read(cpu, &env->regs[4], frameptr + 0x8, mmu_idx) &&
1554
- v7m_stack_read(cpu, &env->regs[5], frameptr + 0xc, mmu_idx) &&
1555
- v7m_stack_read(cpu, &env->regs[6], frameptr + 0x10, mmu_idx) &&
1556
- v7m_stack_read(cpu, &env->regs[7], frameptr + 0x14, mmu_idx) &&
1557
- v7m_stack_read(cpu, &env->regs[8], frameptr + 0x18, mmu_idx) &&
1558
- v7m_stack_read(cpu, &env->regs[9], frameptr + 0x1c, mmu_idx) &&
1559
- v7m_stack_read(cpu, &env->regs[10], frameptr + 0x20, mmu_idx) &&
1560
- v7m_stack_read(cpu, &env->regs[11], frameptr + 0x24, mmu_idx);
1561
-
1562
- frameptr += 0x28;
1563
- }
1564
-
1565
- /* Pop registers */
1566
- pop_ok = pop_ok &&
1567
- v7m_stack_read(cpu, &env->regs[0], frameptr, mmu_idx) &&
1568
- v7m_stack_read(cpu, &env->regs[1], frameptr + 0x4, mmu_idx) &&
1569
- v7m_stack_read(cpu, &env->regs[2], frameptr + 0x8, mmu_idx) &&
1570
- v7m_stack_read(cpu, &env->regs[3], frameptr + 0xc, mmu_idx) &&
1571
- v7m_stack_read(cpu, &env->regs[12], frameptr + 0x10, mmu_idx) &&
1572
- v7m_stack_read(cpu, &env->regs[14], frameptr + 0x14, mmu_idx) &&
1573
- v7m_stack_read(cpu, &env->regs[15], frameptr + 0x18, mmu_idx) &&
1574
- v7m_stack_read(cpu, &xpsr, frameptr + 0x1c, mmu_idx);
1575
-
1576
- if (!pop_ok) {
1577
- /*
1578
- * v7m_stack_read() pended a fault, so take it (as a tail
1579
- * chained exception on the same stack frame)
1580
- */
1581
- qemu_log_mask(CPU_LOG_INT, "...derived exception on unstacking\n");
1582
- v7m_exception_taken(cpu, excret, true, false);
1583
- return;
1584
- }
1585
-
1586
- /*
1587
- * Returning from an exception with a PC with bit 0 set is defined
1588
- * behaviour on v8M (bit 0 is ignored), but for v7M it was specified
1589
- * to be UNPREDICTABLE. In practice actual v7M hardware seems to ignore
1590
- * the lsbit, and there are several RTOSes out there which incorrectly
1591
- * assume the r15 in the stack frame should be a Thumb-style "lsbit
1592
- * indicates ARM/Thumb" value, so ignore the bit on v7M as well, but
1593
- * complain about the badly behaved guest.
1594
- */
1595
- if (env->regs[15] & 1) {
1596
- env->regs[15] &= ~1U;
1597
- if (!arm_feature(env, ARM_FEATURE_V8)) {
1598
- qemu_log_mask(LOG_GUEST_ERROR,
1599
- "M profile return from interrupt with misaligned "
1600
- "PC is UNPREDICTABLE on v7M\n");
1601
- }
1602
- }
1603
-
1604
- if (arm_feature(env, ARM_FEATURE_V8)) {
1605
- /*
1606
- * For v8M we have to check whether the xPSR exception field
1607
- * matches the EXCRET value for return to handler/thread
1608
- * before we commit to changing the SP and xPSR.
1609
- */
1610
- bool will_be_handler = (xpsr & XPSR_EXCP) != 0;
1611
- if (return_to_handler != will_be_handler) {
1612
- /*
1613
- * Take an INVPC UsageFault on the current stack.
1614
- * By this point we will have switched to the security state
1615
- * for the background state, so this UsageFault will target
1616
- * that state.
1617
- */
1618
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
1619
- env->v7m.secure);
1620
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
1621
- qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
1622
- "stackframe: failed exception return integrity "
1623
- "check\n");
1624
- v7m_exception_taken(cpu, excret, true, false);
1625
- return;
1626
- }
1627
- }
1628
-
1629
- if (!ftype) {
1630
- /* FP present and we need to handle it */
1631
- if (!return_to_secure &&
1632
- (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK)) {
1633
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1634
- env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
1635
- qemu_log_mask(CPU_LOG_INT,
1636
- "...taking SecureFault on existing stackframe: "
1637
- "Secure LSPACT set but exception return is "
1638
- "not to secure state\n");
1639
- v7m_exception_taken(cpu, excret, true, false);
1640
- return;
1641
- }
1642
-
1643
- restore_s16_s31 = return_to_secure &&
1644
- (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
1645
-
1646
- if (env->v7m.fpccr[return_to_secure] & R_V7M_FPCCR_LSPACT_MASK) {
1647
- /* State in FPU is still valid, just clear LSPACT */
1648
- env->v7m.fpccr[return_to_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
1649
- } else {
1650
- int i;
1651
- uint32_t fpscr;
1652
- bool cpacr_pass, nsacr_pass;
1653
-
1654
- cpacr_pass = v7m_cpacr_pass(env, return_to_secure,
1655
- return_to_priv);
1656
- nsacr_pass = return_to_secure ||
1657
- extract32(env->v7m.nsacr, 10, 1);
1658
-
1659
- if (!cpacr_pass) {
1660
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
1661
- return_to_secure);
1662
- env->v7m.cfsr[return_to_secure] |= R_V7M_CFSR_NOCP_MASK;
1663
- qemu_log_mask(CPU_LOG_INT,
1664
- "...taking UsageFault on existing "
1665
- "stackframe: CPACR.CP10 prevents unstacking "
1666
- "FP regs\n");
1667
- v7m_exception_taken(cpu, excret, true, false);
1668
- return;
1669
- } else if (!nsacr_pass) {
1670
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
1671
- env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_INVPC_MASK;
1672
- qemu_log_mask(CPU_LOG_INT,
1673
- "...taking Secure UsageFault on existing "
1674
- "stackframe: NSACR.CP10 prevents unstacking "
1675
- "FP regs\n");
1676
- v7m_exception_taken(cpu, excret, true, false);
1677
- return;
1678
- }
1679
-
1680
- for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
1681
- uint32_t slo, shi;
1682
- uint64_t dn;
1683
- uint32_t faddr = frameptr + 0x20 + 4 * i;
1684
-
1685
- if (i >= 16) {
1686
- faddr += 8; /* Skip the slot for the FPSCR */
1687
- }
1688
-
1689
- pop_ok = pop_ok &&
1690
- v7m_stack_read(cpu, &slo, faddr, mmu_idx) &&
1691
- v7m_stack_read(cpu, &shi, faddr + 4, mmu_idx);
1692
-
1693
- if (!pop_ok) {
1694
- break;
1695
- }
1696
-
1697
- dn = (uint64_t)shi << 32 | slo;
1698
- *aa32_vfp_dreg(env, i / 2) = dn;
1699
- }
1700
- pop_ok = pop_ok &&
1701
- v7m_stack_read(cpu, &fpscr, frameptr + 0x60, mmu_idx);
1702
- if (pop_ok) {
1703
- vfp_set_fpscr(env, fpscr);
1704
- }
1705
- if (!pop_ok) {
1706
- /*
1707
- * These regs are 0 if security extension present;
1708
- * otherwise merely UNKNOWN. We zero always.
1709
- */
1710
- for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
1711
- *aa32_vfp_dreg(env, i / 2) = 0;
1712
- }
1713
- vfp_set_fpscr(env, 0);
1714
- }
1715
- }
1716
- }
1717
- env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
1718
- V7M_CONTROL, FPCA, !ftype);
1719
-
1720
- /* Commit to consuming the stack frame */
1721
- frameptr += 0x20;
1722
- if (!ftype) {
1723
- frameptr += 0x48;
1724
- if (restore_s16_s31) {
1725
- frameptr += 0x40;
1726
- }
1727
- }
1728
- /*
1729
- * Undo stack alignment (the SPREALIGN bit indicates that the original
1730
- * pre-exception SP was not 8-aligned and we added a padding word to
1731
- * align it, so we undo this by ORing in the bit that increases it
1732
- * from the current 8-aligned value to the 8-unaligned value. (Adding 4
1733
- * would work too but a logical OR is how the pseudocode specifies it.)
1734
- */
1735
- if (xpsr & XPSR_SPREALIGN) {
1736
- frameptr |= 4;
1737
- }
1738
- *frame_sp_p = frameptr;
1739
- }
1740
-
1741
- xpsr_mask = ~(XPSR_SPREALIGN | XPSR_SFPA);
1742
- if (!arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
1743
- xpsr_mask &= ~XPSR_GE;
1744
- }
1745
- /* This xpsr_write() will invalidate frame_sp_p as it may switch stack */
1746
- xpsr_write(env, xpsr, xpsr_mask);
1747
-
1748
- if (env->v7m.secure) {
1749
- bool sfpa = xpsr & XPSR_SFPA;
1750
-
1751
- env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
1752
- V7M_CONTROL, SFPA, sfpa);
1753
- }
1754
-
1755
- /*
1756
- * The restored xPSR exception field will be zero if we're
1757
- * resuming in Thread mode. If that doesn't match what the
1758
- * exception return excret specified then this is a UsageFault.
1759
- * v7M requires we make this check here; v8M did it earlier.
1760
- */
1761
- if (return_to_handler != arm_v7m_is_handler_mode(env)) {
1762
- /*
1763
- * Take an INVPC UsageFault by pushing the stack again;
1764
- * we know we're v7M so this is never a Secure UsageFault.
1765
- */
1766
- bool ignore_stackfaults;
1767
-
1768
- assert(!arm_feature(env, ARM_FEATURE_V8));
1769
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
1770
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
1771
- ignore_stackfaults = v7m_push_stack(cpu);
1772
- qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: "
1773
- "failed exception return integrity check\n");
1774
- v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
1775
- return;
1776
- }
1777
-
1778
- /* Otherwise, we have a successful exception exit. */
1779
- arm_clear_exclusive(env);
1780
- qemu_log_mask(CPU_LOG_INT, "...successful exception return\n");
1781
-}
1782
-
1783
-static bool do_v7m_function_return(ARMCPU *cpu)
1784
-{
1785
- /*
1786
- * v8M security extensions magic function return.
1787
- * We may either:
1788
- * (1) throw an exception (longjump)
1789
- * (2) return true if we successfully handled the function return
1790
- * (3) return false if we failed a consistency check and have
1791
- * pended a UsageFault that needs to be taken now
1792
- *
1793
- * At this point the magic return value is split between env->regs[15]
1794
- * and env->thumb. We don't bother to reconstitute it because we don't
1795
- * need it (all values are handled the same way).
1796
- */
1797
- CPUARMState *env = &cpu->env;
1798
- uint32_t newpc, newpsr, newpsr_exc;
1799
-
1800
- qemu_log_mask(CPU_LOG_INT, "...really v7M secure function return\n");
1801
-
1802
- {
1803
- bool threadmode, spsel;
1804
- TCGMemOpIdx oi;
1805
- ARMMMUIdx mmu_idx;
1806
- uint32_t *frame_sp_p;
1807
- uint32_t frameptr;
1808
-
1809
- /* Pull the return address and IPSR from the Secure stack */
1810
- threadmode = !arm_v7m_is_handler_mode(env);
1811
- spsel = env->v7m.control[M_REG_S] & R_V7M_CONTROL_SPSEL_MASK;
1812
-
1813
- frame_sp_p = get_v7m_sp_ptr(env, true, threadmode, spsel);
1814
- frameptr = *frame_sp_p;
1815
-
1816
- /*
1817
- * These loads may throw an exception (for MPU faults). We want to
1818
- * do them as secure, so work out what MMU index that is.
1819
- */
1820
- mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
1821
- oi = make_memop_idx(MO_LE, arm_to_core_mmu_idx(mmu_idx));
1822
- newpc = helper_le_ldul_mmu(env, frameptr, oi, 0);
1823
- newpsr = helper_le_ldul_mmu(env, frameptr + 4, oi, 0);
1824
-
1825
- /* Consistency checks on new IPSR */
1826
- newpsr_exc = newpsr & XPSR_EXCP;
1827
- if (!((env->v7m.exception == 0 && newpsr_exc == 0) ||
1828
- (env->v7m.exception == 1 && newpsr_exc != 0))) {
1829
- /* Pend the fault and tell our caller to take it */
1830
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
1831
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
1832
- env->v7m.secure);
1833
- qemu_log_mask(CPU_LOG_INT,
1834
- "...taking INVPC UsageFault: "
1835
- "IPSR consistency check failed\n");
1836
- return false;
1837
- }
1838
-
1839
- *frame_sp_p = frameptr + 8;
1840
- }
1841
-
1842
- /* This invalidates frame_sp_p */
1843
- switch_v7m_security_state(env, true);
1844
- env->v7m.exception = newpsr_exc;
1845
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
1846
- if (newpsr & XPSR_SFPA) {
1847
- env->v7m.control[M_REG_S] |= R_V7M_CONTROL_SFPA_MASK;
1848
- }
1849
- xpsr_write(env, 0, XPSR_IT);
1850
- env->thumb = newpc & 1;
1851
- env->regs[15] = newpc & ~1;
1852
-
1853
- qemu_log_mask(CPU_LOG_INT, "...function return successful\n");
1854
- return true;
1855
-}
1856
-
1857
-static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
1858
- uint32_t addr, uint16_t *insn)
1859
-{
1860
- /*
1861
- * Load a 16-bit portion of a v7M instruction, returning true on success,
1862
- * or false on failure (in which case we will have pended the appropriate
1863
- * exception).
1864
- * We need to do the instruction fetch's MPU and SAU checks
1865
- * like this because there is no MMU index that would allow
1866
- * doing the load with a single function call. Instead we must
1867
- * first check that the security attributes permit the load
1868
- * and that they don't mismatch on the two halves of the instruction,
1869
- * and then we do the load as a secure load (ie using the security
1870
- * attributes of the address, not the CPU, as architecturally required).
1871
- */
1872
- CPUState *cs = CPU(cpu);
1873
- CPUARMState *env = &cpu->env;
1874
- V8M_SAttributes sattrs = {};
1875
- MemTxAttrs attrs = {};
1876
- ARMMMUFaultInfo fi = {};
1877
- MemTxResult txres;
1878
- target_ulong page_size;
1879
- hwaddr physaddr;
1880
- int prot;
1881
-
1882
- v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, &sattrs);
1883
- if (!sattrs.nsc || sattrs.ns) {
1884
- /*
1885
- * This must be the second half of the insn, and it straddles a
1886
- * region boundary with the second half not being S&NSC.
1887
- */
1888
- env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
1889
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1890
- qemu_log_mask(CPU_LOG_INT,
1891
- "...really SecureFault with SFSR.INVEP\n");
1892
- return false;
1893
- }
1894
- if (get_phys_addr(env, addr, MMU_INST_FETCH, mmu_idx,
1895
- &physaddr, &attrs, &prot, &page_size, &fi, NULL)) {
1896
- /* the MPU lookup failed */
1897
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK;
1898
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, env->v7m.secure);
1899
- qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n");
1900
- return false;
1901
- }
1902
- *insn = address_space_lduw_le(arm_addressspace(cs, attrs), physaddr,
1903
- attrs, &txres);
1904
- if (txres != MEMTX_OK) {
1905
- env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
1906
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
1907
- qemu_log_mask(CPU_LOG_INT, "...really BusFault with CFSR.IBUSERR\n");
1908
- return false;
1909
- }
1910
- return true;
1911
-}
1912
-
1913
-static bool v7m_handle_execute_nsc(ARMCPU *cpu)
1914
-{
1915
- /*
1916
- * Check whether this attempt to execute code in a Secure & NS-Callable
1917
- * memory region is for an SG instruction; if so, then emulate the
1918
- * effect of the SG instruction and return true. Otherwise pend
1919
- * the correct kind of exception and return false.
1920
- */
1921
- CPUARMState *env = &cpu->env;
1922
- ARMMMUIdx mmu_idx;
1923
- uint16_t insn;
1924
-
1925
- /*
1926
- * We should never get here unless get_phys_addr_pmsav8() caused
1927
- * an exception for NS executing in S&NSC memory.
1928
- */
1929
- assert(!env->v7m.secure);
1930
- assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
1931
-
1932
- /* We want to do the MPU lookup as secure; work out what mmu_idx that is */
1933
- mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
1934
-
1935
- if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) {
1936
- return false;
1937
- }
1938
-
1939
- if (!env->thumb) {
1940
- goto gen_invep;
1941
- }
1942
-
1943
- if (insn != 0xe97f) {
1944
- /*
1945
- * Not an SG instruction first half (we choose the IMPDEF
1946
- * early-SG-check option).
1947
- */
1948
- goto gen_invep;
1949
- }
1950
-
1951
- if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) {
1952
- return false;
1953
- }
1954
-
1955
- if (insn != 0xe97f) {
1956
- /*
1957
- * Not an SG instruction second half (yes, both halves of the SG
1958
- * insn have the same hex value)
1959
- */
1960
- goto gen_invep;
1961
- }
1962
-
1963
- /*
1964
- * OK, we have confirmed that we really have an SG instruction.
1965
- * We know we're NS in S memory so don't need to repeat those checks.
1966
- */
1967
- qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
1968
- ", executing it\n", env->regs[15]);
1969
- env->regs[14] &= ~1;
1970
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
1971
- switch_v7m_security_state(env, true);
1972
- xpsr_write(env, 0, XPSR_IT);
1973
- env->regs[15] += 4;
1974
- return true;
1975
-
1976
-gen_invep:
1977
- env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
1978
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1979
- qemu_log_mask(CPU_LOG_INT,
1980
- "...really SecureFault with SFSR.INVEP\n");
1981
- return false;
1982
-}
1983
-
1984
-void arm_v7m_cpu_do_interrupt(CPUState *cs)
1985
-{
1986
- ARMCPU *cpu = ARM_CPU(cs);
- CPUARMState *env = &cpu->env;
- uint32_t lr;
- bool ignore_stackfaults;
-
- arm_log_exception(cs->exception_index);
-
- /*
- * For exceptions we just mark as pending on the NVIC, and let that
- * handle it.
- */
- switch (cs->exception_index) {
- case EXCP_UDEF:
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNDEFINSTR_MASK;
- break;
- case EXCP_NOCP:
- {
- /*
- * NOCP might be directed to something other than the current
- * security state if this fault is because of NSACR; we indicate
- * the target security state using exception.target_el.
- */
- int target_secstate;
-
- if (env->exception.target_el == 3) {
- target_secstate = M_REG_S;
- } else {
- target_secstate = env->v7m.secure;
- }
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, target_secstate);
- env->v7m.cfsr[target_secstate] |= R_V7M_CFSR_NOCP_MASK;
- break;
- }
- case EXCP_INVSTATE:
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVSTATE_MASK;
- break;
- case EXCP_STKOF:
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
- break;
- case EXCP_LSERR:
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
- env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
- break;
- case EXCP_UNALIGNED:
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
- break;
- case EXCP_SWI:
- /* The PC already points to the next instruction. */
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure);
- break;
- case EXCP_PREFETCH_ABORT:
- case EXCP_DATA_ABORT:
- /*
- * Note that for M profile we don't have a guest facing FSR, but
- * the env->exception.fsr will be populated by the code that
- * raises the fault, in the A profile short-descriptor format.
- */
- switch (env->exception.fsr & 0xf) {
- case M_FAKE_FSR_NSC_EXEC:
- /*
- * Exception generated when we try to execute code at an address
- * which is marked as Secure & Non-Secure Callable and the CPU
- * is in the Non-Secure state. The only instruction which can
- * be executed like this is SG (and that only if both halves of
- * the SG instruction have the same security attributes.)
- * Everything else must generate an INVEP SecureFault, so we
- * emulate the SG instruction here.
- */
- if (v7m_handle_execute_nsc(cpu)) {
- return;
- }
- break;
- case M_FAKE_FSR_SFAULT:
- /*
- * Various flavours of SecureFault for attempts to execute or
- * access data in the wrong security state.
- */
- switch (cs->exception_index) {
- case EXCP_PREFETCH_ABORT:
- if (env->v7m.secure) {
- env->v7m.sfsr |= R_V7M_SFSR_INVTRAN_MASK;
- qemu_log_mask(CPU_LOG_INT,
- "...really SecureFault with SFSR.INVTRAN\n");
- } else {
- env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
- qemu_log_mask(CPU_LOG_INT,
- "...really SecureFault with SFSR.INVEP\n");
- }
- break;
- case EXCP_DATA_ABORT:
- /* This must be an NS access to S memory */
- env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
- qemu_log_mask(CPU_LOG_INT,
- "...really SecureFault with SFSR.AUVIOL\n");
- break;
- }
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
- break;
- case 0x8: /* External Abort */
- switch (cs->exception_index) {
- case EXCP_PREFETCH_ABORT:
- env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
- qemu_log_mask(CPU_LOG_INT, "...with CFSR.IBUSERR\n");
- break;
- case EXCP_DATA_ABORT:
- env->v7m.cfsr[M_REG_NS] |=
- (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
- env->v7m.bfar = env->exception.vaddress;
- qemu_log_mask(CPU_LOG_INT,
- "...with CFSR.PRECISERR and BFAR 0x%x\n",
- env->v7m.bfar);
- break;
- }
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
- break;
- default:
- /*
- * All other FSR values are either MPU faults or "can't happen
- * for M profile" cases.
- */
- switch (cs->exception_index) {
- case EXCP_PREFETCH_ABORT:
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK;
- qemu_log_mask(CPU_LOG_INT, "...with CFSR.IACCVIOL\n");
- break;
- case EXCP_DATA_ABORT:
- env->v7m.cfsr[env->v7m.secure] |=
- (R_V7M_CFSR_DACCVIOL_MASK | R_V7M_CFSR_MMARVALID_MASK);
- env->v7m.mmfar[env->v7m.secure] = env->exception.vaddress;
- qemu_log_mask(CPU_LOG_INT,
- "...with CFSR.DACCVIOL and MMFAR 0x%x\n",
- env->v7m.mmfar[env->v7m.secure]);
- break;
- }
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM,
- env->v7m.secure);
- break;
- }
- break;
- case EXCP_BKPT:
- if (semihosting_enabled()) {
- int nr;
- nr = arm_lduw_code(env, env->regs[15], arm_sctlr_b(env)) & 0xff;
- if (nr == 0xab) {
- env->regs[15] += 2;
- qemu_log_mask(CPU_LOG_INT,
- "...handling as semihosting call 0x%x\n",
- env->regs[0]);
- env->regs[0] = do_arm_semihosting(env);
- return;
- }
- }
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_DEBUG, false);
- break;
- case EXCP_IRQ:
- break;
- case EXCP_EXCEPTION_EXIT:
- if (env->regs[15] < EXC_RETURN_MIN_MAGIC) {
- /* Must be v8M security extension function return */
- assert(env->regs[15] >= FNC_RETURN_MIN_MAGIC);
- assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
- if (do_v7m_function_return(cpu)) {
- return;
- }
- } else {
- do_v7m_exception_exit(cpu);
- return;
- }
- break;
- case EXCP_LAZYFP:
- /*
- * We already pended the specific exception in the NVIC in the
- * v7m_preserve_fp_state() helper function.
- */
- break;
- default:
- cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
- return; /* Never happens. Keep compiler happy. */
- }
-
- if (arm_feature(env, ARM_FEATURE_V8)) {
- lr = R_V7M_EXCRET_RES1_MASK |
- R_V7M_EXCRET_DCRS_MASK;
- /*
- * The S bit indicates whether we should return to Secure
- * or NonSecure (ie our current state).
- * The ES bit indicates whether we're taking this exception
- * to Secure or NonSecure (ie our target state). We set it
- * later, in v7m_exception_taken().
- * The SPSEL bit is also set in v7m_exception_taken() for v8M.
- * This corresponds to the ARM ARM pseudocode for v8M setting
- * some LR bits in PushStack() and some in ExceptionTaken();
- * the distinction matters for the tailchain cases where we
- * can take an exception without pushing the stack.
- */
- if (env->v7m.secure) {
- lr |= R_V7M_EXCRET_S_MASK;
- }
- if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
- lr |= R_V7M_EXCRET_FTYPE_MASK;
- }
- } else {
- lr = R_V7M_EXCRET_RES1_MASK |
- R_V7M_EXCRET_S_MASK |
- R_V7M_EXCRET_DCRS_MASK |
- R_V7M_EXCRET_FTYPE_MASK |
- R_V7M_EXCRET_ES_MASK;
- if (env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK) {
- lr |= R_V7M_EXCRET_SPSEL_MASK;
- }
- }
- if (!arm_v7m_is_handler_mode(env)) {
- lr |= R_V7M_EXCRET_MODE_MASK;
- }
-
- ignore_stackfaults = v7m_push_stack(cpu);
- v7m_exception_taken(cpu, lr, false, ignore_stackfaults);
-}
-
 /*
 * Function used to synchronize QEMU's AArch64 register set with AArch32
 * register set. This is necessary when switching between AArch32 and AArch64
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
 return phys_addr;
 }

-uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
-{
- uint32_t mask;
- unsigned el = arm_current_el(env);
-
- /* First handle registers which unprivileged can read */
-
- switch (reg) {
- case 0 ... 7: /* xPSR sub-fields */
- mask = 0;
- if ((reg & 1) && el) {
- mask |= XPSR_EXCP; /* IPSR (unpriv. reads as zero) */
- }
- if (!(reg & 4)) {
- mask |= XPSR_NZCV | XPSR_Q; /* APSR */
- if (arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
- mask |= XPSR_GE;
- }
- }
- /* EPSR reads as zero */
- return xpsr_read(env) & mask;
- break;
- case 20: /* CONTROL */
- {
- uint32_t value = env->v7m.control[env->v7m.secure];
- if (!env->v7m.secure) {
- /* SFPA is RAZ/WI from NS; FPCA is stored in the M_REG_S bank */
- value |= env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK;
- }
- return value;
- }
- case 0x94: /* CONTROL_NS */
- /*
- * We have to handle this here because unprivileged Secure code
- * can read the NS CONTROL register.
- */
- if (!env->v7m.secure) {
- return 0;
- }
- return env->v7m.control[M_REG_NS] |
- (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK);
- }
-
- if (el == 0) {
- return 0; /* unprivileged reads others as zero */
- }
-
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
- switch (reg) {
- case 0x88: /* MSP_NS */
- if (!env->v7m.secure) {
- return 0;
- }
- return env->v7m.other_ss_msp;
- case 0x89: /* PSP_NS */
- if (!env->v7m.secure) {
- return 0;
- }
- return env->v7m.other_ss_psp;
- case 0x8a: /* MSPLIM_NS */
- if (!env->v7m.secure) {
- return 0;
- }
- return env->v7m.msplim[M_REG_NS];
- case 0x8b: /* PSPLIM_NS */
- if (!env->v7m.secure) {
- return 0;
- }
- return env->v7m.psplim[M_REG_NS];
- case 0x90: /* PRIMASK_NS */
- if (!env->v7m.secure) {
- return 0;
- }
- return env->v7m.primask[M_REG_NS];
- case 0x91: /* BASEPRI_NS */
- if (!env->v7m.secure) {
- return 0;
- }
- return env->v7m.basepri[M_REG_NS];
- case 0x93: /* FAULTMASK_NS */
- if (!env->v7m.secure) {
- return 0;
- }
- return env->v7m.faultmask[M_REG_NS];
- case 0x98: /* SP_NS */
- {
- /*
- * This gives the non-secure SP selected based on whether we're
- * currently in handler mode or not, using the NS CONTROL.SPSEL.
- */
- bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
-
- if (!env->v7m.secure) {
- return 0;
- }
- if (!arm_v7m_is_handler_mode(env) && spsel) {
- return env->v7m.other_ss_psp;
- } else {
- return env->v7m.other_ss_msp;
- }
- }
- default:
- break;
- }
- }
-
- switch (reg) {
- case 8: /* MSP */
- return v7m_using_psp(env) ? env->v7m.other_sp : env->regs[13];
- case 9: /* PSP */
- return v7m_using_psp(env) ? env->regs[13] : env->v7m.other_sp;
- case 10: /* MSPLIM */
- if (!arm_feature(env, ARM_FEATURE_V8)) {
- goto bad_reg;
- }
- return env->v7m.msplim[env->v7m.secure];
- case 11: /* PSPLIM */
- if (!arm_feature(env, ARM_FEATURE_V8)) {
- goto bad_reg;
- }
- return env->v7m.psplim[env->v7m.secure];
- case 16: /* PRIMASK */
- return env->v7m.primask[env->v7m.secure];
- case 17: /* BASEPRI */
- case 18: /* BASEPRI_MAX */
- return env->v7m.basepri[env->v7m.secure];
- case 19: /* FAULTMASK */
- return env->v7m.faultmask[env->v7m.secure];
- default:
- bad_reg:
- qemu_log_mask(LOG_GUEST_ERROR, "Attempt to read unknown special"
- " register %d\n", reg);
- return 0;
- }
-}
-
-void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
-{
- /*
- * We're passed bits [11..0] of the instruction; extract
- * SYSm and the mask bits.
- * Invalid combinations of SYSm and mask are UNPREDICTABLE;
- * we choose to treat them as if the mask bits were valid.
- * NB that the pseudocode 'mask' variable is bits [11..10],
- * whereas ours is [11..8].
- */
- uint32_t mask = extract32(maskreg, 8, 4);
- uint32_t reg = extract32(maskreg, 0, 8);
- int cur_el = arm_current_el(env);
-
- if (cur_el == 0 && reg > 7 && reg != 20) {
- /*
- * only xPSR sub-fields and CONTROL.SFPA may be written by
- * unprivileged code
- */
- return;
- }
-
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
- switch (reg) {
- case 0x88: /* MSP_NS */
- if (!env->v7m.secure) {
- return;
- }
- env->v7m.other_ss_msp = val;
- return;
- case 0x89: /* PSP_NS */
- if (!env->v7m.secure) {
- return;
- }
- env->v7m.other_ss_psp = val;
- return;
- case 0x8a: /* MSPLIM_NS */
- if (!env->v7m.secure) {
- return;
- }
- env->v7m.msplim[M_REG_NS] = val & ~7;
- return;
- case 0x8b: /* PSPLIM_NS */
- if (!env->v7m.secure) {
- return;
- }
- env->v7m.psplim[M_REG_NS] = val & ~7;
- return;
- case 0x90: /* PRIMASK_NS */
- if (!env->v7m.secure) {
- return;
- }
- env->v7m.primask[M_REG_NS] = val & 1;
- return;
- case 0x91: /* BASEPRI_NS */
- if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
- return;
- }
- env->v7m.basepri[M_REG_NS] = val & 0xff;
- return;
- case 0x93: /* FAULTMASK_NS */
- if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
- return;
- }
- env->v7m.faultmask[M_REG_NS] = val & 1;
- return;
- case 0x94: /* CONTROL_NS */
- if (!env->v7m.secure) {
- return;
- }
- write_v7m_control_spsel_for_secstate(env,
- val & R_V7M_CONTROL_SPSEL_MASK,
- M_REG_NS);
- if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
- env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
- env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
- }
- /*
- * SFPA is RAZ/WI from NS. FPCA is RO if NSACR.CP10 == 0,
- * RES0 if the FPU is not present, and is stored in the S bank
- */
- if (arm_feature(env, ARM_FEATURE_VFP) &&
- extract32(env->v7m.nsacr, 10, 1)) {
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
- env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
- }
- return;
- case 0x98: /* SP_NS */
- {
- /*
- * This gives the non-secure SP selected based on whether we're
- * currently in handler mode or not, using the NS CONTROL.SPSEL.
- */
- bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
- bool is_psp = !arm_v7m_is_handler_mode(env) && spsel;
- uint32_t limit;
-
- if (!env->v7m.secure) {
- return;
- }
-
- limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false];
-
- if (val < limit) {
- CPUState *cs = env_cpu(env);
-
- cpu_restore_state(cs, GETPC(), true);
- raise_exception(env, EXCP_STKOF, 0, 1);
- }
-
- if (is_psp) {
- env->v7m.other_ss_psp = val;
- } else {
- env->v7m.other_ss_msp = val;
- }
- return;
- }
- default:
- break;
- }
- }
-
- switch (reg) {
- case 0 ... 7: /* xPSR sub-fields */
- /* only APSR is actually writable */
- if (!(reg & 4)) {
- uint32_t apsrmask = 0;
-
- if (mask & 8) {
- apsrmask |= XPSR_NZCV | XPSR_Q;
- }
- if ((mask & 4) && arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
- apsrmask |= XPSR_GE;
- }
- xpsr_write(env, val, apsrmask);
- }
- break;
- case 8: /* MSP */
- if (v7m_using_psp(env)) {
- env->v7m.other_sp = val;
- } else {
- env->regs[13] = val;
- }
- break;
- case 9: /* PSP */
- if (v7m_using_psp(env)) {
- env->regs[13] = val;
- } else {
- env->v7m.other_sp = val;
- }
- break;
- case 10: /* MSPLIM */
- if (!arm_feature(env, ARM_FEATURE_V8)) {
- goto bad_reg;
- }
- env->v7m.msplim[env->v7m.secure] = val & ~7;
- break;
- case 11: /* PSPLIM */
- if (!arm_feature(env, ARM_FEATURE_V8)) {
- goto bad_reg;
- }
- env->v7m.psplim[env->v7m.secure] = val & ~7;
- break;
- case 16: /* PRIMASK */
- env->v7m.primask[env->v7m.secure] = val & 1;
- break;
- case 17: /* BASEPRI */
- if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
- goto bad_reg;
- }
- env->v7m.basepri[env->v7m.secure] = val & 0xff;
- break;
- case 18: /* BASEPRI_MAX */
- if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
- goto bad_reg;
- }
- val &= 0xff;
- if (val != 0 && (val < env->v7m.basepri[env->v7m.secure]
- || env->v7m.basepri[env->v7m.secure] == 0)) {
- env->v7m.basepri[env->v7m.secure] = val;
- }
- break;
- case 19: /* FAULTMASK */
- if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
- goto bad_reg;
- }
- env->v7m.faultmask[env->v7m.secure] = val & 1;
- break;
- case 20: /* CONTROL */
- /*
- * Writing to the SPSEL bit only has an effect if we are in
- * thread mode; other bits can be updated by any privileged code.
- * write_v7m_control_spsel() deals with updating the SPSEL bit in
- * env->v7m.control, so we only need update the others.
- * For v7M, we must just ignore explicit writes to SPSEL in handler
- * mode; for v8M the write is permitted but will have no effect.
- * All these bits are writes-ignored from non-privileged code,
- * except for SFPA.
- */
- if (cur_el > 0 && (arm_feature(env, ARM_FEATURE_V8) ||
- !arm_v7m_is_handler_mode(env))) {
- write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
- }
- if (cur_el > 0 && arm_feature(env, ARM_FEATURE_M_MAIN)) {
- env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
- env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
- }
- if (arm_feature(env, ARM_FEATURE_VFP)) {
- /*
- * SFPA is RAZ/WI from NS or if no FPU.
- * FPCA is RO if NSACR.CP10 == 0, RES0 if the FPU is not present.
- * Both are stored in the S bank.
- */
- if (env->v7m.secure) {
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
- env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_SFPA_MASK;
- }
- if (cur_el > 0 &&
- (env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_SECURITY) ||
- extract32(env->v7m.nsacr, 10, 1))) {
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
- env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
- }
- }
- break;
- default:
- bad_reg:
- qemu_log_mask(LOG_GUEST_ERROR, "Attempt to write unknown special"
- " register %d\n", reg);
- return;
- }
-}
-
-uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
-{
- /* Implement the TT instruction. op is bits [7:6] of the insn. */
- bool forceunpriv = op & 1;
- bool alt = op & 2;
- V8M_SAttributes sattrs = {};
- uint32_t tt_resp;
- bool r, rw, nsr, nsrw, mrvalid;
- int prot;
- ARMMMUFaultInfo fi = {};
- MemTxAttrs attrs = {};
- hwaddr phys_addr;
- ARMMMUIdx mmu_idx;
- uint32_t mregion;
- bool targetpriv;
- bool targetsec = env->v7m.secure;
- bool is_subpage;
-
- /*
- * Work out what the security state and privilege level we're
- * interested in is...
- */
- if (alt) {
- targetsec = !targetsec;
- }
-
- if (forceunpriv) {
- targetpriv = false;
- } else {
- targetpriv = arm_v7m_is_handler_mode(env) ||
- !(env->v7m.control[targetsec] & R_V7M_CONTROL_NPRIV_MASK);
- }
-
- /* ...and then figure out which MMU index this is */
- mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targetsec, targetpriv);
-
- /*
- * We know that the MPU and SAU don't care about the access type
- * for our purposes beyond that we don't want to claim to be
- * an insn fetch, so we arbitrarily call this a read.
- */
-
- /*
- * MPU region info only available for privileged or if
- * inspecting the other MPU state.
- */
- if (arm_current_el(env) != 0 || alt) {
- /* We can ignore the return value as prot is always set */
- pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx,
- &phys_addr, &attrs, &prot, &is_subpage,
- &fi, &mregion);
- if (mregion == -1) {
- mrvalid = false;
- mregion = 0;
- } else {
- mrvalid = true;
- }
- r = prot & PAGE_READ;
- rw = prot & PAGE_WRITE;
- } else {
- r = false;
- rw = false;
- mrvalid = false;
- mregion = 0;
- }
-
- if (env->v7m.secure) {
- v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs);
- nsr = sattrs.ns && r;
- nsrw = sattrs.ns && rw;
- } else {
- sattrs.ns = true;
- nsr = false;
- nsrw = false;
- }
-
- tt_resp = (sattrs.iregion << 24) |
- (sattrs.irvalid << 23) |
- ((!sattrs.ns) << 22) |
- (nsrw << 21) |
- (nsr << 20) |
- (rw << 19) |
- (r << 18) |
- (sattrs.srvalid << 17) |
- (mrvalid << 16) |
- (sattrs.sregion << 8) |
- mregion;
-
- return tt_resp;
-}
-
 #endif

 /* Note that signed overflow is undefined in C. The following routines are
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
 return 0;
 }

-ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
- bool secstate, bool priv, bool negpri)
-{
- ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
-
- if (priv) {
- mmu_idx |= ARM_MMU_IDX_M_PRIV;
- }
-
- if (negpri) {
- mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
- }
-
- if (secstate) {
- mmu_idx |= ARM_MMU_IDX_M_S;
- }
-
- return mmu_idx;
-}
-
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
- bool secstate, bool priv)
-{
- bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
-
- return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
-}
-
-/* Return the MMU index for a v7M CPU in the specified security state */
+#ifndef CONFIG_TCG
 ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
 {
- bool priv = arm_current_el(env) != 0;
-
- return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
+ g_assert_not_reached();
 }
+#endif

 ARMMMUIdx arm_mmu_idx(CPUARMState *env)
 {
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * ARM generic helpers.
+ *
+ * This code is licensed under the GNU GPL v2 or later.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+#include "target/arm/idau.h"
+#include "trace.h"
+#include "cpu.h"
+#include "internals.h"
+#include "exec/gdbstub.h"
+#include "exec/helper-proto.h"
+#include "qemu/host-utils.h"
+#include "sysemu/sysemu.h"
+#include "qemu/bitops.h"
+#include "qemu/crc32c.h"
+#include "qemu/qemu-print.h"
+#include "exec/exec-all.h"
+#include <zlib.h> /* For crc32 */
+#include "hw/semihosting/semihost.h"
+#include "sysemu/cpus.h"
+#include "sysemu/kvm.h"
+#include "qemu/range.h"
+#include "qapi/qapi-commands-target.h"
+#include "qapi/error.h"
+#include "qemu/guest-random.h"
+#ifdef CONFIG_TCG
+#include "arm_ldst.h"
+#include "exec/cpu_ldst.h"
+#endif
+
+#ifdef CONFIG_USER_ONLY
+
+/* These should probably raise undefined insn exceptions. */
+void HELPER(v7m_msr)(CPUARMState *env, uint32_t reg, uint32_t val)
+{
+ ARMCPU *cpu = env_archcpu(env);
+
+ cpu_abort(CPU(cpu), "v7m_msr %d\n", reg);
+}
+
+uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
+{
+ ARMCPU *cpu = env_archcpu(env);
+
+ cpu_abort(CPU(cpu), "v7m_mrs %d\n", reg);
+ return 0;
+}
+
+void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
+{
+ /* translate.c should never generate calls here in user-only mode */
+ g_assert_not_reached();
+}
+
+void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
+{
+ /* translate.c should never generate calls here in user-only mode */
+ g_assert_not_reached();
+}
+
+void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
+{
+ /* translate.c should never generate calls here in user-only mode */
+ g_assert_not_reached();
+}
+
+void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
+{
+ /* translate.c should never generate calls here in user-only mode */
+ g_assert_not_reached();
+}
+
+void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
+{
+ /* translate.c should never generate calls here in user-only mode */
+ g_assert_not_reached();
+}
+
+uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
+{
+ /*
+ * The TT instructions can be used by unprivileged code, but in
+ * user-only emulation we don't have the MPU.
+ * Luckily since we know we are NonSecure unprivileged (and that in
+ * turn means that the A flag wasn't specified), all the bits in the
+ * register must be zero:
+ * IREGION: 0 because IRVALID is 0
+ * IRVALID: 0 because NS
+ * S: 0 because NS
+ * NSRW: 0 because NS
+ * NSR: 0 because NS
+ * RW: 0 because unpriv and A flag not set
+ * R: 0 because unpriv and A flag not set
+ * SRVALID: 0 because NS
+ * MRVALID: 0 because unpriv and A flag not set
+ * SREGION: 0 because SRVALID is 0
+ * MREGION: 0 because MRVALID is 0
+ */
+ return 0;
+}
+
+#else
+
+/*
+ * What kind of stack write are we doing? This affects how exceptions
+ * generated during the stacking are treated.
+ */
+typedef enum StackingMode {
+ STACK_NORMAL,
+ STACK_IGNFAULTS,
+ STACK_LAZYFP,
+} StackingMode;
+
+static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
+ ARMMMUIdx mmu_idx, StackingMode mode)
+{
+ CPUState *cs = CPU(cpu);
+ CPUARMState *env = &cpu->env;
+ MemTxAttrs attrs = {};
+ MemTxResult txres;
+ target_ulong page_size;
+ hwaddr physaddr;
+ int prot;
+ ARMMMUFaultInfo fi = {};
+ bool secure = mmu_idx & ARM_MMU_IDX_M_S;
+ int exc;
+ bool exc_secure;
+
+ if (get_phys_addr(env, addr, MMU_DATA_STORE, mmu_idx, &physaddr,
+ &attrs, &prot, &page_size, &fi, NULL)) {
+ /* MPU/SAU lookup failed */
+ if (fi.type == ARMFault_QEMU_SFault) {
+ if (mode == STACK_LAZYFP) {
+ qemu_log_mask(CPU_LOG_INT,
+ "...SecureFault with SFSR.LSPERR "
+ "during lazy stacking\n");
+ env->v7m.sfsr |= R_V7M_SFSR_LSPERR_MASK;
+ } else {
+ qemu_log_mask(CPU_LOG_INT,
+ "...SecureFault with SFSR.AUVIOL "
+ "during stacking\n");
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
+ }
+ env->v7m.sfsr |= R_V7M_SFSR_SFARVALID_MASK;
+ env->v7m.sfar = addr;
+ exc = ARMV7M_EXCP_SECURE;
+ exc_secure = false;
+ } else {
+ if (mode == STACK_LAZYFP) {
+ qemu_log_mask(CPU_LOG_INT,
+ "...MemManageFault with CFSR.MLSPERR\n");
+ env->v7m.cfsr[secure] |= R_V7M_CFSR_MLSPERR_MASK;

new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/gpio/stm32l4x5_gpio.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * STM32L4x5 GPIO (General Purpose Input/Output)
+ *
+ * Copyright (c) 2024 Arnaud Minier <arnaud.minier@telecom-paris.fr>
+ * Copyright (c) 2024 Inès Varhol <ines.varhol@telecom-paris.fr>
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+/*
+ * The reference used is the STMicroElectronics RM0351 Reference manual
+ * for STM32L4x5 and STM32L4x6 advanced Arm ® -based 32-bit MCUs.
+ * https://www.st.com/en/microcontrollers-microprocessors/stm32l4x5/documentation.html
+ */
+
+#ifndef HW_STM32L4X5_GPIO_H
+#define HW_STM32L4X5_GPIO_H
+
+#include "hw/sysbus.h"
+#include "qom/object.h"
+
+#define TYPE_STM32L4X5_GPIO "stm32l4x5-gpio"
+OBJECT_DECLARE_SIMPLE_TYPE(Stm32l4x5GpioState, STM32L4X5_GPIO)
+
+#define GPIO_NUM_PINS 16
+
+struct Stm32l4x5GpioState {
+ SysBusDevice parent_obj;
+
+ MemoryRegion mmio;
+
+ /* GPIO registers */
+ uint32_t moder;
+ uint32_t otyper;
+ uint32_t ospeedr;
+ uint32_t pupdr;
+ uint32_t idr;
+ uint32_t odr;
+ uint32_t lckr;
+ uint32_t afrl;
+ uint32_t afrh;
+ uint32_t ascr;
+
+ /* GPIO registers reset values */
+ uint32_t moder_reset;
+ uint32_t ospeedr_reset;
+ uint32_t pupdr_reset;
+
+ /*
+ * External driving of pins.
+ * The pins can be set externally through the device
+ * anonymous input GPIO lines under certain conditions.
+ * The pin must not be in push-pull output mode,
+ * and can't be set high in open-drain mode.
+ * Pins driven externally and configured to
+ * output mode will in general be "disconnected"
+ * (see `get_gpio_pinmask_to_disconnect()`)
+ */
+ uint16_t disconnected_pins;
+ uint16_t pins_connected_high;
+
+ char *name;
+ Clock *clk;
+ qemu_irq pin[GPIO_NUM_PINS];
+};
+
+#endif
diff --git a/hw/gpio/stm32l4x5_gpio.c b/hw/gpio/stm32l4x5_gpio.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/gpio/stm32l4x5_gpio.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * STM32L4x5 GPIO (General Purpose Input/Output)
+ *
+ * Copyright (c) 2024 Arnaud Minier <arnaud.minier@telecom-paris.fr>
+ * Copyright (c) 2024 Inès Varhol <ines.varhol@telecom-paris.fr>
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+/*
+ * The reference used is the STMicroElectronics RM0351 Reference manual
+ * for STM32L4x5 and STM32L4x6 advanced Arm ® -based 32-bit MCUs.
+ * https://www.st.com/en/microcontrollers-microprocessors/stm32l4x5/documentation.html
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/log.h"
+#include "hw/gpio/stm32l4x5_gpio.h"
+#include "hw/irq.h"
+#include "hw/qdev-clock.h"
+#include "hw/qdev-properties.h"
+#include "qapi/visitor.h"
+#include "qapi/error.h"
+#include "migration/vmstate.h"
+#include "trace.h"
+
+#define GPIO_MODER 0x00
+#define GPIO_OTYPER 0x04
+#define GPIO_OSPEEDR 0x08
+#define GPIO_PUPDR 0x0C
+#define GPIO_IDR 0x10
+#define GPIO_ODR 0x14
+#define GPIO_BSRR 0x18
+#define GPIO_LCKR 0x1C
+#define GPIO_AFRL 0x20
+#define GPIO_AFRH 0x24
+#define GPIO_BRR 0x28
+#define GPIO_ASCR 0x2C
+
+/* 0b11111111_11111111_00000000_00000000 */
+#define RESERVED_BITS_MASK 0xFFFF0000
+
+static void update_gpio_idr(Stm32l4x5GpioState *s);
+
+static bool is_pull_up(Stm32l4x5GpioState *s, unsigned pin)
+{
+ return extract32(s->pupdr, 2 * pin, 2) == 1;
+}
+
+static bool is_pull_down(Stm32l4x5GpioState *s, unsigned pin)
+{
+ return extract32(s->pupdr, 2 * pin, 2) == 2;
+}
+
+static bool is_output(Stm32l4x5GpioState *s, unsigned pin)
+{
+ return extract32(s->moder, 2 * pin, 2) == 1;
+}
+
+static bool is_open_drain(Stm32l4x5GpioState *s, unsigned pin)
+{
+ return extract32(s->otyper, pin, 1) == 1;
+}
+
+static bool is_push_pull(Stm32l4x5GpioState *s, unsigned pin)
+{
+ return extract32(s->otyper, pin, 1) == 0;
+}
+
+static void stm32l4x5_gpio_reset_hold(Object *obj)
+{
+ Stm32l4x5GpioState *s = STM32L4X5_GPIO(obj);
+
+ s->moder = s->moder_reset;
+ s->otyper = 0x00000000;
+ s->ospeedr = s->ospeedr_reset;
+ s->pupdr = s->pupdr_reset;
+ s->idr = 0x00000000;
+ s->odr = 0x00000000;
+ s->lckr = 0x00000000;
+ s->afrl = 0x00000000;
+ s->afrh = 0x00000000;
+ s->ascr = 0x00000000;
+
+ s->disconnected_pins = 0xFFFF;
+ s->pins_connected_high = 0x0000;
+ update_gpio_idr(s);
+}
+
+static void stm32l4x5_gpio_set(void *opaque, int line, int level)
+{
+ Stm32l4x5GpioState *s = opaque;
+ /*
+ * The pin isn't set if line is configured in output mode
+ * except if level is 0 and the output is open-drain.
+ * This way there will be no short-circuit prone situations.
+ */
+ if (is_output(s, line) && !(is_open_drain(s, line) && (level == 0))) {
+ qemu_log_mask(LOG_GUEST_ERROR, "Line %d can't be driven externally\n",
+ line);
+ return;
+ }
+
+ s->disconnected_pins &= ~(1 << line);
+ if (level) {
+ s->pins_connected_high |= (1 << line);
+ } else {
+ s->pins_connected_high &= ~(1 << line);
+ }
+ trace_stm32l4x5_gpio_pins(s->name, s->disconnected_pins,
+ s->pins_connected_high);
+ update_gpio_idr(s);
+}
+
+
+static void update_gpio_idr(Stm32l4x5GpioState *s)
+{
+ uint32_t new_idr_mask = 0;
+ uint32_t new_idr = s->odr;
+ uint32_t old_idr = s->idr;
+ int new_pin_state, old_pin_state;
+
+ for (int i = 0; i < GPIO_NUM_PINS; i++) {
+ if (is_output(s, i)) {
+ if (is_push_pull(s, i)) {
+ new_idr_mask |= (1 << i);
+ } else if (!(s->odr & (1 << i))) {
+ /* open-drain ODR 0 */
+ new_idr_mask |= (1 << i);
+ /* open-drain ODR 1 */
+ } else if (!(s->disconnected_pins & (1 << i)) &&
+ !(s->pins_connected_high & (1 << i))) {
+ /* open-drain ODR 1 with pin connected low */
+ new_idr_mask |= (1 << i);
+ new_idr &= ~(1 << i);
+ /* open-drain ODR 1 with inactive pin */
+ } else if (is_pull_up(s, i)) {
+ new_idr_mask |= (1 << i);
+ } else if (is_pull_down(s, i)) {
+ new_idr_mask |= (1 << i);
+ new_idr &= ~(1 << i);
+ }
+ /*
+ * The only case left is for open-drain ODR 1
+ * with inactive pin without pull-up or pull-down:
+ * the value is floating.
+ */
+ /* input or analog mode with connected pin */
+ } else if (!(s->disconnected_pins & (1 << i))) {
+ if (s->pins_connected_high & (1 << i)) {
+ /* pin high */
+ new_idr_mask |= (1 << i);
+ new_idr |= (1 << i);
+ } else {
+ /* pin low */
+ new_idr_mask |= (1 << i);
+ new_idr &= ~(1 << i);
+ }
+ /* input or analog mode with disconnected pin */
+ } else {
+ if (is_pull_up(s, i)) {
+ /* pull-up */
+ new_idr_mask |= (1 << i);
324
+ new_idr |= (1 << i);
2886
+ } else {
325
+ } else if (is_pull_down(s, i)) {
2887
+ qemu_log_mask(CPU_LOG_INT,
326
+ /* pull-down */
2888
+ "...MemManageFault with CFSR.MSTKERR\n");
327
+ new_idr_mask |= (1 << i);
2889
+ env->v7m.cfsr[secure] |= R_V7M_CFSR_MSTKERR_MASK;
328
+ new_idr &= ~(1 << i);
2890
+ }
329
+ }
2891
+ exc = ARMV7M_EXCP_MEM;
330
+ /*
2892
+ exc_secure = secure;
331
+ * The only case left is for a disconnected pin
332
+ * without pull-up or pull-down :
333
+ * the value is floating.
334
+ */
2893
+ }
335
+ }
2894
+ goto pend_fault;
336
+ }
2895
+ }
337
+
2896
+ address_space_stl_le(arm_addressspace(cs, attrs), physaddr, value,
338
+ s->idr = (old_idr & ~new_idr_mask) | (new_idr & new_idr_mask);
2897
+ attrs, &txres);
339
+ trace_stm32l4x5_gpio_update_idr(s->name, old_idr, s->idr);
2898
+ if (txres != MEMTX_OK) {
340
+
2899
+ /* BusFault trying to write the data */
341
+ for (int i = 0; i < GPIO_NUM_PINS; i++) {
2900
+ if (mode == STACK_LAZYFP) {
342
+ if (new_idr_mask & (1 << i)) {
2901
+ qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.LSPERR\n");
343
+ new_pin_state = (new_idr & (1 << i)) > 0;
2902
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_LSPERR_MASK;
344
+ old_pin_state = (old_idr & (1 << i)) > 0;
2903
+ } else {
345
+ if (new_pin_state > old_pin_state) {
2904
+ qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.STKERR\n");
346
+ qemu_irq_raise(s->pin[i]);
2905
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_STKERR_MASK;
347
+ } else if (new_pin_state < old_pin_state) {
2906
+ }
348
+ qemu_irq_lower(s->pin[i]);
2907
+ exc = ARMV7M_EXCP_BUS;
2908
+ exc_secure = false;
2909
+ goto pend_fault;
2910
+ }
+    return true;
+
+pend_fault:
+    /*
+     * By pending the exception at this point we are making
+     * the IMPDEF choice "overridden exceptions pended" (see the
+     * MergeExcInfo() pseudocode). The other choice would be to not
+     * pend them now and then make a choice about which to throw away
+     * later if we have two derived exceptions.
+     * The only case when we must not pend the exception but instead
+     * throw it away is if we are doing the push of the callee registers
+     * and we've already generated a derived exception (this is indicated
+     * by the caller passing STACK_IGNFAULTS). Even in this case we will
+     * still update the fault status registers.
+     */
+    switch (mode) {
+    case STACK_NORMAL:
+        armv7m_nvic_set_pending_derived(env->nvic, exc, exc_secure);
+        break;
+    case STACK_LAZYFP:
+        armv7m_nvic_set_pending_lazyfp(env->nvic, exc, exc_secure);
+        break;
+    case STACK_IGNFAULTS:
+        break;
+    }
+    return false;
+}
+
+static bool v7m_stack_read(ARMCPU *cpu, uint32_t *dest, uint32_t addr,
+                           ARMMMUIdx mmu_idx)
+{
+    CPUState *cs = CPU(cpu);
+    CPUARMState *env = &cpu->env;
+    MemTxAttrs attrs = {};
+    MemTxResult txres;
+    target_ulong page_size;
+    hwaddr physaddr;
+    int prot;
+    ARMMMUFaultInfo fi = {};
+    bool secure = mmu_idx & ARM_MMU_IDX_M_S;
+    int exc;
+    bool exc_secure;
+    uint32_t value;
+
+    if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr,
+                      &attrs, &prot, &page_size, &fi, NULL)) {
+        /* MPU/SAU lookup failed */
+        if (fi.type == ARMFault_QEMU_SFault) {
+            qemu_log_mask(CPU_LOG_INT,
+                          "...SecureFault with SFSR.AUVIOL during unstack\n");
+            env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
+            env->v7m.sfar = addr;
+            exc = ARMV7M_EXCP_SECURE;
+            exc_secure = false;
+        } else {
+            qemu_log_mask(CPU_LOG_INT,
+                          "...MemManageFault with CFSR.MUNSTKERR\n");
+            env->v7m.cfsr[secure] |= R_V7M_CFSR_MUNSTKERR_MASK;
+            exc = ARMV7M_EXCP_MEM;
+            exc_secure = secure;
+        }
+        goto pend_fault;
+    }
+
+    value = address_space_ldl(arm_addressspace(cs, attrs), physaddr,
+                              attrs, &txres);
+    if (txres != MEMTX_OK) {
+        /* BusFault trying to read the data */
+        qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.UNSTKERR\n");
+        env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_UNSTKERR_MASK;
+        exc = ARMV7M_EXCP_BUS;
+        exc_secure = false;
+        goto pend_fault;
+    }
+
+    *dest = value;
+    return true;
+
+pend_fault:
+    /*
+     * By pending the exception at this point we are making
+     * the IMPDEF choice "overridden exceptions pended" (see the
+     * MergeExcInfo() pseudocode). The other choice would be to not
+     * pend them now and then make a choice about which to throw away
+     * later if we have two derived exceptions.
+     */
+    armv7m_nvic_set_pending(env->nvic, exc, exc_secure);
+    return false;
+}
+
+void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
+{
+    /*
+     * Preserve FP state (because LSPACT was set and we are about
+     * to execute an FP instruction). This corresponds to the
+     * PreserveFPState() pseudocode.
+     * We may throw an exception if the stacking fails.
+     */
+    ARMCPU *cpu = env_archcpu(env);
+    bool is_secure = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
+    bool negpri = !(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_HFRDY_MASK);
+    bool is_priv = !(env->v7m.fpccr[is_secure] & R_V7M_FPCCR_USER_MASK);
+    bool splimviol = env->v7m.fpccr[is_secure] & R_V7M_FPCCR_SPLIMVIOL_MASK;
+    uint32_t fpcar = env->v7m.fpcar[is_secure];
+    bool stacked_ok = true;
+    bool ts = is_secure && (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
+    bool take_exception;
+
+    /* Take the iothread lock as we are going to touch the NVIC */
+    qemu_mutex_lock_iothread();
+
+    /* Check the background context had access to the FPU */
+    if (!v7m_cpacr_pass(env, is_secure, is_priv)) {
+        armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, is_secure);
+        env->v7m.cfsr[is_secure] |= R_V7M_CFSR_NOCP_MASK;
+        stacked_ok = false;
+    } else if (!is_secure && !extract32(env->v7m.nsacr, 10, 1)) {
+        armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
+        env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
+        stacked_ok = false;
+    }
+
+    if (!splimviol && stacked_ok) {
+        /* We only stack if the stack limit wasn't violated */
+        int i;
+        ARMMMUIdx mmu_idx;
+
+        mmu_idx = arm_v7m_mmu_idx_all(env, is_secure, is_priv, negpri);
+        for (i = 0; i < (ts ? 32 : 16); i += 2) {
+            uint64_t dn = *aa32_vfp_dreg(env, i / 2);
+            uint32_t faddr = fpcar + 4 * i;
+            uint32_t slo = extract64(dn, 0, 32);
+            uint32_t shi = extract64(dn, 32, 32);
+
+            if (i >= 16) {
+                faddr += 8; /* skip the slot for the FPSCR */
+            }
+            stacked_ok = stacked_ok &&
+                v7m_stack_write(cpu, faddr, slo, mmu_idx, STACK_LAZYFP) &&
+                v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, STACK_LAZYFP);
+        }
+
+        stacked_ok = stacked_ok &&
+            v7m_stack_write(cpu, fpcar + 0x40,
+                            vfp_get_fpscr(env), mmu_idx, STACK_LAZYFP);
+    }
+
+    /*
+     * We definitely pended an exception, but it's possible that it
+     * might not be able to be taken now. If its priority permits us
+     * to take it now, then we must not update the LSPACT or FP regs,
+     * but instead jump out to take the exception immediately.
+     * If it's just pending and won't be taken until the current
+     * handler exits, then we do update LSPACT and the FP regs.
+     */
+    take_exception = !stacked_ok &&
+        armv7m_nvic_can_take_pending_exception(env->nvic);
+
+    qemu_mutex_unlock_iothread();
+
+    if (take_exception) {
+        raise_exception_ra(env, EXCP_LAZYFP, 0, 1, GETPC());
+    }
+
+    env->v7m.fpccr[is_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
+
+    if (ts) {
+        /* Clear s0 to s31 and the FPSCR */
+        int i;
+
+        for (i = 0; i < 32; i += 2) {
+            *aa32_vfp_dreg(env, i / 2) = 0;
+        }
+        vfp_set_fpscr(env, 0);
+    }
+    /*
+     * Otherwise s0 to s15 and FPSCR are UNKNOWN; we choose to leave them
+     * unchanged.
+     */
+}
+
+/*
+ * Write to v7M CONTROL.SPSEL bit for the specified security bank.
+ * This may change the current stack pointer between Main and Process
+ * stack pointers if it is done for the CONTROL register for the current
+ * security state.
+ */
+static void write_v7m_control_spsel_for_secstate(CPUARMState *env,
+                                                 bool new_spsel,
+                                                 bool secstate)
+{
+    bool old_is_psp = v7m_using_psp(env);
+
+    env->v7m.control[secstate] =
+        deposit32(env->v7m.control[secstate],
+                  R_V7M_CONTROL_SPSEL_SHIFT,
+                  R_V7M_CONTROL_SPSEL_LENGTH, new_spsel);
+
+    if (secstate == env->v7m.secure) {
+        bool new_is_psp = v7m_using_psp(env);
+        uint32_t tmp;
+
+        if (old_is_psp != new_is_psp) {
+            tmp = env->v7m.other_sp;
+            env->v7m.other_sp = env->regs[13];
+            env->regs[13] = tmp;
+        }
+    }
+}
+
+/*
+ * Write to v7M CONTROL.SPSEL bit. This may change the current
+ * stack pointer between Main and Process stack pointers.
+ */
+static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel)
+{
+    write_v7m_control_spsel_for_secstate(env, new_spsel, env->v7m.secure);
+}
+
+void write_v7m_exception(CPUARMState *env, uint32_t new_exc)
+{
+    /*
+     * Write a new value to v7m.exception, thus transitioning into or out
+     * of Handler mode; this may result in a change of active stack pointer.
+     */
+    bool new_is_psp, old_is_psp = v7m_using_psp(env);
+    uint32_t tmp;
+
+    env->v7m.exception = new_exc;
+
+    new_is_psp = v7m_using_psp(env);
+
+    if (old_is_psp != new_is_psp) {
+        tmp = env->v7m.other_sp;
+        env->v7m.other_sp = env->regs[13];
+        env->regs[13] = tmp;
+    }
+}
+
+/* Switch M profile security state between NS and S */
+static void switch_v7m_security_state(CPUARMState *env, bool new_secstate)
+{
+    uint32_t new_ss_msp, new_ss_psp;
+
+    if (env->v7m.secure == new_secstate) {
+        return;
+    }
+
+    /*
+     * All the banked state is accessed by looking at env->v7m.secure
+     * except for the stack pointer; rearrange the SP appropriately.
+     */
+    new_ss_msp = env->v7m.other_ss_msp;
+    new_ss_psp = env->v7m.other_ss_psp;
+
+    if (v7m_using_psp(env)) {
+        env->v7m.other_ss_psp = env->regs[13];
+        env->v7m.other_ss_msp = env->v7m.other_sp;
+    } else {
+        env->v7m.other_ss_msp = env->regs[13];
+        env->v7m.other_ss_psp = env->v7m.other_sp;
+    }
+
+    env->v7m.secure = new_secstate;
+
+    if (v7m_using_psp(env)) {
+        env->regs[13] = new_ss_psp;
+        env->v7m.other_sp = new_ss_msp;
+    } else {
+        env->regs[13] = new_ss_msp;
+        env->v7m.other_sp = new_ss_psp;
+    }
+}
+
+void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
+{
+    /*
+     * Handle v7M BXNS:
+     *  - if the return value is a magic value, do exception return (like BX)
+     *  - otherwise bit 0 of the return value is the target security state
+     */
+    uint32_t min_magic;
+
+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        /* Covers FNC_RETURN and EXC_RETURN magic */
+        min_magic = FNC_RETURN_MIN_MAGIC;
+    } else {
+        /* EXC_RETURN magic only */
+        min_magic = EXC_RETURN_MIN_MAGIC;
+    }
+
+    if (dest >= min_magic) {
+        /*
+         * This is an exception return magic value; put it where
+         * do_v7m_exception_exit() expects and raise EXCEPTION_EXIT.
+         * Note that if we ever add gen_ss_advance() singlestep support to
+         * M profile this should count as an "instruction execution complete"
+         * event (compare gen_bx_excret_final_code()).
+         */
+        env->regs[15] = dest & ~1;
+        env->thumb = dest & 1;
+        HELPER(exception_internal)(env, EXCP_EXCEPTION_EXIT);
+        /* notreached */
+    }
+
+    /* translate.c should have made BXNS UNDEF unless we're secure */
+    assert(env->v7m.secure);
+
+    if (!(dest & 1)) {
+        env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
+    }
+    switch_v7m_security_state(env, dest & 1);
+    env->thumb = 1;
+    env->regs[15] = dest & ~1;
+}
+
+void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
+{
+    /*
+     * Handle v7M BLXNS:
+     *  - bit 0 of the destination address is the target security state
+     */
+
+    /* At this point regs[15] is the address just after the BLXNS */
+    uint32_t nextinst = env->regs[15] | 1;
+    uint32_t sp = env->regs[13] - 8;
+    uint32_t saved_psr;
+
+    /* translate.c will have made BLXNS UNDEF unless we're secure */
+    assert(env->v7m.secure);
+
+    if (dest & 1) {
+        /*
+         * Target is Secure, so this is just a normal BLX,
+         * except that the low bit doesn't indicate Thumb/not.
+         */
+        env->regs[14] = nextinst;
+        env->thumb = 1;
+        env->regs[15] = dest & ~1;
+        return;
+    }
+
+    /* Target is non-secure: first push a stack frame */
+    if (!QEMU_IS_ALIGNED(sp, 8)) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "BLXNS with misaligned SP is UNPREDICTABLE\n");
+    }
+
+    if (sp < v7m_sp_limit(env)) {
+        raise_exception(env, EXCP_STKOF, 0, 1);
+    }
+
+    saved_psr = env->v7m.exception;
+    if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK) {
+        saved_psr |= XPSR_SFPA;
+    }
+
+    /* Note that these stores can throw exceptions on MPU faults */
+    cpu_stl_data(env, sp, nextinst);
+    cpu_stl_data(env, sp + 4, saved_psr);
+
+    env->regs[13] = sp;
+    env->regs[14] = 0xfeffffff;
+    if (arm_v7m_is_handler_mode(env)) {
+        /*
+         * Write a dummy value to IPSR, to avoid leaking the current secure
+         * exception number to non-secure code. This is guaranteed not
+         * to cause write_v7m_exception() to actually change stacks.
+         */
+        write_v7m_exception(env, 1);
+    }
+    env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
+    switch_v7m_security_state(env, 0);
+    env->thumb = 1;
+    env->regs[15] = dest;
+}
+
+static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode,
+                                bool spsel)
+{
+    /*
+     * Return a pointer to the location where we currently store the
+     * stack pointer for the requested security state and thread mode.
+     * This pointer will become invalid if the CPU state is updated
+     * such that the stack pointers are switched around (eg changing
+     * the SPSEL control bit).
+     * Compare the v8M ARM ARM pseudocode LookUpSP_with_security_mode().
+     * Unlike that pseudocode, we require the caller to pass us in the
+     * SPSEL control bit value; this is because we also use this
+     * function in handling of pushing of the callee-saves registers
+     * part of the v8M stack frame (pseudocode PushCalleeStack()),
+     * and in the tailchain codepath the SPSEL bit comes from the exception
+     * return magic LR value from the previous exception. The pseudocode
+     * opencodes the stack-selection in PushCalleeStack(), but we prefer
+     * to make this utility function generic enough to do the job.
+     */
+    bool want_psp = threadmode && spsel;
+
+    if (secure == env->v7m.secure) {
+        if (want_psp == v7m_using_psp(env)) {
+            return &env->regs[13];
+        } else {
+            return &env->v7m.other_sp;
+        }
+    } else {
+        if (want_psp) {
+            return &env->v7m.other_ss_psp;
+        } else {
+            return &env->v7m.other_ss_msp;
+        }
+    }
+}
+
+static bool arm_v7m_load_vector(ARMCPU *cpu, int exc, bool targets_secure,
+                                uint32_t *pvec)
+{
+    CPUState *cs = CPU(cpu);
+    CPUARMState *env = &cpu->env;
+    MemTxResult result;
+    uint32_t addr = env->v7m.vecbase[targets_secure] + exc * 4;
+    uint32_t vector_entry;
+    MemTxAttrs attrs = {};
+    ARMMMUIdx mmu_idx;
+    bool exc_secure;
+
+    mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targets_secure, true);
+
+    /*
+     * We don't do a get_phys_addr() here because the rules for vector
+     * loads are special: they always use the default memory map, and
+     * the default memory map permits reads from all addresses.
+     * Since there's no easy way to pass through to pmsav8_mpu_lookup()
+     * that we want this special case which would always say "yes",
+     * we just do the SAU lookup here followed by a direct physical load.
+     */
+    attrs.secure = targets_secure;
+    attrs.user = false;
+
+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        V8M_SAttributes sattrs = {};
+
+        v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs);
+        if (sattrs.ns) {
+            attrs.secure = false;
+        } else if (!targets_secure) {
+            /* NS access to S memory */
+            goto load_fail;
+        }
+    }
+
+    vector_entry = address_space_ldl(arm_addressspace(cs, attrs), addr,
+                                     attrs, &result);
+    if (result != MEMTX_OK) {
+        goto load_fail;
+    }
+    *pvec = vector_entry;
+    return true;
+
+load_fail:
+    /*
+     * All vector table fetch fails are reported as HardFault, with
+     * HFSR.VECTTBL and .FORCED set. (FORCED is set because
+     * technically the underlying exception is a MemManage or BusFault
+     * that is escalated to HardFault.) This is a terminal exception,
+     * so we will either take the HardFault immediately or else enter
+     * lockup (the latter case is handled in armv7m_nvic_set_pending_derived()).
+     */
+    exc_secure = targets_secure ||
+        !(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK);
+    env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK | R_V7M_HFSR_FORCED_MASK;
+    armv7m_nvic_set_pending_derived(env->nvic, ARMV7M_EXCP_HARD, exc_secure);
+    return false;
+}
+
+static uint32_t v7m_integrity_sig(CPUARMState *env, uint32_t lr)
+{
+    /*
+     * Return the integrity signature value for the callee-saves
+     * stack frame section. @lr is the exception return payload/LR value
+     * whose FType bit forms bit 0 of the signature if FP is present.
+     */
+    uint32_t sig = 0xfefa125a;
+
+    if (!arm_feature(env, ARM_FEATURE_VFP) || (lr & R_V7M_EXCRET_FTYPE_MASK)) {
+        sig |= 1;
+    }
+    return sig;
+}
+
+static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
+                                  bool ignore_faults)
+{
+    /*
+     * For v8M, push the callee-saves register part of the stack frame.
+     * Compare the v8M pseudocode PushCalleeStack().
+     * In the tailchaining case this may not be the current stack.
+     */
+    CPUARMState *env = &cpu->env;
+    uint32_t *frame_sp_p;
+    uint32_t frameptr;
+    ARMMMUIdx mmu_idx;
+    bool stacked_ok;
+    uint32_t limit;
+    bool want_psp;
+    uint32_t sig;
+    StackingMode smode = ignore_faults ? STACK_IGNFAULTS : STACK_NORMAL;
+
+    if (dotailchain) {
+        bool mode = lr & R_V7M_EXCRET_MODE_MASK;
+        bool priv = !(env->v7m.control[M_REG_S] & R_V7M_CONTROL_NPRIV_MASK) ||
+            !mode;
+
+        mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, M_REG_S, priv);
+        frame_sp_p = get_v7m_sp_ptr(env, M_REG_S, mode,
+                                    lr & R_V7M_EXCRET_SPSEL_MASK);
+        want_psp = mode && (lr & R_V7M_EXCRET_SPSEL_MASK);
+        if (want_psp) {
+            limit = env->v7m.psplim[M_REG_S];
+        } else {
+            limit = env->v7m.msplim[M_REG_S];
+        }
+    } else {
+        mmu_idx = arm_mmu_idx(env);
+        frame_sp_p = &env->regs[13];
+        limit = v7m_sp_limit(env);
+    }
+
+    frameptr = *frame_sp_p - 0x28;
+    if (frameptr < limit) {
+        /*
+         * Stack limit failure: set SP to the limit value, and generate
+         * STKOF UsageFault. Stack pushes below the limit must not be
+         * performed. It is IMPDEF whether pushes above the limit are
+         * performed; we choose not to.
+         */
+        qemu_log_mask(CPU_LOG_INT,
+                      "...STKOF during callee-saves register stacking\n");
+        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+                                env->v7m.secure);
+        *frame_sp_p = limit;
+        return true;
+    }
+
+    /*
+     * Write as much of the stack frame as we can. A write failure may
+     * cause us to pend a derived exception.
+     */
+    sig = v7m_integrity_sig(env, lr);
+    stacked_ok =
+        v7m_stack_write(cpu, frameptr, sig, mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx, smode);
+
+    /* Update SP regardless of whether any of the stack accesses failed. */
+    *frame_sp_p = frameptr;
+
+    return !stacked_ok;
+}
+
+static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
+                                bool ignore_stackfaults)
+{
+    /*
+     * Do the "take the exception" parts of exception entry,
+     * but not the pushing of state to the stack. This is
+     * similar to the pseudocode ExceptionTaken() function.
+     */
+    CPUARMState *env = &cpu->env;
+    uint32_t addr;
+    bool targets_secure;
+    int exc;
+    bool push_failed = false;
+
+    armv7m_nvic_get_pending_irq_info(env->nvic, &exc, &targets_secure);
+    qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n",
+                  targets_secure ? "secure" : "nonsecure", exc);
+
+    if (dotailchain) {
+        /* Sanitize LR FType and PREFIX bits */
+        if (!arm_feature(env, ARM_FEATURE_VFP)) {
+            lr |= R_V7M_EXCRET_FTYPE_MASK;
+        }
+        lr = deposit32(lr, 24, 8, 0xff);
+    }
+
+    if (arm_feature(env, ARM_FEATURE_V8)) {
+        if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
+            (lr & R_V7M_EXCRET_S_MASK)) {
+            /*
+             * The background code (the owner of the registers in the
+             * exception frame) is Secure. This means it may either already
+             * have or now needs to push callee-saves registers.
+             */
+            if (targets_secure) {
+                if (dotailchain && !(lr & R_V7M_EXCRET_ES_MASK)) {
+                    /*
+                     * We took an exception from Secure to NonSecure
+                     * (which means the callee-saved registers got stacked)
+                     * and are now tailchaining to a Secure exception.
+                     * Clear DCRS so eventual return from this Secure
+                     * exception unstacks the callee-saved registers.
+                     */
+                    lr &= ~R_V7M_EXCRET_DCRS_MASK;
+                }
+            } else {
+                /*
+                 * We're going to a non-secure exception; push the
+                 * callee-saves registers to the stack now, if they're
+                 * not already saved.
+                 */
+                if (lr & R_V7M_EXCRET_DCRS_MASK &&
+                    !(dotailchain && !(lr & R_V7M_EXCRET_ES_MASK))) {
+                    push_failed = v7m_push_callee_stack(cpu, lr, dotailchain,
+                                                        ignore_stackfaults);
+                }
+                lr |= R_V7M_EXCRET_DCRS_MASK;
+            }
+        }
+
+        lr &= ~R_V7M_EXCRET_ES_MASK;
+        if (targets_secure || !arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            lr |= R_V7M_EXCRET_ES_MASK;
+        }
+        lr &= ~R_V7M_EXCRET_SPSEL_MASK;
+        if (env->v7m.control[targets_secure] & R_V7M_CONTROL_SPSEL_MASK) {
+            lr |= R_V7M_EXCRET_SPSEL_MASK;
+        }
+
+        /*
+         * Clear registers if necessary to prevent non-secure exception
+         * code being able to see register values from secure code.
+         * Where register values become architecturally UNKNOWN we leave
+         * them with their previous values.
+         */
+        if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            if (!targets_secure) {
+                /*
+                 * Always clear the caller-saved registers (they have been
+                 * pushed to the stack earlier in v7m_push_stack()).
+                 * Clear callee-saved registers if the background code is
+                 * Secure (in which case these regs were saved in
+                 * v7m_push_callee_stack()).
+                 */
+                int i;
+
+                for (i = 0; i < 13; i++) {
+                    /* r4..r11 are callee-saves, zero only if EXCRET.S == 1 */
+                    if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) {
+                        env->regs[i] = 0;
+                    }
+                }
+                /* Clear EAPSR */
+                xpsr_write(env, 0, XPSR_NZCV | XPSR_Q | XPSR_GE | XPSR_IT);
+            }
+        }
+    }
+
+    if (push_failed && !ignore_stackfaults) {
+        /*
+         * Derived exception on callee-saves register stacking:
+         * we might now want to take a different exception which
+         * targets a different security state, so try again from the top.
+         */
+        qemu_log_mask(CPU_LOG_INT,
+                      "...derived exception on callee-saves register stacking");
+        v7m_exception_taken(cpu, lr, true, true);
+        return;
+    }
+
+    if (!arm_v7m_load_vector(cpu, exc, targets_secure, &addr)) {
+        /* Vector load failed: derived exception */
+        qemu_log_mask(CPU_LOG_INT, "...derived exception on vector table load");
+        v7m_exception_taken(cpu, lr, true, true);
+        return;
+    }
+
+    /*
+     * Now we've done everything that might cause a derived exception
+     * we can go ahead and activate whichever exception we're going to
+     * take (which might now be the derived exception).
+     */
+    armv7m_nvic_acknowledge_irq(env->nvic);
+
+    /* Switch to target security state -- must do this before writing SPSEL */
+    switch_v7m_security_state(env, targets_secure);
+    write_v7m_control_spsel(env, 0);
+    arm_clear_exclusive(env);
+    /* Clear SFPA and FPCA (has no effect if no FPU) */
+    env->v7m.control[M_REG_S] &=
+        ~(R_V7M_CONTROL_FPCA_MASK | R_V7M_CONTROL_SFPA_MASK);
+    /* Clear IT bits */
+    env->condexec_bits = 0;
+    env->regs[14] = lr;
+    env->regs[15] = addr & 0xfffffffe;
+    env->thumb = addr & 1;
+}
+
+static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
+                             bool apply_splim)
+{
+    /*
+     * Like the pseudocode UpdateFPCCR: save state in FPCAR and FPCCR
+     * that we will need later in order to do lazy FP reg stacking.
+     */
+    bool is_secure = env->v7m.secure;
+    void *nvic = env->nvic;
+    /*
+     * Some bits are unbanked and live always in fpccr[M_REG_S]; some bits
+     * are banked and we want to update the bit in the bank for the
+     * current security state; and in one case we want to specifically
+     * update the NS banked version of a bit even if we are secure.
+     */
+    uint32_t *fpccr_s = &env->v7m.fpccr[M_REG_S];
+    uint32_t *fpccr_ns = &env->v7m.fpccr[M_REG_NS];
+    uint32_t *fpccr = &env->v7m.fpccr[is_secure];
+    bool hfrdy, bfrdy, mmrdy, ns_ufrdy, s_ufrdy, sfrdy, monrdy;
+
+    env->v7m.fpcar[is_secure] = frameptr & ~0x7;
+
+    if (apply_splim && arm_feature(env, ARM_FEATURE_V8)) {
+        bool splimviol;
+        uint32_t splim = v7m_sp_limit(env);
+        bool ign = armv7m_nvic_neg_prio_requested(nvic, is_secure) &&
+            (env->v7m.ccr[is_secure] & R_V7M_CCR_STKOFHFNMIGN_MASK);
+
+        splimviol = !ign && frameptr < splim;
+        *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, SPLIMVIOL, splimviol);
+    }
+
+    *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, LSPACT, 1);
+
+    *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, S, is_secure);
+
+    *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, USER, arm_current_el(env) == 0);
+
+    *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, THREAD,
+                        !arm_v7m_is_handler_mode(env));
+
+    hfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_HARD, false);
+    *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, HFRDY, hfrdy);
+
+            }
+        }
+    }
+}
+
+/*
+ * Return mask of pins that are both configured in output
+ * mode and externally driven (except pins in open-drain
+ * mode externally set to 0).
+ */
+static uint32_t get_gpio_pinmask_to_disconnect(Stm32l4x5GpioState *s)
+{
+    uint32_t pins_to_disconnect = 0;
+    for (int i = 0; i < GPIO_NUM_PINS; i++) {
+        /* for each connected pin in output mode */
+        if (!(s->disconnected_pins & (1 << i)) && is_output(s, i)) {
+            /* if either push-pull or high level */
+            if (is_push_pull(s, i) || s->pins_connected_high & (1 << i)) {
+                pins_to_disconnect |= (1 << i);
+                qemu_log_mask(LOG_GUEST_ERROR,
+                              "Line %d can't be driven externally\n",
+                              i);
+            }
+        }
+    }
+    return pins_to_disconnect;
+}
+
+/*
+ * Set field `disconnected_pins` and call `update_gpio_idr()`
+ */
+static void disconnect_gpio_pins(Stm32l4x5GpioState *s, uint16_t lines)
+{
+    s->disconnected_pins |= lines;
+    trace_stm32l4x5_gpio_pins(s->name, s->disconnected_pins,
+                              s->pins_connected_high);
+    update_gpio_idr(s);
+}
+
+static void disconnected_pins_set(Object *obj, Visitor *v,
+    const char *name, void *opaque, Error **errp)
+{
+    Stm32l4x5GpioState *s = STM32L4X5_GPIO(obj);
+    uint16_t value;
+    if (!visit_type_uint16(v, name, &value, errp)) {
+        return;
+    }
+    disconnect_gpio_pins(s, value);
+}
+
+static void disconnected_pins_get(Object *obj, Visitor *v,
+    const char *name, void *opaque, Error **errp)
+{
+    visit_type_uint16(v, name, (uint16_t *)opaque, errp);
+}
+
+static void clock_freq_get(Object *obj, Visitor *v,
+    const char *name, void *opaque, Error **errp)
+{
+    Stm32l4x5GpioState *s = STM32L4X5_GPIO(obj);
+    uint32_t clock_freq_hz = clock_get_hz(s->clk);
+    visit_type_uint32(v, name, &clock_freq_hz, errp);
+}
+
+static void stm32l4x5_gpio_write(void *opaque, hwaddr addr,
+                                 uint64_t val64, unsigned int size)
+{
+    Stm32l4x5GpioState *s = opaque;
+
+    uint32_t value = val64;
+    trace_stm32l4x5_gpio_write(s->name, addr, val64);
+
+    switch (addr) {
+    case GPIO_MODER:
+        s->moder = value;
+        disconnect_gpio_pins(s, get_gpio_pinmask_to_disconnect(s));
+        qemu_log_mask(LOG_UNIMP,
+                      "%s: Analog and AF modes aren't supported\n\
+                       Analog and AF mode behave like input mode\n",
+                      __func__);
+        return;
+    case GPIO_OTYPER:
+        s->otyper = value & ~RESERVED_BITS_MASK;
+        disconnect_gpio_pins(s, get_gpio_pinmask_to_disconnect(s));
+        return;
+    case GPIO_OSPEEDR:
+        qemu_log_mask(LOG_UNIMP,
+                      "%s: Changing I/O output speed isn't supported\n\
+                       I/O speed is already maximal\n",
+                      __func__);
+        s->ospeedr = value;
+        return;
+    case GPIO_PUPDR:
+        s->pupdr = value;
+        update_gpio_idr(s);
+        return;
+    case GPIO_IDR:
+        qemu_log_mask(LOG_UNIMP,
+                      "%s: GPIO->IDR is read-only\n",
+                      __func__);
+        return;
+    case GPIO_ODR:
+        s->odr = value & ~RESERVED_BITS_MASK;
+        update_gpio_idr(s);
+        return;
+    case GPIO_BSRR: {
+        uint32_t bits_to_reset = (value & RESERVED_BITS_MASK) >> GPIO_NUM_PINS;
+        uint32_t bits_to_set = value & ~RESERVED_BITS_MASK;
+        /* If both BSx and BRx are set, BSx has priority. */
458
+ s->odr &= ~bits_to_reset;
3659
+ bfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_BUS, false);
459
+ s->odr |= bits_to_set;
3660
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, BFRDY, bfrdy);
460
+ update_gpio_idr(s);
3661
+
461
+ return;
3662
+ mmrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_MEM, is_secure);
462
+ }
3663
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, MMRDY, mmrdy);
463
+ case GPIO_LCKR:
3664
+
464
+ qemu_log_mask(LOG_UNIMP,
3665
+ ns_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, false);
465
+ "%s: Locking port bits configuration isn't supported\n",
3666
+ *fpccr_ns = FIELD_DP32(*fpccr_ns, V7M_FPCCR, UFRDY, ns_ufrdy);
466
+ __func__);
3667
+
467
+ s->lckr = value & ~RESERVED_BITS_MASK;
3668
+ monrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_DEBUG, false);
468
+ return;
3669
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, MONRDY, monrdy);
469
+ case GPIO_AFRL:
3670
+
470
+ qemu_log_mask(LOG_UNIMP,
3671
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
471
+ "%s: Alternate functions aren't supported\n",
3672
+ s_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, true);
472
+ __func__);
3673
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, UFRDY, s_ufrdy);
473
+ s->afrl = value;
3674
+
474
+ return;
3675
+ sfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_SECURE, false);
475
+ case GPIO_AFRH:
3676
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, SFRDY, sfrdy);
476
+ qemu_log_mask(LOG_UNIMP,
3677
+ }
477
+ "%s: Alternate functions aren't supported\n",
3678
+}
478
+ __func__);
3679
+
479
+ s->afrh = value;
3680
+void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
480
+ return;
3681
+{
481
+ case GPIO_BRR: {
3682
+ /* fptr is the value of Rn, the frame pointer we store the FP regs to */
482
+ uint32_t bits_to_reset = value & ~RESERVED_BITS_MASK;
3683
+ bool s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
483
+ s->odr &= ~bits_to_reset;
3684
+ bool lspact = env->v7m.fpccr[s] & R_V7M_FPCCR_LSPACT_MASK;
484
+ update_gpio_idr(s);
3685
+
485
+ return;
3686
+ assert(env->v7m.secure);
486
+ }
3687
+
487
+ case GPIO_ASCR:
3688
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
488
+ qemu_log_mask(LOG_UNIMP,
3689
+ return;
489
+ "%s: ADC function isn't supported\n",
3690
+ }
490
+ __func__);
3691
+
491
+ s->ascr = value & ~RESERVED_BITS_MASK;
3692
+ /* Check access to the coprocessor is permitted */
492
+ return;
3693
+ if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
3694
+ raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
3695
+ }
3696
+
3697
+ if (lspact) {
3698
+ /* LSPACT should not be active when there is active FP state */
3699
+ raise_exception_ra(env, EXCP_LSERR, 0, 1, GETPC());
3700
+ }
3701
+
3702
+ if (fptr & 7) {
3703
+ raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
3704
+ }
3705
+
3706
+ /*
3707
+ * Note that we do not use v7m_stack_write() here, because the
3708
+ * accesses should not set the FSR bits for stacking errors if they
3709
+ * fail. (In pseudocode terms, they are AccType_NORMAL, not AccType_STACK
3710
+ * or AccType_LAZYFP). Faults in cpu_stl_data() will throw exceptions
3711
+ * and longjmp out.
3712
+ */
3713
+ if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
3714
+ bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
3715
+ int i;
3716
+
3717
+ for (i = 0; i < (ts ? 32 : 16); i += 2) {
3718
+ uint64_t dn = *aa32_vfp_dreg(env, i / 2);
3719
+ uint32_t faddr = fptr + 4 * i;
3720
+ uint32_t slo = extract64(dn, 0, 32);
3721
+ uint32_t shi = extract64(dn, 32, 32);
3722
+
3723
+ if (i >= 16) {
3724
+ faddr += 8; /* skip the slot for the FPSCR */
3725
+ }
3726
+ cpu_stl_data(env, faddr, slo);
3727
+ cpu_stl_data(env, faddr + 4, shi);
3728
+ }
3729
+ cpu_stl_data(env, fptr + 0x40, vfp_get_fpscr(env));
3730
+
3731
+ /*
3732
+ * If TS is 0 then s0 to s15 and FPSCR are UNKNOWN; we choose to
3733
+ * leave them unchanged, matching our choice in v7m_preserve_fp_state.
3734
+ */
3735
+ if (ts) {
3736
+ for (i = 0; i < 32; i += 2) {
3737
+ *aa32_vfp_dreg(env, i / 2) = 0;
3738
+ }
3739
+ vfp_set_fpscr(env, 0);
3740
+ }
3741
+ } else {
3742
+ v7m_update_fpccr(env, fptr, false);
3743
+ }
3744
+
3745
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
3746
+}
3747
+
3748
+void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
3749
+{
3750
+ /* fptr is the value of Rn, the frame pointer we load the FP regs from */
3751
+ assert(env->v7m.secure);
3752
+
3753
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
3754
+ return;
3755
+ }
3756
+
3757
+ /* Check access to the coprocessor is permitted */
3758
+ if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
3759
+ raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
3760
+ }
3761
+
3762
+ if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
3763
+ /* State in FP is still valid */
3764
+ env->v7m.fpccr[M_REG_S] &= ~R_V7M_FPCCR_LSPACT_MASK;
3765
+ } else {
3766
+ bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
3767
+ int i;
3768
+ uint32_t fpscr;
3769
+
3770
+ if (fptr & 7) {
3771
+ raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
3772
+ }
3773
+
3774
+ for (i = 0; i < (ts ? 32 : 16); i += 2) {
3775
+ uint32_t slo, shi;
3776
+ uint64_t dn;
3777
+ uint32_t faddr = fptr + 4 * i;
3778
+
3779
+ if (i >= 16) {
3780
+ faddr += 8; /* skip the slot for the FPSCR */
3781
+ }
3782
+
3783
+ slo = cpu_ldl_data(env, faddr);
3784
+ shi = cpu_ldl_data(env, faddr + 4);
3785
+
3786
+ dn = (uint64_t) shi << 32 | slo;
3787
+ *aa32_vfp_dreg(env, i / 2) = dn;
3788
+ }
3789
+ fpscr = cpu_ldl_data(env, fptr + 0x40);
3790
+ vfp_set_fpscr(env, fpscr);
3791
+ }
3792
+
3793
+ env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
3794
+}
3795
+
3796
+static bool v7m_push_stack(ARMCPU *cpu)
3797
+{
3798
+ /*
3799
+ * Do the "set up stack frame" part of exception entry,
3800
+ * similar to pseudocode PushStack().
3801
+ * Return true if we generate a derived exception (and so
3802
+ * should ignore further stack faults trying to process
3803
+ * that derived exception.)
3804
+ */
3805
+ bool stacked_ok = true, limitviol = false;
3806
+ CPUARMState *env = &cpu->env;
3807
+ uint32_t xpsr = xpsr_read(env);
3808
+ uint32_t frameptr = env->regs[13];
3809
+ ARMMMUIdx mmu_idx = arm_mmu_idx(env);
3810
+ uint32_t framesize;
3811
+ bool nsacr_cp10 = extract32(env->v7m.nsacr, 10, 1);
3812
+
3813
+ if ((env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) &&
3814
+ (env->v7m.secure || nsacr_cp10)) {
3815
+ if (env->v7m.secure &&
3816
+ env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK) {
3817
+ framesize = 0xa8;
3818
+ } else {
3819
+ framesize = 0x68;
3820
+ }
3821
+ } else {
3822
+ framesize = 0x20;
3823
+ }
3824
+
3825
+ /* Align stack pointer if the guest wants that */
3826
+ if ((frameptr & 4) &&
3827
+ (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKALIGN_MASK)) {
3828
+ frameptr -= 4;
3829
+ xpsr |= XPSR_SPREALIGN;
3830
+ }
3831
+
3832
+ xpsr &= ~XPSR_SFPA;
3833
+ if (env->v7m.secure &&
3834
+ (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
3835
+ xpsr |= XPSR_SFPA;
3836
+ }
3837
+
3838
+ frameptr -= framesize;
3839
+
3840
+ if (arm_feature(env, ARM_FEATURE_V8)) {
3841
+ uint32_t limit = v7m_sp_limit(env);
3842
+
3843
+ if (frameptr < limit) {
3844
+ /*
3845
+ * Stack limit failure: set SP to the limit value, and generate
3846
+ * STKOF UsageFault. Stack pushes below the limit must not be
3847
+ * performed. It is IMPDEF whether pushes above the limit are
3848
+ * performed; we choose not to.
3849
+ */
3850
+ qemu_log_mask(CPU_LOG_INT,
3851
+ "...STKOF during stacking\n");
3852
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
3853
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
3854
+ env->v7m.secure);
3855
+ env->regs[13] = limit;
3856
+ /*
3857
+ * We won't try to perform any further memory accesses but
3858
+ * we must continue through the following code to check for
3859
+ * permission faults during FPU state preservation, and we
3860
+ * must update FPCCR if lazy stacking is enabled.
3861
+ */
3862
+ limitviol = true;
3863
+ stacked_ok = false;
3864
+ }
3865
+ }
3866
+
3867
+ /*
3868
+ * Write as much of the stack frame as we can. If we fail a stack
3869
+ * write this will result in a derived exception being pended
3870
+ * (which may be taken in preference to the one we started with
3871
+ * if it has higher priority).
3872
+ */
3873
+ stacked_ok = stacked_ok &&
3874
+ v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, STACK_NORMAL) &&
3875
+ v7m_stack_write(cpu, frameptr + 4, env->regs[1],
3876
+ mmu_idx, STACK_NORMAL) &&
3877
+ v7m_stack_write(cpu, frameptr + 8, env->regs[2],
3878
+ mmu_idx, STACK_NORMAL) &&
3879
+ v7m_stack_write(cpu, frameptr + 12, env->regs[3],
3880
+ mmu_idx, STACK_NORMAL) &&
3881
+ v7m_stack_write(cpu, frameptr + 16, env->regs[12],
3882
+ mmu_idx, STACK_NORMAL) &&
3883
+ v7m_stack_write(cpu, frameptr + 20, env->regs[14],
3884
+ mmu_idx, STACK_NORMAL) &&
3885
+ v7m_stack_write(cpu, frameptr + 24, env->regs[15],
3886
+ mmu_idx, STACK_NORMAL) &&
3887
+ v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, STACK_NORMAL);
3888
+
3889
+ if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) {
3890
+ /* FPU is active, try to save its registers */
3891
+ bool fpccr_s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
3892
+ bool lspact = env->v7m.fpccr[fpccr_s] & R_V7M_FPCCR_LSPACT_MASK;
3893
+
3894
+ if (lspact && arm_feature(env, ARM_FEATURE_M_SECURITY)) {
3895
+ qemu_log_mask(CPU_LOG_INT,
3896
+ "...SecureFault because LSPACT and FPCA both set\n");
3897
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
3898
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
3899
+ } else if (!env->v7m.secure && !nsacr_cp10) {
3900
+ qemu_log_mask(CPU_LOG_INT,
3901
+ "...Secure UsageFault with CFSR.NOCP because "
3902
+ "NSACR.CP10 prevents stacking FP regs\n");
3903
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
3904
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
3905
+ } else {
3906
+ if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
3907
+ /* Lazy stacking disabled, save registers now */
3908
+ int i;
3909
+ bool cpacr_pass = v7m_cpacr_pass(env, env->v7m.secure,
3910
+ arm_current_el(env) != 0);
3911
+
3912
+ if (stacked_ok && !cpacr_pass) {
3913
+ /*
3914
+ * Take UsageFault if CPACR forbids access. The pseudocode
3915
+ * here does a full CheckCPEnabled() but we know the NSACR
3916
+ * check can never fail as we have already handled that.
3917
+ */
3918
+ qemu_log_mask(CPU_LOG_INT,
3919
+ "...UsageFault with CFSR.NOCP because "
3920
+ "CPACR.CP10 prevents stacking FP regs\n");
3921
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
3922
+ env->v7m.secure);
3923
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
3924
+ stacked_ok = false;
3925
+ }
3926
+
3927
+ for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
3928
+ uint64_t dn = *aa32_vfp_dreg(env, i / 2);
3929
+ uint32_t faddr = frameptr + 0x20 + 4 * i;
3930
+ uint32_t slo = extract64(dn, 0, 32);
3931
+ uint32_t shi = extract64(dn, 32, 32);
3932
+
3933
+ if (i >= 16) {
3934
+ faddr += 8; /* skip the slot for the FPSCR */
3935
+ }
3936
+ stacked_ok = stacked_ok &&
3937
+ v7m_stack_write(cpu, faddr, slo,
3938
+ mmu_idx, STACK_NORMAL) &&
3939
+ v7m_stack_write(cpu, faddr + 4, shi,
3940
+ mmu_idx, STACK_NORMAL);
3941
+ }
3942
+ stacked_ok = stacked_ok &&
3943
+ v7m_stack_write(cpu, frameptr + 0x60,
3944
+ vfp_get_fpscr(env), mmu_idx, STACK_NORMAL);
3945
+ if (cpacr_pass) {
3946
+ for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
3947
+ *aa32_vfp_dreg(env, i / 2) = 0;
3948
+ }
3949
+ vfp_set_fpscr(env, 0);
3950
+ }
3951
+ } else {
3952
+ /* Lazy stacking enabled, save necessary info to stack later */
3953
+ v7m_update_fpccr(env, frameptr + 0x20, true);
3954
+ }
3955
+ }
3956
+ }
3957
+
3958
+ /*
3959
+ * If we broke a stack limit then SP was already updated earlier;
3960
+ * otherwise we update SP regardless of whether any of the stack
3961
+ * accesses failed or we took some other kind of fault.
3962
+ */
3963
+ if (!limitviol) {
3964
+ env->regs[13] = frameptr;
3965
+ }
3966
+
3967
+ return !stacked_ok;
3968
+}
3969
+
3970
+static void do_v7m_exception_exit(ARMCPU *cpu)
3971
+{
3972
+ CPUARMState *env = &cpu->env;
3973
+ uint32_t excret;
3974
+ uint32_t xpsr, xpsr_mask;
3975
+ bool ufault = false;
3976
+ bool sfault = false;
3977
+ bool return_to_sp_process;
3978
+ bool return_to_handler;
3979
+ bool rettobase = false;
3980
+ bool exc_secure = false;
3981
+ bool return_to_secure;
3982
+ bool ftype;
3983
+ bool restore_s16_s31;
3984
+
3985
+ /*
3986
+ * If we're not in Handler mode then jumps to magic exception-exit
3987
+ * addresses don't have magic behaviour. However for the v8M
3988
+ * security extensions the magic secure-function-return has to
3989
+ * work in thread mode too, so to avoid doing an extra check in
3990
+ * the generated code we allow exception-exit magic to also cause the
3991
+ * internal exception and bring us here in thread mode. Correct code
3992
+ * will never try to do this (the following insn fetch will always
3993
+ * fault) so we the overhead of having taken an unnecessary exception
3994
+ * doesn't matter.
3995
+ */
3996
+ if (!arm_v7m_is_handler_mode(env)) {
3997
+ return;
3998
+ }
3999
+
4000
+ /*
4001
+ * In the spec pseudocode ExceptionReturn() is called directly
4002
+ * from BXWritePC() and gets the full target PC value including
4003
+ * bit zero. In QEMU's implementation we treat it as a normal
4004
+ * jump-to-register (which is then caught later on), and so split
4005
+ * the target value up between env->regs[15] and env->thumb in
4006
+ * gen_bx(). Reconstitute it.
4007
+ */
4008
+ excret = env->regs[15];
4009
+ if (env->thumb) {
4010
+ excret |= 1;
4011
+ }
4012
+
4013
+ qemu_log_mask(CPU_LOG_INT, "Exception return: magic PC %" PRIx32
4014
+ " previous exception %d\n",
4015
+ excret, env->v7m.exception);
4016
+
4017
+ if ((excret & R_V7M_EXCRET_RES1_MASK) != R_V7M_EXCRET_RES1_MASK) {
4018
+ qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero high bits in exception "
4019
+ "exit PC value 0x%" PRIx32 " are UNPREDICTABLE\n",
4020
+ excret);
4021
+ }
4022
+
4023
+ ftype = excret & R_V7M_EXCRET_FTYPE_MASK;
4024
+
4025
+ if (!arm_feature(env, ARM_FEATURE_VFP) && !ftype) {
4026
+ qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero FTYPE in exception "
4027
+ "exit PC value 0x%" PRIx32 " is UNPREDICTABLE "
4028
+ "if FPU not present\n",
4029
+ excret);
4030
+ ftype = true;
4031
+ }
4032
+
4033
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
4034
+ /*
4035
+ * EXC_RETURN.ES validation check (R_SMFL). We must do this before
4036
+ * we pick which FAULTMASK to clear.
4037
+ */
4038
+ if (!env->v7m.secure &&
4039
+ ((excret & R_V7M_EXCRET_ES_MASK) ||
4040
+ !(excret & R_V7M_EXCRET_DCRS_MASK))) {
4041
+ sfault = 1;
4042
+ /* For all other purposes, treat ES as 0 (R_HXSR) */
4043
+ excret &= ~R_V7M_EXCRET_ES_MASK;
4044
+ }
4045
+ exc_secure = excret & R_V7M_EXCRET_ES_MASK;
4046
+ }
4047
+
4048
+ if (env->v7m.exception != ARMV7M_EXCP_NMI) {
4049
+ /*
4050
+ * Auto-clear FAULTMASK on return from other than NMI.
4051
+ * If the security extension is implemented then this only
4052
+ * happens if the raw execution priority is >= 0; the
4053
+ * value of the ES bit in the exception return value indicates
4054
+ * which security state's faultmask to clear. (v8M ARM ARM R_KBNF.)
4055
+ */
4056
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
4057
+ if (armv7m_nvic_raw_execution_priority(env->nvic) >= 0) {
4058
+ env->v7m.faultmask[exc_secure] = 0;
4059
+ }
4060
+ } else {
4061
+ env->v7m.faultmask[M_REG_NS] = 0;
4062
+ }
4063
+ }
4064
+
4065
+ switch (armv7m_nvic_complete_irq(env->nvic, env->v7m.exception,
4066
+ exc_secure)) {
4067
+ case -1:
4068
+ /* attempt to exit an exception that isn't active */
4069
+ ufault = true;
4070
+ break;
4071
+ case 0:
4072
+ /* still an irq active now */
4073
+ break;
4074
+ case 1:
4075
+ /*
4076
+ * We returned to base exception level, no nesting.
4077
+ * (In the pseudocode this is written using "NestedActivation != 1"
4078
+ * where we have 'rettobase == false'.)
4079
+ */
4080
+ rettobase = true;
4081
+ break;
4082
+ default:
493
+ default:
4083
+ g_assert_not_reached();
494
+ qemu_log_mask(LOG_GUEST_ERROR,
4084
+ }
495
+ "%s: Bad offset 0x%" HWADDR_PRIx "\n", __func__, addr);
4085
+
496
+ }
4086
+ return_to_handler = !(excret & R_V7M_EXCRET_MODE_MASK);
497
+}
4087
+ return_to_sp_process = excret & R_V7M_EXCRET_SPSEL_MASK;
498
+
4088
+ return_to_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
499
+static uint64_t stm32l4x5_gpio_read(void *opaque, hwaddr addr,
4089
+ (excret & R_V7M_EXCRET_S_MASK);
500
+ unsigned int size)
4090
+
501
+{
4091
+ if (arm_feature(env, ARM_FEATURE_V8)) {
502
+ Stm32l4x5GpioState *s = opaque;
4092
+ if (!arm_feature(env, ARM_FEATURE_M_SECURITY)) {
503
+
4093
+ /*
504
+ trace_stm32l4x5_gpio_read(s->name, addr);
4094
+ * UNPREDICTABLE if S == 1 or DCRS == 0 or ES == 1 (R_XLCP);
505
+
4095
+ * we choose to take the UsageFault.
506
+ switch (addr) {
4096
+ */
507
+ case GPIO_MODER:
4097
+ if ((excret & R_V7M_EXCRET_S_MASK) ||
508
+ return s->moder;
4098
+ (excret & R_V7M_EXCRET_ES_MASK) ||
509
+ case GPIO_OTYPER:
4099
+ !(excret & R_V7M_EXCRET_DCRS_MASK)) {
510
+ return s->otyper;
4100
+ ufault = true;
511
+ case GPIO_OSPEEDR:
4101
+ }
512
+ return s->ospeedr;
4102
+ }
513
+ case GPIO_PUPDR:
4103
+ if (excret & R_V7M_EXCRET_RES0_MASK) {
514
+ return s->pupdr;
4104
+ ufault = true;
515
+ case GPIO_IDR:
4105
+ }
516
+ return s->idr;
4106
+ } else {
517
+ case GPIO_ODR:
4107
+ /* For v7M we only recognize certain combinations of the low bits */
518
+ return s->odr;
4108
+ switch (excret & 0xf) {
519
+ case GPIO_BSRR:
4109
+ case 1: /* Return to Handler */
520
+ return 0;
4110
+ break;
521
+ case GPIO_LCKR:
4111
+ case 13: /* Return to Thread using Process stack */
522
+ return s->lckr;
4112
+ case 9: /* Return to Thread using Main stack */
523
+ case GPIO_AFRL:
4113
+ /*
524
+ return s->afrl;
4114
+ * We only need to check NONBASETHRDENA for v7M, because in
525
+ case GPIO_AFRH:
4115
+ * v8M this bit does not exist (it is RES1).
526
+ return s->afrh;
4116
+ */
527
+ case GPIO_BRR:
4117
+ if (!rettobase &&
528
+ return 0;
4118
+ !(env->v7m.ccr[env->v7m.secure] &
529
+ case GPIO_ASCR:
4119
+ R_V7M_CCR_NONBASETHRDENA_MASK)) {
530
+ return s->ascr;
4120
+ ufault = true;
531
+ default:
4121
+ }
532
+ qemu_log_mask(LOG_GUEST_ERROR,
4122
+ break;
533
+ "%s: Bad offset 0x%" HWADDR_PRIx "\n", __func__, addr);
4123
+ default:
534
+ return 0;
4124
+ ufault = true;
535
+ }
4125
+ }
536
+}
4126
+ }
537
+
4127
+
538
+static const MemoryRegionOps stm32l4x5_gpio_ops = {
4128
+ /*
539
+ .read = stm32l4x5_gpio_read,
4129
+ * Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
540
+ .write = stm32l4x5_gpio_write,
4130
+ * Handler mode (and will be until we write the new XPSR.Interrupt
541
+ .endianness = DEVICE_NATIVE_ENDIAN,
4131
+ * field) this does not switch around the current stack pointer.
542
+ .impl = {
4132
+ * We must do this before we do any kind of tailchaining, including
543
+ .min_access_size = 4,
4133
+ * for the derived exceptions on integrity check failures, or we will
544
+ .max_access_size = 4,
4134
+ * give the guest an incorrect EXCRET.SPSEL value on exception entry.
545
+ .unaligned = false,
4135
+ */
546
+ },
4136
+ write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
547
+ .valid = {
4137
+
548
+ .min_access_size = 4,
4138
+ /*
549
+ .max_access_size = 4,
4139
+ * Clear scratch FP values left in caller saved registers; this
550
+ .unaligned = false,
4140
+ * must happen before any kind of tail chaining.
551
+ },
4141
+ */
552
+};
4142
+ if ((env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_CLRONRET_MASK) &&
553
+
4143
+ (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
554
+static void stm32l4x5_gpio_init(Object *obj)
4144
+ if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
555
+{
4145
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
556
+ Stm32l4x5GpioState *s = STM32L4X5_GPIO(obj);
4146
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
557
+
4147
+ qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
558
+ memory_region_init_io(&s->mmio, obj, &stm32l4x5_gpio_ops, s,
4148
+ "stackframe: error during lazy state deactivation\n");
559
+ TYPE_STM32L4X5_GPIO, 0x400);
4149
+ v7m_exception_taken(cpu, excret, true, false);
560
+
4150
+ return;
561
+ sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->mmio);
4151
+ } else {
562
+
4152
+ /* Clear s0..s15 and FPSCR */
563
+ qdev_init_gpio_out(DEVICE(obj), s->pin, GPIO_NUM_PINS);
4153
+ int i;
564
+ qdev_init_gpio_in(DEVICE(obj), stm32l4x5_gpio_set, GPIO_NUM_PINS);
4154
+
565
+
4155
+ for (i = 0; i < 16; i += 2) {
566
+ s->clk = qdev_init_clock_in(DEVICE(s), "clk", NULL, s, 0);
4156
+ *aa32_vfp_dreg(env, i / 2) = 0;
567
+
4157
+ }
568
+ object_property_add(obj, "disconnected-pins", "uint16",
4158
+ vfp_set_fpscr(env, 0);
569
+ disconnected_pins_get, disconnected_pins_set,
4159
+ }
570
+ NULL, &s->disconnected_pins);
4160
+ }
571
+ object_property_add(obj, "clock-freq-hz", "uint32",
4161
+
572
+ clock_freq_get, NULL, NULL, NULL);
4162
+ if (sfault) {
573
+}
4163
+ env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
574
+
4164
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
575
+static void stm32l4x5_gpio_realize(DeviceState *dev, Error **errp)
4165
+ qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
576
+{
4166
+ "stackframe: failed EXC_RETURN.ES validity check\n");
577
+ Stm32l4x5GpioState *s = STM32L4X5_GPIO(dev);
4167
+ v7m_exception_taken(cpu, excret, true, false);
578
+ if (!clock_has_source(s->clk)) {
4168
+ return;
579
+ error_setg(errp, "GPIO: clk input must be connected");
4169
+ }
580
+ return;
4170
+
581
+ }
4171
+ if (ufault) {
582
+}
4172
+ /*
583
+
4173
+ * Bad exception return: instead of popping the exception
584
+static const VMStateDescription vmstate_stm32l4x5_gpio = {
4174
+ * stack, directly take a usage fault on the current stack.
585
+ .name = TYPE_STM32L4X5_GPIO,
4175
+ */
586
+ .version_id = 1,
4176
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
587
+ .minimum_version_id = 1,
4177
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
588
+ .fields = (VMStateField[]){
4178
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
589
+ VMSTATE_UINT32(moder, Stm32l4x5GpioState),
4179
+ "stackframe: failed exception return integrity check\n");
590
+ VMSTATE_UINT32(otyper, Stm32l4x5GpioState),
4180
+ v7m_exception_taken(cpu, excret, true, false);
591
+ VMSTATE_UINT32(ospeedr, Stm32l4x5GpioState),
4181
+ return;
592
+ VMSTATE_UINT32(pupdr, Stm32l4x5GpioState),
4182
+ }
593
+ VMSTATE_UINT32(idr, Stm32l4x5GpioState),
4183
+
594
+ VMSTATE_UINT32(odr, Stm32l4x5GpioState),
4184
+ /*
595
+ VMSTATE_UINT32(lckr, Stm32l4x5GpioState),
4185
+ * Tailchaining: if there is currently a pending exception that
596
+ VMSTATE_UINT32(afrl, Stm32l4x5GpioState),
4186
+ * is high enough priority to preempt execution at the level we're
597
+ VMSTATE_UINT32(afrh, Stm32l4x5GpioState),
4187
+ * about to return to, then just directly take that exception now,
598
+ VMSTATE_UINT32(ascr, Stm32l4x5GpioState),
4188
+ * avoiding an unstack-and-then-stack. Note that now we have
599
+ VMSTATE_UINT16(disconnected_pins, Stm32l4x5GpioState),
4189
+ * deactivated the previous exception by calling armv7m_nvic_complete_irq()
600
+ VMSTATE_UINT16(pins_connected_high, Stm32l4x5GpioState),
4190
+ * our current execution priority is already the execution priority we are
601
+ VMSTATE_END_OF_LIST()
4191
+ * returning to -- none of the state we would unstack or set based on
602
+ }
4192
+ * the EXCRET value affects it.
603
+};
4193
+ */
604
+
4194
+ if (armv7m_nvic_can_take_pending_exception(env->nvic)) {
605
+static Property stm32l4x5_gpio_properties[] = {
4195
+ qemu_log_mask(CPU_LOG_INT, "...tailchaining to pending exception\n");
606
+ DEFINE_PROP_STRING("name", Stm32l4x5GpioState, name),
4196
+ v7m_exception_taken(cpu, excret, true, false);
607
+ DEFINE_PROP_UINT32("mode-reset", Stm32l4x5GpioState, moder_reset, 0),
4197
+ return;
608
+ DEFINE_PROP_UINT32("ospeed-reset", Stm32l4x5GpioState, ospeedr_reset, 0),
4198
+ }
609
+ DEFINE_PROP_UINT32("pupd-reset", Stm32l4x5GpioState, pupdr_reset, 0),
4199
+
610
+ DEFINE_PROP_END_OF_LIST(),
4200
+ switch_v7m_security_state(env, return_to_secure);
611
+};
4201
+
612
+
613
+static void stm32l4x5_gpio_class_init(ObjectClass *klass, void *data)
614
+{
615
+ DeviceClass *dc = DEVICE_CLASS(klass);
616
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
617
+
618
+ device_class_set_props(dc, stm32l4x5_gpio_properties);
619
+ dc->vmsd = &vmstate_stm32l4x5_gpio;
620
+ dc->realize = stm32l4x5_gpio_realize;
621
+ rc->phases.hold = stm32l4x5_gpio_reset_hold;
622
+}
623
+
624
+static const TypeInfo stm32l4x5_gpio_types[] = {
4202
+ {
625
+ {
4203
+ /*
626
+ .name = TYPE_STM32L4X5_GPIO,
4204
+ * The stack pointer we should be reading the exception frame from
627
+ .parent = TYPE_SYS_BUS_DEVICE,
4205
+ * depends on bits in the magic exception return type value (and
628
+ .instance_size = sizeof(Stm32l4x5GpioState),
4206
+ * for v8M isn't necessarily the stack pointer we will eventually
629
+ .instance_init = stm32l4x5_gpio_init,
4207
+ * end up resuming execution with). Get a pointer to the location
630
+ .class_init = stm32l4x5_gpio_class_init,
4208
+ * in the CPU state struct where the SP we need is currently being
631
+ },
4209
+ * stored; we will use and modify it in place.
632
+};
4210
+ * We use this limited C variable scope so we don't accidentally
633
+
4211
+ * use 'frame_sp_p' after we do something that makes it invalid.
634
+DEFINE_TYPES(stm32l4x5_gpio_types)
4212
+ */
635
diff --git a/hw/gpio/Kconfig b/hw/gpio/Kconfig
4213
+ uint32_t *frame_sp_p = get_v7m_sp_ptr(env,
636
index XXXXXXX..XXXXXXX 100644
4214
+ return_to_secure,
637
--- a/hw/gpio/Kconfig
4215
+ !return_to_handler,
638
+++ b/hw/gpio/Kconfig
4216
+ return_to_sp_process);
639
@@ -XXX,XX +XXX,XX @@ config GPIO_PWR
4217
+ uint32_t frameptr = *frame_sp_p;
640
4218
+ bool pop_ok = true;
641
config SIFIVE_GPIO
4219
+ ARMMMUIdx mmu_idx;
642
bool
4220
+ bool return_to_priv = return_to_handler ||
643
+
4221
+ !(env->v7m.control[return_to_secure] & R_V7M_CONTROL_NPRIV_MASK);
644
+config STM32L4X5_GPIO
4222
+
645
+ bool
4223
+ mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, return_to_secure,
646
diff --git a/hw/gpio/meson.build b/hw/gpio/meson.build
4224
+ return_to_priv);
647
index XXXXXXX..XXXXXXX 100644
4225
+
648
--- a/hw/gpio/meson.build
4226
+ if (!QEMU_IS_ALIGNED(frameptr, 8) &&
649
+++ b/hw/gpio/meson.build
4227
+ arm_feature(env, ARM_FEATURE_V8)) {
650
@@ -XXX,XX +XXX,XX @@ system_ss.add(when: 'CONFIG_RASPI', if_true: files(
4228
+ qemu_log_mask(LOG_GUEST_ERROR,
651
'bcm2835_gpio.c',
4229
+ "M profile exception return with non-8-aligned SP "
652
'bcm2838_gpio.c'
4230
+ "for destination state is UNPREDICTABLE\n");
653
))
4231
+ }
654
+system_ss.add(when: 'CONFIG_STM32L4X5_SOC', if_true: files('stm32l4x5_gpio.c'))
4232
+
655
system_ss.add(when: 'CONFIG_ASPEED_SOC', if_true: files('aspeed_gpio.c'))
4233
+ /* Do we need to pop callee-saved registers? */
656
system_ss.add(when: 'CONFIG_SIFIVE_GPIO', if_true: files('sifive_gpio.c'))
4234
+ if (return_to_secure &&
657
diff --git a/hw/gpio/trace-events b/hw/gpio/trace-events
4235
+ ((excret & R_V7M_EXCRET_ES_MASK) == 0 ||
658
index XXXXXXX..XXXXXXX 100644
4236
+ (excret & R_V7M_EXCRET_DCRS_MASK) == 0)) {
659
--- a/hw/gpio/trace-events
4237
+ uint32_t actual_sig;
660
+++ b/hw/gpio/trace-events
4238
+
661
@@ -XXX,XX +XXX,XX @@ sifive_gpio_update_output_irq(int64_t line, int64_t value) "line %" PRIi64 " val
4239
+ pop_ok = v7m_stack_read(cpu, &actual_sig, frameptr, mmu_idx);
662
# aspeed_gpio.c
4240
+
663
aspeed_gpio_read(uint64_t offset, uint64_t value) "offset: 0x%" PRIx64 " value 0x%" PRIx64
4241
+ if (pop_ok && v7m_integrity_sig(env, excret) != actual_sig) {
664
aspeed_gpio_write(uint64_t offset, uint64_t value) "offset: 0x%" PRIx64 " value 0x%" PRIx64
4242
+ /* Take a SecureFault on the current stack */
665
+
4243
+ env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
666
+# stm32l4x5_gpio.c
4244
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
667
+stm32l4x5_gpio_read(char *gpio, uint64_t addr) "GPIO%s addr: 0x%" PRIx64 " "
4245
+ qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
668
+stm32l4x5_gpio_write(char *gpio, uint64_t addr, uint64_t data) "GPIO%s addr: 0x%" PRIx64 " val: 0x%" PRIx64 ""
4246
+ "stackframe: failed exception return integrity "
669
+stm32l4x5_gpio_update_idr(char *gpio, uint32_t old_idr, uint32_t new_idr) "GPIO%s from: 0x%x to: 0x%x"
4247
+ "signature check\n");
670
+stm32l4x5_gpio_pins(char *gpio, uint16_t disconnected, uint16_t high) "GPIO%s disconnected pins: 0x%x levels: 0x%x"
+ v7m_exception_taken(cpu, excret, true, false);
+ return;
+ }
+
+ pop_ok = pop_ok &&
+ v7m_stack_read(cpu, &env->regs[4], frameptr + 0x8, mmu_idx) &&
+ v7m_stack_read(cpu, &env->regs[5], frameptr + 0xc, mmu_idx) &&
+ v7m_stack_read(cpu, &env->regs[6], frameptr + 0x10, mmu_idx) &&
+ v7m_stack_read(cpu, &env->regs[7], frameptr + 0x14, mmu_idx) &&
+ v7m_stack_read(cpu, &env->regs[8], frameptr + 0x18, mmu_idx) &&
+ v7m_stack_read(cpu, &env->regs[9], frameptr + 0x1c, mmu_idx) &&
+ v7m_stack_read(cpu, &env->regs[10], frameptr + 0x20, mmu_idx) &&
+ v7m_stack_read(cpu, &env->regs[11], frameptr + 0x24, mmu_idx);
+
+ frameptr += 0x28;
+ }
+
+ /* Pop registers */
+ pop_ok = pop_ok &&
+ v7m_stack_read(cpu, &env->regs[0], frameptr, mmu_idx) &&
+ v7m_stack_read(cpu, &env->regs[1], frameptr + 0x4, mmu_idx) &&
+ v7m_stack_read(cpu, &env->regs[2], frameptr + 0x8, mmu_idx) &&
+ v7m_stack_read(cpu, &env->regs[3], frameptr + 0xc, mmu_idx) &&
+ v7m_stack_read(cpu, &env->regs[12], frameptr + 0x10, mmu_idx) &&
+ v7m_stack_read(cpu, &env->regs[14], frameptr + 0x14, mmu_idx) &&
+ v7m_stack_read(cpu, &env->regs[15], frameptr + 0x18, mmu_idx) &&
+ v7m_stack_read(cpu, &xpsr, frameptr + 0x1c, mmu_idx);
+
+ if (!pop_ok) {
+ /*
+ * v7m_stack_read() pended a fault, so take it (as a tail
+ * chained exception on the same stack frame)
+ */
+ qemu_log_mask(CPU_LOG_INT, "...derived exception on unstacking\n");
+ v7m_exception_taken(cpu, excret, true, false);
+ return;
+ }
+
+ /*
+ * Returning from an exception with a PC with bit 0 set is defined
+ * behaviour on v8M (bit 0 is ignored), but for v7M it was specified
+ * to be UNPREDICTABLE. In practice actual v7M hardware seems to ignore
+ * the lsbit, and there are several RTOSes out there which incorrectly
+ * assume the r15 in the stack frame should be a Thumb-style "lsbit
+ * indicates ARM/Thumb" value, so ignore the bit on v7M as well, but
+ * complain about the badly behaved guest.
+ */
+ if (env->regs[15] & 1) {
+ env->regs[15] &= ~1U;
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "M profile return from interrupt with misaligned "
+ "PC is UNPREDICTABLE on v7M\n");
+ }
+ }
+
+ if (arm_feature(env, ARM_FEATURE_V8)) {
+ /*
+ * For v8M we have to check whether the xPSR exception field
+ * matches the EXCRET value for return to handler/thread
+ * before we commit to changing the SP and xPSR.
+ */
+ bool will_be_handler = (xpsr & XPSR_EXCP) != 0;
+ if (return_to_handler != will_be_handler) {
+ /*
+ * Take an INVPC UsageFault on the current stack.
+ * By this point we will have switched to the security state
+ * for the background state, so this UsageFault will target
+ * that state.
+ */
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+ env->v7m.secure);
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
+ "stackframe: failed exception return integrity "
+ "check\n");
+ v7m_exception_taken(cpu, excret, true, false);
+ return;
+ }
+ }
+
+ if (!ftype) {
+ /* FP present and we need to handle it */
+ if (!return_to_secure &&
+ (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK)) {
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
+ qemu_log_mask(CPU_LOG_INT,
+ "...taking SecureFault on existing stackframe: "
+ "Secure LSPACT set but exception return is "
+ "not to secure state\n");
+ v7m_exception_taken(cpu, excret, true, false);
+ return;
+ }
+
+ restore_s16_s31 = return_to_secure &&
+ (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
+
+ if (env->v7m.fpccr[return_to_secure] & R_V7M_FPCCR_LSPACT_MASK) {
+ /* State in FPU is still valid, just clear LSPACT */
+ env->v7m.fpccr[return_to_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
+ } else {
+ int i;
+ uint32_t fpscr;
+ bool cpacr_pass, nsacr_pass;
+
+ cpacr_pass = v7m_cpacr_pass(env, return_to_secure,
+ return_to_priv);
+ nsacr_pass = return_to_secure ||
+ extract32(env->v7m.nsacr, 10, 1);
+
+ if (!cpacr_pass) {
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+ return_to_secure);
+ env->v7m.cfsr[return_to_secure] |= R_V7M_CFSR_NOCP_MASK;
+ qemu_log_mask(CPU_LOG_INT,
+ "...taking UsageFault on existing "
+ "stackframe: CPACR.CP10 prevents unstacking "
+ "FP regs\n");
+ v7m_exception_taken(cpu, excret, true, false);
+ return;
+ } else if (!nsacr_pass) {
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_INVPC_MASK;
+ qemu_log_mask(CPU_LOG_INT,
+ "...taking Secure UsageFault on existing "
+ "stackframe: NSACR.CP10 prevents unstacking "
+ "FP regs\n");
+ v7m_exception_taken(cpu, excret, true, false);
+ return;
+ }
+
+ for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
+ uint32_t slo, shi;
+ uint64_t dn;
+ uint32_t faddr = frameptr + 0x20 + 4 * i;
+
+ if (i >= 16) {
+ faddr += 8; /* Skip the slot for the FPSCR */
+ }
+
+ pop_ok = pop_ok &&
+ v7m_stack_read(cpu, &slo, faddr, mmu_idx) &&
+ v7m_stack_read(cpu, &shi, faddr + 4, mmu_idx);
+
+ if (!pop_ok) {
+ break;
+ }
+
+ dn = (uint64_t)shi << 32 | slo;
+ *aa32_vfp_dreg(env, i / 2) = dn;
+ }
+ pop_ok = pop_ok &&
+ v7m_stack_read(cpu, &fpscr, frameptr + 0x60, mmu_idx);
+ if (pop_ok) {
+ vfp_set_fpscr(env, fpscr);
+ }
+ if (!pop_ok) {
+ /*
+ * These regs are 0 if security extension present;
+ * otherwise merely UNKNOWN. We zero always.
+ */
+ for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
+ *aa32_vfp_dreg(env, i / 2) = 0;
+ }
+ vfp_set_fpscr(env, 0);
+ }
+ }
+ }
+ env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
+ V7M_CONTROL, FPCA, !ftype);
+
+ /* Commit to consuming the stack frame */
+ frameptr += 0x20;
+ if (!ftype) {
+ frameptr += 0x48;
+ if (restore_s16_s31) {
+ frameptr += 0x40;
+ }
+ }
+ /*
+ * Undo stack alignment (the SPREALIGN bit indicates that the original
+ * pre-exception SP was not 8-aligned and we added a padding word to
+ * align it, so we undo this by ORing in the bit that increases it
+ * from the current 8-aligned value to the 8-unaligned value. (Adding 4
+ * would work too but a logical OR is how the pseudocode specifies it.)
+ */
+ if (xpsr & XPSR_SPREALIGN) {
+ frameptr |= 4;
+ }
+ *frame_sp_p = frameptr;
+ }
+
+ xpsr_mask = ~(XPSR_SPREALIGN | XPSR_SFPA);
+ if (!arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
+ xpsr_mask &= ~XPSR_GE;
+ }
+ /* This xpsr_write() will invalidate frame_sp_p as it may switch stack */
+ xpsr_write(env, xpsr, xpsr_mask);
+
+ if (env->v7m.secure) {
+ bool sfpa = xpsr & XPSR_SFPA;
+
+ env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
+ V7M_CONTROL, SFPA, sfpa);
+ }
+
+ /*
+ * The restored xPSR exception field will be zero if we're
+ * resuming in Thread mode. If that doesn't match what the
+ * exception return excret specified then this is a UsageFault.
+ * v7M requires we make this check here; v8M did it earlier.
+ */
+ if (return_to_handler != arm_v7m_is_handler_mode(env)) {
+ /*
+ * Take an INVPC UsageFault by pushing the stack again;
+ * we know we're v7M so this is never a Secure UsageFault.
+ */
+ bool ignore_stackfaults;
+
+ assert(!arm_feature(env, ARM_FEATURE_V8));
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
+ ignore_stackfaults = v7m_push_stack(cpu);
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: "
+ "failed exception return integrity check\n");
+ v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
+ return;
+ }
+
+ /* Otherwise, we have a successful exception exit. */
+ arm_clear_exclusive(env);
+ qemu_log_mask(CPU_LOG_INT, "...successful exception return\n");
+}
+
+static bool do_v7m_function_return(ARMCPU *cpu)
+{
+ /*
+ * v8M security extensions magic function return.
+ * We may either:
+ * (1) throw an exception (longjump)
+ * (2) return true if we successfully handled the function return
+ * (3) return false if we failed a consistency check and have
+ * pended a UsageFault that needs to be taken now
+ *
+ * At this point the magic return value is split between env->regs[15]
+ * and env->thumb. We don't bother to reconstitute it because we don't
+ * need it (all values are handled the same way).
+ */
+ CPUARMState *env = &cpu->env;
+ uint32_t newpc, newpsr, newpsr_exc;
+
+ qemu_log_mask(CPU_LOG_INT, "...really v7M secure function return\n");
+
+ {
+ bool threadmode, spsel;
+ TCGMemOpIdx oi;
+ ARMMMUIdx mmu_idx;
+ uint32_t *frame_sp_p;
+ uint32_t frameptr;
+
+ /* Pull the return address and IPSR from the Secure stack */
+ threadmode = !arm_v7m_is_handler_mode(env);
+ spsel = env->v7m.control[M_REG_S] & R_V7M_CONTROL_SPSEL_MASK;
+
+ frame_sp_p = get_v7m_sp_ptr(env, true, threadmode, spsel);
+ frameptr = *frame_sp_p;
+
+ /*
+ * These loads may throw an exception (for MPU faults). We want to
+ * do them as secure, so work out what MMU index that is.
+ */
+ mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
+ oi = make_memop_idx(MO_LE, arm_to_core_mmu_idx(mmu_idx));
+ newpc = helper_le_ldul_mmu(env, frameptr, oi, 0);
+ newpsr = helper_le_ldul_mmu(env, frameptr + 4, oi, 0);
+
+ /* Consistency checks on new IPSR */
+ newpsr_exc = newpsr & XPSR_EXCP;
+ if (!((env->v7m.exception == 0 && newpsr_exc == 0) ||
+ (env->v7m.exception == 1 && newpsr_exc != 0))) {
+ /* Pend the fault and tell our caller to take it */
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+ env->v7m.secure);
+ qemu_log_mask(CPU_LOG_INT,
+ "...taking INVPC UsageFault: "
+ "IPSR consistency check failed\n");
+ return false;
+ }
+
+ *frame_sp_p = frameptr + 8;
+ }
+
+ /* This invalidates frame_sp_p */
+ switch_v7m_security_state(env, true);
+ env->v7m.exception = newpsr_exc;
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
+ if (newpsr & XPSR_SFPA) {
+ env->v7m.control[M_REG_S] |= R_V7M_CONTROL_SFPA_MASK;
+ }
+ xpsr_write(env, 0, XPSR_IT);
+ env->thumb = newpc & 1;
+ env->regs[15] = newpc & ~1;
+
+ qemu_log_mask(CPU_LOG_INT, "...function return successful\n");
+ return true;
+}
+
+static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
+ uint32_t addr, uint16_t *insn)
+{
+ /*
+ * Load a 16-bit portion of a v7M instruction, returning true on success,
+ * or false on failure (in which case we will have pended the appropriate
+ * exception).
+ * We need to do the instruction fetch's MPU and SAU checks
+ * like this because there is no MMU index that would allow
+ * doing the load with a single function call. Instead we must
+ * first check that the security attributes permit the load
+ * and that they don't mismatch on the two halves of the instruction,
+ * and then we do the load as a secure load (ie using the security
+ * attributes of the address, not the CPU, as architecturally required).
+ */
+ CPUState *cs = CPU(cpu);
+ CPUARMState *env = &cpu->env;
+ V8M_SAttributes sattrs = {};
+ MemTxAttrs attrs = {};
+ ARMMMUFaultInfo fi = {};
+ MemTxResult txres;
+ target_ulong page_size;
+ hwaddr physaddr;
+ int prot;
+
+ v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, &sattrs);
+ if (!sattrs.nsc || sattrs.ns) {
+ /*
+ * This must be the second half of the insn, and it straddles a
+ * region boundary with the second half not being S&NSC.
+ */
+ env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+ qemu_log_mask(CPU_LOG_INT,
+ "...really SecureFault with SFSR.INVEP\n");
+ return false;
+ }
+ if (get_phys_addr(env, addr, MMU_INST_FETCH, mmu_idx,
+ &physaddr, &attrs, &prot, &page_size, &fi, NULL)) {
+ /* the MPU lookup failed */
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK;
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, env->v7m.secure);
+ qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n");
+ return false;
+ }
+ *insn = address_space_lduw_le(arm_addressspace(cs, attrs), physaddr,
+ attrs, &txres);
+ if (txres != MEMTX_OK) {
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
+ qemu_log_mask(CPU_LOG_INT, "...really BusFault with CFSR.IBUSERR\n");
+ return false;
+ }
+ return true;
+}
+
+static bool v7m_handle_execute_nsc(ARMCPU *cpu)
+{
+ /*
+ * Check whether this attempt to execute code in a Secure & NS-Callable
+ * memory region is for an SG instruction; if so, then emulate the
+ * effect of the SG instruction and return true. Otherwise pend
+ * the correct kind of exception and return false.
+ */
+ CPUARMState *env = &cpu->env;
+ ARMMMUIdx mmu_idx;
+ uint16_t insn;
+
+ /*
+ * We should never get here unless get_phys_addr_pmsav8() caused
+ * an exception for NS executing in S&NSC memory.
+ */
+ assert(!env->v7m.secure);
+ assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
+
+ /* We want to do the MPU lookup as secure; work out what mmu_idx that is */
+ mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
+
+ if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) {
+ return false;
+ }
+
+ if (!env->thumb) {
+ goto gen_invep;
+ }
+
+ if (insn != 0xe97f) {
+ /*
+ * Not an SG instruction first half (we choose the IMPDEF
+ * early-SG-check option).
+ */
+ goto gen_invep;
+ }
+
+ if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) {
+ return false;
+ }
+
+ if (insn != 0xe97f) {
+ /*
+ * Not an SG instruction second half (yes, both halves of the SG
+ * insn have the same hex value)
+ */
+ goto gen_invep;
+ }
+
+ /*
+ * OK, we have confirmed that we really have an SG instruction.
+ * We know we're NS in S memory so don't need to repeat those checks.
+ */
+ qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
+ ", executing it\n", env->regs[15]);
+ env->regs[14] &= ~1;
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
+ switch_v7m_security_state(env, true);
+ xpsr_write(env, 0, XPSR_IT);
+ env->regs[15] += 4;
+ return true;
+
+gen_invep:
+ env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+ qemu_log_mask(CPU_LOG_INT,
+ "...really SecureFault with SFSR.INVEP\n");
+ return false;
+}
+
+void arm_v7m_cpu_do_interrupt(CPUState *cs)
+{
+ ARMCPU *cpu = ARM_CPU(cs);
+ CPUARMState *env = &cpu->env;
+ uint32_t lr;
+ bool ignore_stackfaults;
+
+ arm_log_exception(cs->exception_index);
+
+ /*
+ * For exceptions we just mark as pending on the NVIC, and let that
+ * handle it.
+ */
+ switch (cs->exception_index) {
+ case EXCP_UDEF:
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNDEFINSTR_MASK;
+ break;
+ case EXCP_NOCP:
+ {
+ /*
+ * NOCP might be directed to something other than the current
+ * security state if this fault is because of NSACR; we indicate
+ * the target security state using exception.target_el.
+ */
+ int target_secstate;
+
+ if (env->exception.target_el == 3) {
+ target_secstate = M_REG_S;
+ } else {
+ target_secstate = env->v7m.secure;
+ }
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, target_secstate);
+ env->v7m.cfsr[target_secstate] |= R_V7M_CFSR_NOCP_MASK;
+ break;
+ }
+ case EXCP_INVSTATE:
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVSTATE_MASK;
+ break;
+ case EXCP_STKOF:
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
+ break;
+ case EXCP_LSERR:
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
+ break;
+ case EXCP_UNALIGNED:
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
+ break;
+ case EXCP_SWI:
+ /* The PC already points to the next instruction. */
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure);
+ break;
+ case EXCP_PREFETCH_ABORT:
+ case EXCP_DATA_ABORT:
+ /*
+ * Note that for M profile we don't have a guest facing FSR, but
+ * the env->exception.fsr will be populated by the code that
+ * raises the fault, in the A profile short-descriptor format.
+ */
+ switch (env->exception.fsr & 0xf) {
+ case M_FAKE_FSR_NSC_EXEC:
+ /*
+ * Exception generated when we try to execute code at an address
+ * which is marked as Secure & Non-Secure Callable and the CPU
+ * is in the Non-Secure state. The only instruction which can
+ * be executed like this is SG (and that only if both halves of
+ * the SG instruction have the same security attributes.)
+ * Everything else must generate an INVEP SecureFault, so we
+ * emulate the SG instruction here.
+ */
+ if (v7m_handle_execute_nsc(cpu)) {
+ return;
+ }
+ break;
+ case M_FAKE_FSR_SFAULT:
+ /*
+ * Various flavours of SecureFault for attempts to execute or
+ * access data in the wrong security state.
+ */
+ switch (cs->exception_index) {
+ case EXCP_PREFETCH_ABORT:
+ if (env->v7m.secure) {
+ env->v7m.sfsr |= R_V7M_SFSR_INVTRAN_MASK;
+ qemu_log_mask(CPU_LOG_INT,
+ "...really SecureFault with SFSR.INVTRAN\n");
+ } else {
+ env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
+ qemu_log_mask(CPU_LOG_INT,
+ "...really SecureFault with SFSR.INVEP\n");
+ }
+ break;
+ case EXCP_DATA_ABORT:
+ /* This must be an NS access to S memory */
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
+ qemu_log_mask(CPU_LOG_INT,
+ "...really SecureFault with SFSR.AUVIOL\n");
+ break;
+ }
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+ break;
+ case 0x8: /* External Abort */
+ switch (cs->exception_index) {
+ case EXCP_PREFETCH_ABORT:
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
+ qemu_log_mask(CPU_LOG_INT, "...with CFSR.IBUSERR\n");
+ break;
+ case EXCP_DATA_ABORT:
+ env->v7m.cfsr[M_REG_NS] |=
+ (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
+ env->v7m.bfar = env->exception.vaddress;
+ qemu_log_mask(CPU_LOG_INT,
+ "...with CFSR.PRECISERR and BFAR 0x%x\n",
+ env->v7m.bfar);
+ break;
+ }
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
+ break;
+ default:
+ /*
+ * All other FSR values are either MPU faults or "can't happen
+ * for M profile" cases.
+ */
+ switch (cs->exception_index) {
+ case EXCP_PREFETCH_ABORT:
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK;
+ qemu_log_mask(CPU_LOG_INT, "...with CFSR.IACCVIOL\n");
+ break;
+ case EXCP_DATA_ABORT:
+ env->v7m.cfsr[env->v7m.secure] |=
+ (R_V7M_CFSR_DACCVIOL_MASK | R_V7M_CFSR_MMARVALID_MASK);
+ env->v7m.mmfar[env->v7m.secure] = env->exception.vaddress;
+ qemu_log_mask(CPU_LOG_INT,
+ "...with CFSR.DACCVIOL and MMFAR 0x%x\n",
+ env->v7m.mmfar[env->v7m.secure]);
+ break;
+ }
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM,
+ env->v7m.secure);
+ break;
+ }
+ break;
+ case EXCP_BKPT:
+ if (semihosting_enabled()) {
+ int nr;
+ nr = arm_lduw_code(env, env->regs[15], arm_sctlr_b(env)) & 0xff;
+ if (nr == 0xab) {
+ env->regs[15] += 2;
+ qemu_log_mask(CPU_LOG_INT,
+ "...handling as semihosting call 0x%x\n",
+ env->regs[0]);
+ env->regs[0] = do_arm_semihosting(env);
+ return;
+ }
+ }
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_DEBUG, false);
+ break;
+ case EXCP_IRQ:
+ break;
+ case EXCP_EXCEPTION_EXIT:
+ if (env->regs[15] < EXC_RETURN_MIN_MAGIC) {
+ /* Must be v8M security extension function return */
+ assert(env->regs[15] >= FNC_RETURN_MIN_MAGIC);
+ assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
+ if (do_v7m_function_return(cpu)) {
+ return;
+ }
+ } else {
+ do_v7m_exception_exit(cpu);
+ return;
+ }
+ break;
+ case EXCP_LAZYFP:
+ /*
+ * We already pended the specific exception in the NVIC in the
+ * v7m_preserve_fp_state() helper function.
+ */
+ break;
+ default:
+ cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
+ return; /* Never happens. Keep compiler happy. */
+ }
+
+ if (arm_feature(env, ARM_FEATURE_V8)) {
+ lr = R_V7M_EXCRET_RES1_MASK |
+ R_V7M_EXCRET_DCRS_MASK;
+ /*
+ * The S bit indicates whether we should return to Secure
+ * or NonSecure (ie our current state).
+ * The ES bit indicates whether we're taking this exception
+ * to Secure or NonSecure (ie our target state). We set it
+ * later, in v7m_exception_taken().
+ * The SPSEL bit is also set in v7m_exception_taken() for v8M.
+ * This corresponds to the ARM ARM pseudocode for v8M setting
+ * some LR bits in PushStack() and some in ExceptionTaken();
+ * the distinction matters for the tailchain cases where we
+ * can take an exception without pushing the stack.
+ */
+ if (env->v7m.secure) {
+ lr |= R_V7M_EXCRET_S_MASK;
+ }
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
+ lr |= R_V7M_EXCRET_FTYPE_MASK;
+ }
+ } else {
+ lr = R_V7M_EXCRET_RES1_MASK |
+ R_V7M_EXCRET_S_MASK |
+ R_V7M_EXCRET_DCRS_MASK |
+ R_V7M_EXCRET_FTYPE_MASK |
+ R_V7M_EXCRET_ES_MASK;
+ if (env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK) {
+ lr |= R_V7M_EXCRET_SPSEL_MASK;
+ }
+ }
+ if (!arm_v7m_is_handler_mode(env)) {
+ lr |= R_V7M_EXCRET_MODE_MASK;
+ }
+
+ ignore_stackfaults = v7m_push_stack(cpu);
+ v7m_exception_taken(cpu, lr, false, ignore_stackfaults);
+}
+
+uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
+{
+ uint32_t mask;
+ unsigned el = arm_current_el(env);
+
+ /* First handle registers which unprivileged can read */
+
+ switch (reg) {
+ case 0 ... 7: /* xPSR sub-fields */
+ mask = 0;
+ if ((reg & 1) && el) {
+ mask |= XPSR_EXCP; /* IPSR (unpriv. reads as zero) */
+ }
+ if (!(reg & 4)) {
+ mask |= XPSR_NZCV | XPSR_Q; /* APSR */
+ if (arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
+ mask |= XPSR_GE;
+ }
+ }
+ /* EPSR reads as zero */
+ return xpsr_read(env) & mask;
+ break;
+ case 20: /* CONTROL */
+ {
+ uint32_t value = env->v7m.control[env->v7m.secure];
+ if (!env->v7m.secure) {
+ /* SFPA is RAZ/WI from NS; FPCA is stored in the M_REG_S bank */
+ value |= env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK;
+ }
+ return value;
+ }
+ case 0x94: /* CONTROL_NS */
+ /*
+ * We have to handle this here because unprivileged Secure code
+ * can read the NS CONTROL register.
+ */
+ if (!env->v7m.secure) {
+ return 0;
+ }
+ return env->v7m.control[M_REG_NS] |
+ (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK);
+ }
+
+ if (el == 0) {
+ return 0; /* unprivileged reads others as zero */
+ }
+
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+ switch (reg) {
+ case 0x88: /* MSP_NS */
+ if (!env->v7m.secure) {
+ return 0;
+ }
+ return env->v7m.other_ss_msp;
+ case 0x89: /* PSP_NS */
+ if (!env->v7m.secure) {
+ return 0;
+ }
+ return env->v7m.other_ss_psp;
+ case 0x8a: /* MSPLIM_NS */
+ if (!env->v7m.secure) {
+ return 0;
+ }
+ return env->v7m.msplim[M_REG_NS];
+ case 0x8b: /* PSPLIM_NS */
+ if (!env->v7m.secure) {
+ return 0;
+ }
+ return env->v7m.psplim[M_REG_NS];
+ case 0x90: /* PRIMASK_NS */
+ if (!env->v7m.secure) {
+ return 0;
+ }
+ return env->v7m.primask[M_REG_NS];
+ case 0x91: /* BASEPRI_NS */
+ if (!env->v7m.secure) {
+ return 0;
+ }
+ return env->v7m.basepri[M_REG_NS];
+ case 0x93: /* FAULTMASK_NS */
+ if (!env->v7m.secure) {
+ return 0;
+ }
+ return env->v7m.faultmask[M_REG_NS];
+ case 0x98: /* SP_NS */
+ {
+ /*
+ * This gives the non-secure SP selected based on whether we're
+ * currently in handler mode or not, using the NS CONTROL.SPSEL.
+ */
+ bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
+
+ if (!env->v7m.secure) {
+ return 0;
+ }
+ if (!arm_v7m_is_handler_mode(env) && spsel) {
+ return env->v7m.other_ss_psp;
+ } else {
+ return env->v7m.other_ss_msp;
+ }
+ }
+ default:
+ break;
+ }
+ }
+
+ switch (reg) {
+ case 8: /* MSP */
+ return v7m_using_psp(env) ? env->v7m.other_sp : env->regs[13];
+ case 9: /* PSP */
+ return v7m_using_psp(env) ? env->regs[13] : env->v7m.other_sp;
+ case 10: /* MSPLIM */
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
+ goto bad_reg;
+ }
+ return env->v7m.msplim[env->v7m.secure];
+ case 11: /* PSPLIM */
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
+ goto bad_reg;
+ }
+ return env->v7m.psplim[env->v7m.secure];
+ case 16: /* PRIMASK */
+ return env->v7m.primask[env->v7m.secure];
+ case 17: /* BASEPRI */
+ case 18: /* BASEPRI_MAX */
+ return env->v7m.basepri[env->v7m.secure];
+ case 19: /* FAULTMASK */
+ return env->v7m.faultmask[env->v7m.secure];
+ default:
+ bad_reg:
+ qemu_log_mask(LOG_GUEST_ERROR, "Attempt to read unknown special"
+ " register %d\n", reg);
+ return 0;
+ }
+}
+
+void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
+{
+ /*
+ * We're passed bits [11..0] of the instruction; extract
+ * SYSm and the mask bits.
+ * Invalid combinations of SYSm and mask are UNPREDICTABLE;
+ * we choose to treat them as if the mask bits were valid.
+ * NB that the pseudocode 'mask' variable is bits [11..10],
+ * whereas ours is [11..8].
+ */
+ uint32_t mask = extract32(maskreg, 8, 4);
+ uint32_t reg = extract32(maskreg, 0, 8);
+ int cur_el = arm_current_el(env);
+
+ if (cur_el == 0 && reg > 7 && reg != 20) {
+ /*
+ * only xPSR sub-fields and CONTROL.SFPA may be written by
+ * unprivileged code
+ */
+ return;
+ }
+
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+ switch (reg) {
+ case 0x88: /* MSP_NS */
+ if (!env->v7m.secure) {
+ return;
+ }
+ env->v7m.other_ss_msp = val;
+ return;
+ case 0x89: /* PSP_NS */
+ if (!env->v7m.secure) {
+ return;
+ }
+ env->v7m.other_ss_psp = val;
+ return;
+ case 0x8a: /* MSPLIM_NS */
+ if (!env->v7m.secure) {
+ return;
+ }
+ env->v7m.msplim[M_REG_NS] = val & ~7;
+ return;
+ case 0x8b: /* PSPLIM_NS */
+ if (!env->v7m.secure) {
+ return;
+ }
+ env->v7m.psplim[M_REG_NS] = val & ~7;
+ return;
+ case 0x90: /* PRIMASK_NS */
+ if (!env->v7m.secure) {
+ return;
+ }
+ env->v7m.primask[M_REG_NS] = val & 1;
+ return;
+ case 0x91: /* BASEPRI_NS */
+ if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
+ return;
+ }
+ env->v7m.basepri[M_REG_NS] = val & 0xff;
+ return;
+ case 0x93: /* FAULTMASK_NS */
+ if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
+ return;
+ }
+ env->v7m.faultmask[M_REG_NS] = val & 1;
+ return;
+ case 0x94: /* CONTROL_NS */
+ if (!env->v7m.secure) {
+ return;
+ }
+ write_v7m_control_spsel_for_secstate(env,
+ val & R_V7M_CONTROL_SPSEL_MASK,
+ M_REG_NS);
+ if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
+ env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
+ env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
+ }
+ /*
+ * SFPA is RAZ/WI from NS. FPCA is RO if NSACR.CP10 == 0,
+ * RES0 if the FPU is not present, and is stored in the S bank
+ */
+ if (arm_feature(env, ARM_FEATURE_VFP) &&
+ extract32(env->v7m.nsacr, 10, 1)) {
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
+ env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
+ }
+ return;
+ case 0x98: /* SP_NS */
+ {
+ /*
+ * This gives the non-secure SP selected based on whether we're
+ * currently in handler mode or not, using the NS CONTROL.SPSEL.
+ */
+ bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
+ bool is_psp = !arm_v7m_is_handler_mode(env) && spsel;
+ uint32_t limit;
+
+ if (!env->v7m.secure) {
+ return;
+ }
+
+ limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false];
+
+ if (val < limit) {
+ CPUState *cs = env_cpu(env);
+
+ cpu_restore_state(cs, GETPC(), true);
+ raise_exception(env, EXCP_STKOF, 0, 1);
+ }
+
+ if (is_psp) {
+ env->v7m.other_ss_psp = val;
+ } else {
+ env->v7m.other_ss_msp = val;
+ }
+ return;
+ }
+ default:
+ break;
+ }
+ }
+
+ switch (reg) {
+ case 0 ... 7: /* xPSR sub-fields */
+ /* only APSR is actually writable */
+ if (!(reg & 4)) {
+ uint32_t apsrmask = 0;
+
+ if (mask & 8) {
+ apsrmask |= XPSR_NZCV | XPSR_Q;
+ }
+ if ((mask & 4) && arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
+ apsrmask |= XPSR_GE;
+ }
+ xpsr_write(env, val, apsrmask);
+ }
+ break;
+ case 8: /* MSP */
+ if (v7m_using_psp(env)) {
+ env->v7m.other_sp = val;
+ } else {
+ env->regs[13] = val;
+ }
+ break;
+ case 9: /* PSP */
+ if (v7m_using_psp(env)) {
+ env->regs[13] = val;
+ } else {
+ env->v7m.other_sp = val;
+ }
+ break;
+ case 10: /* MSPLIM */
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
+ goto bad_reg;
+ }
+ env->v7m.msplim[env->v7m.secure] = val & ~7;
+ break;
+ case 11: /* PSPLIM */
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
+ goto bad_reg;
+ }
+ env->v7m.psplim[env->v7m.secure] = val & ~7;
+ break;
+ case 16: /* PRIMASK */
+ env->v7m.primask[env->v7m.secure] = val & 1;
+ break;
+ case 17: /* BASEPRI */
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
+ goto bad_reg;
+ }
+ env->v7m.basepri[env->v7m.secure] = val & 0xff;
+ break;
+ case 18: /* BASEPRI_MAX */
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
+ goto bad_reg;
+ }
+ val &= 0xff;
+ if (val != 0 && (val < env->v7m.basepri[env->v7m.secure]
+ || env->v7m.basepri[env->v7m.secure] == 0)) {
+ env->v7m.basepri[env->v7m.secure] = val;
+ }
+ break;
+ case 19: /* FAULTMASK */
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
+ goto bad_reg;
+ }
+ env->v7m.faultmask[env->v7m.secure] = val & 1;
+ break;
+ case 20: /* CONTROL */
+ /*
+ * Writing to the SPSEL bit only has an effect if we are in
+ * thread mode; other bits can be updated by any privileged code.
+ * write_v7m_control_spsel() deals with updating the SPSEL bit in
+ * env->v7m.control, so we only need update the others.
+ * For v7M, we must just ignore explicit writes to SPSEL in handler
+ * mode; for v8M the write is permitted but will have no effect.
+ * All these bits are writes-ignored from non-privileged code,
+ * except for SFPA.
+ */
+ if (cur_el > 0 && (arm_feature(env, ARM_FEATURE_V8) ||
+ !arm_v7m_is_handler_mode(env))) {
+ write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
+ }
+ if (cur_el > 0 && arm_feature(env, ARM_FEATURE_M_MAIN)) {
+ env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
+ env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
+ }
+ if (arm_feature(env, ARM_FEATURE_VFP)) {
+ /*
+ * SFPA is RAZ/WI from NS or if no FPU.
+ * FPCA is RO if NSACR.CP10 == 0, RES0 if the FPU is not present.
+ * Both are stored in the S bank.
+ */
+ if (env->v7m.secure) {
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
+ env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_SFPA_MASK;
+ }
+ if (cur_el > 0 &&
+ (env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_SECURITY) ||
+ extract32(env->v7m.nsacr, 10, 1))) {
5265
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
5266
+ env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
5267
+ }
5268
+ }
5269
+ break;
5270
+ default:
5271
+ bad_reg:
5272
+ qemu_log_mask(LOG_GUEST_ERROR, "Attempt to write unknown special"
5273
+ " register %d\n", reg);
5274
+ return;
5275
+ }
5276
+}
5277
+
+uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
+{
+    /* Implement the TT instruction. op is bits [7:6] of the insn. */
+    bool forceunpriv = op & 1;
+    bool alt = op & 2;
+    V8M_SAttributes sattrs = {};
+    uint32_t tt_resp;
+    bool r, rw, nsr, nsrw, mrvalid;
+    int prot;
+    ARMMMUFaultInfo fi = {};
+    MemTxAttrs attrs = {};
+    hwaddr phys_addr;
+    ARMMMUIdx mmu_idx;
+    uint32_t mregion;
+    bool targetpriv;
+    bool targetsec = env->v7m.secure;
+    bool is_subpage;
+
+    /*
+     * Work out what the security state and privilege level we're
+     * interested in is...
+     */
+    if (alt) {
+        targetsec = !targetsec;
+    }
+
+    if (forceunpriv) {
+        targetpriv = false;
+    } else {
+        targetpriv = arm_v7m_is_handler_mode(env) ||
+            !(env->v7m.control[targetsec] & R_V7M_CONTROL_NPRIV_MASK);
+    }
+
+    /* ...and then figure out which MMU index this is */
+    mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targetsec, targetpriv);
+
+    /*
+     * We know that the MPU and SAU don't care about the access type
+     * for our purposes beyond that we don't want to claim to be
+     * an insn fetch, so we arbitrarily call this a read.
+     */
+
+    /*
+     * MPU region info only available for privileged or if
+     * inspecting the other MPU state.
+     */
+    if (arm_current_el(env) != 0 || alt) {
+        /* We can ignore the return value as prot is always set */
+        pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx,
+                          &phys_addr, &attrs, &prot, &is_subpage,
+                          &fi, &mregion);
+        if (mregion == -1) {
+            mrvalid = false;
+            mregion = 0;
+        } else {
+            mrvalid = true;
+        }
+        r = prot & PAGE_READ;
+        rw = prot & PAGE_WRITE;
+    } else {
+        r = false;
+        rw = false;
+        mrvalid = false;
+        mregion = 0;
+    }
+
+    if (env->v7m.secure) {
+        v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs);
+        nsr = sattrs.ns && r;
+        nsrw = sattrs.ns && rw;
+    } else {
+        sattrs.ns = true;
+        nsr = false;
+        nsrw = false;
+    }
+
+    tt_resp = (sattrs.iregion << 24) |
+        (sattrs.irvalid << 23) |
+        ((!sattrs.ns) << 22) |
+        (nsrw << 21) |
+        (nsr << 20) |
+        (rw << 19) |
+        (r << 18) |
+        (sattrs.srvalid << 17) |
+        (mrvalid << 16) |
+        (sattrs.sregion << 8) |
+        mregion;
+
+    return tt_resp;
+}
+
+#endif /* !CONFIG_USER_ONLY */
+
+ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
+                              bool secstate, bool priv, bool negpri)
+{
+    ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
+
+    if (priv) {
+        mmu_idx |= ARM_MMU_IDX_M_PRIV;
+    }
+
+    if (negpri) {
+        mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
+    }
+
+    if (secstate) {
+        mmu_idx |= ARM_MMU_IDX_M_S;
+    }
+
+    return mmu_idx;
+}
+
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
+                                                bool secstate, bool priv)
+{
+    bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
+
+    return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
+}
+
+/* Return the MMU index for a v7M CPU in the specified security state */
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
+{
+    bool priv = arm_current_el(env) != 0;
+
+    return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
+}
--
2.34.1
New patch

From: Inès Varhol <ines.varhol@telecom-paris.fr>

Signed-off-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20240305210444.310665-3-ines.varhol@telecom-paris.fr
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/stm32l4x5_soc.h     |  2 +
 include/hw/gpio/stm32l4x5_gpio.h   |  1 +
 include/hw/misc/stm32l4x5_syscfg.h |  3 +-
 hw/arm/stm32l4x5_soc.c             | 71 +++++++++++++++++++++++-------
 hw/misc/stm32l4x5_syscfg.c         |  1 +
 hw/arm/Kconfig                     |  3 +-
 6 files changed, 63 insertions(+), 18 deletions(-)

diff --git a/include/hw/arm/stm32l4x5_soc.h b/include/hw/arm/stm32l4x5_soc.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/stm32l4x5_soc.h
+++ b/include/hw/arm/stm32l4x5_soc.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/misc/stm32l4x5_syscfg.h"
 #include "hw/misc/stm32l4x5_exti.h"
 #include "hw/misc/stm32l4x5_rcc.h"
+#include "hw/gpio/stm32l4x5_gpio.h"
 #include "qom/object.h"

 #define TYPE_STM32L4X5_SOC "stm32l4x5-soc"
@@ -XXX,XX +XXX,XX @@ struct Stm32l4x5SocState {
     OrIRQState exti_or_gates[NUM_EXTI_OR_GATES];
     Stm32l4x5SyscfgState syscfg;
     Stm32l4x5RccState rcc;
+    Stm32l4x5GpioState gpio[NUM_GPIOS];

     MemoryRegion sram1;
     MemoryRegion sram2;
diff --git a/include/hw/gpio/stm32l4x5_gpio.h b/include/hw/gpio/stm32l4x5_gpio.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/gpio/stm32l4x5_gpio.h
+++ b/include/hw/gpio/stm32l4x5_gpio.h
@@ -XXX,XX +XXX,XX @@
 #define TYPE_STM32L4X5_GPIO "stm32l4x5-gpio"
 OBJECT_DECLARE_SIMPLE_TYPE(Stm32l4x5GpioState, STM32L4X5_GPIO)

+#define NUM_GPIOS 8
 #define GPIO_NUM_PINS 16

 struct Stm32l4x5GpioState {
diff --git a/include/hw/misc/stm32l4x5_syscfg.h b/include/hw/misc/stm32l4x5_syscfg.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/misc/stm32l4x5_syscfg.h
+++ b/include/hw/misc/stm32l4x5_syscfg.h
@@ -XXX,XX +XXX,XX @@

 #include "hw/sysbus.h"
 #include "qom/object.h"
+#include "hw/gpio/stm32l4x5_gpio.h"

 #define TYPE_STM32L4X5_SYSCFG "stm32l4x5-syscfg"
 OBJECT_DECLARE_SIMPLE_TYPE(Stm32l4x5SyscfgState, STM32L4X5_SYSCFG)

-#define NUM_GPIOS 8
-#define GPIO_NUM_PINS 16
 #define SYSCFG_NUM_EXTICR 4

 struct Stm32l4x5SyscfgState {
diff --git a/hw/arm/stm32l4x5_soc.c b/hw/arm/stm32l4x5_soc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/stm32l4x5_soc.c
+++ b/hw/arm/stm32l4x5_soc.c
@@ -XXX,XX +XXX,XX @@
 #include "sysemu/sysemu.h"
 #include "hw/or-irq.h"
 #include "hw/arm/stm32l4x5_soc.h"
+#include "hw/gpio/stm32l4x5_gpio.h"
 #include "hw/qdev-clock.h"
 #include "hw/misc/unimp.h"

@@ -XXX,XX +XXX,XX @@ static const int exti_or_gate1_lines_in[EXTI_OR_GATE1_NUM_LINES_IN] = {
     16, 35, 36, 37, 38,
 };

+static const struct {
+    uint32_t addr;
+    uint32_t moder_reset;
+    uint32_t ospeedr_reset;
+    uint32_t pupdr_reset;
+} stm32l4x5_gpio_cfg[NUM_GPIOS] = {
+    { 0x48000000, 0xABFFFFFF, 0x0C000000, 0x64000000 },
+    { 0x48000400, 0xFFFFFEBF, 0x00000000, 0x00000100 },
+    { 0x48000800, 0xFFFFFFFF, 0x00000000, 0x00000000 },
+    { 0x48000C00, 0xFFFFFFFF, 0x00000000, 0x00000000 },
+    { 0x48001000, 0xFFFFFFFF, 0x00000000, 0x00000000 },
+    { 0x48001400, 0xFFFFFFFF, 0x00000000, 0x00000000 },
+    { 0x48001800, 0xFFFFFFFF, 0x00000000, 0x00000000 },
+    { 0x48001C00, 0x0000000F, 0x00000000, 0x00000000 },
+};
+
 static void stm32l4x5_soc_initfn(Object *obj)
 {
     Stm32l4x5SocState *s = STM32L4X5_SOC(obj);
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_soc_initfn(Object *obj)
     }
     object_initialize_child(obj, "syscfg", &s->syscfg, TYPE_STM32L4X5_SYSCFG);
     object_initialize_child(obj, "rcc", &s->rcc, TYPE_STM32L4X5_RCC);
+
+    for (unsigned i = 0; i < NUM_GPIOS; i++) {
+        g_autofree char *name = g_strdup_printf("gpio%c", 'a' + i);
+        object_initialize_child(obj, name, &s->gpio[i], TYPE_STM32L4X5_GPIO);
+    }
 }

 static void stm32l4x5_soc_realize(DeviceState *dev_soc, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_soc_realize(DeviceState *dev_soc, Error **errp)
     Stm32l4x5SocState *s = STM32L4X5_SOC(dev_soc);
     const Stm32l4x5SocClass *sc = STM32L4X5_SOC_GET_CLASS(dev_soc);
     MemoryRegion *system_memory = get_system_memory();
-    DeviceState *armv7m;
+    DeviceState *armv7m, *dev;
     SysBusDevice *busdev;
+    uint32_t pin_index;

     if (!memory_region_init_rom(&s->flash, OBJECT(dev_soc), "flash",
                                 sc->flash_size, errp)) {
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_soc_realize(DeviceState *dev_soc, Error **errp)
         return;
     }

+    /* GPIOs */
+    for (unsigned i = 0; i < NUM_GPIOS; i++) {
+        g_autofree char *name = g_strdup_printf("%c", 'A' + i);
+        dev = DEVICE(&s->gpio[i]);
+        qdev_prop_set_string(dev, "name", name);
+        qdev_prop_set_uint32(dev, "mode-reset",
+                             stm32l4x5_gpio_cfg[i].moder_reset);
+        qdev_prop_set_uint32(dev, "ospeed-reset",
+                             stm32l4x5_gpio_cfg[i].ospeedr_reset);
+        qdev_prop_set_uint32(dev, "pupd-reset",
+                             stm32l4x5_gpio_cfg[i].pupdr_reset);
+        busdev = SYS_BUS_DEVICE(&s->gpio[i]);
+        g_free(name);
+        name = g_strdup_printf("gpio%c-out", 'a' + i);
+        qdev_connect_clock_in(DEVICE(&s->gpio[i]), "clk",
+                              qdev_get_clock_out(DEVICE(&(s->rcc)), name));
+        if (!sysbus_realize(busdev, errp)) {
+            return;
+        }
+        sysbus_mmio_map(busdev, 0, stm32l4x5_gpio_cfg[i].addr);
+    }
+
     /* System configuration controller */
     busdev = SYS_BUS_DEVICE(&s->syscfg);
     if (!sysbus_realize(busdev, errp)) {
         return;
     }
     sysbus_mmio_map(busdev, 0, SYSCFG_ADDR);
-    /*
-     * TODO: when the GPIO device is implemented, connect it
-     * to SYCFG using `qdev_connect_gpio_out`, NUM_GPIOS and
-     * GPIO_NUM_PINS.
-     */
+
+    for (unsigned i = 0; i < NUM_GPIOS; i++) {
+        for (unsigned j = 0; j < GPIO_NUM_PINS; j++) {
+            pin_index = GPIO_NUM_PINS * i + j;
+            qdev_connect_gpio_out(DEVICE(&s->gpio[i]), j,
+                                  qdev_get_gpio_in(DEVICE(&s->syscfg),
+                                                   pin_index));
+        }
+    }

     /* EXTI device */
     busdev = SYS_BUS_DEVICE(&s->exti);
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_soc_realize(DeviceState *dev_soc, Error **errp)
         }
     }

-    for (unsigned i = 0; i < 16; i++) {
+    for (unsigned i = 0; i < GPIO_NUM_PINS; i++) {
         qdev_connect_gpio_out(DEVICE(&s->syscfg), i,
                               qdev_get_gpio_in(DEVICE(&s->exti), i));
     }
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_soc_realize(DeviceState *dev_soc, Error **errp)
     /* RESERVED: 0x40024400, 0x7FDBC00 */

     /* AHB2 BUS */
-    create_unimplemented_device("GPIOA", 0x48000000, 0x400);
-    create_unimplemented_device("GPIOB", 0x48000400, 0x400);
-    create_unimplemented_device("GPIOC", 0x48000800, 0x400);
-    create_unimplemented_device("GPIOD", 0x48000C00, 0x400);
-    create_unimplemented_device("GPIOE", 0x48001000, 0x400);
-    create_unimplemented_device("GPIOF", 0x48001400, 0x400);
-    create_unimplemented_device("GPIOG", 0x48001800, 0x400);
-    create_unimplemented_device("GPIOH", 0x48001C00, 0x400);
     /* RESERVED: 0x48002000, 0x7FDBC00 */
     create_unimplemented_device("OTG_FS", 0x50000000, 0x40000);
     create_unimplemented_device("ADC", 0x50040000, 0x400);
diff --git a/hw/misc/stm32l4x5_syscfg.c b/hw/misc/stm32l4x5_syscfg.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/stm32l4x5_syscfg.c
+++ b/hw/misc/stm32l4x5_syscfg.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/irq.h"
 #include "migration/vmstate.h"
 #include "hw/misc/stm32l4x5_syscfg.h"
+#include "hw/gpio/stm32l4x5_gpio.h"

 #define SYSCFG_MEMRMP 0x00
 #define SYSCFG_CFGR1 0x04
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/Kconfig
+++ b/hw/arm/Kconfig
@@ -XXX,XX +XXX,XX @@ config STM32L4X5_SOC
     bool
     select ARM_V7M
     select OR_IRQ
-    select STM32L4X5_SYSCFG
     select STM32L4X5_EXTI
+    select STM32L4X5_SYSCFG
     select STM32L4X5_RCC
+    select STM32L4X5_GPIO

 config XLNX_ZYNQMP_ARM
     bool
--
2.34.1
Coverity points out (CID 1402195) that the loop in trans_VMOV_imm_dp()
that iterates over the destination registers in a short-vector VMOV
accidentally throws away the returned updated register number
from vfp_advance_dreg(). Add the missing assignment. (We got this
correct in trans_VMOV_imm_sp().)

Fixes: 18cf951af9a27ae573a
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190702105115.9465-1-peter.maydell@linaro.org
---
 target/arm/translate-vfp.inc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c

From: Inès Varhol <ines.varhol@telecom-paris.fr>

The testcase contains:
- `test_idr_reset_value()` :
Checks the reset values of MODER, OTYPER, PUPDR, ODR and IDR.
- `test_gpio_output_mode()` :
Checks that writing a bit in register ODR results in the corresponding
pin rising or lowering, if this pin is configured in output mode.
- `test_gpio_input_mode()` :
Checks that an input pin set high or low externally results
in the pin rising and lowering.
- `test_pull_up_pull_down()` :
Checks that a floating pin in pull-up/down mode is actually high/down.
- `test_push_pull()` :
Checks that a pin set externally is disconnected when configured in
push-pull output mode, and can't be set externally while in this mode.
- `test_open_drain()` :
Checks that a pin set externally high is disconnected when configured
in open-drain output mode, and can't be set high while in this mode.
- `test_bsrr_brr()` :
Checks that writing to BSRR and BRR has the desired result in ODR.
- `test_clock_enable()` :
Checks that GPIO clock is at the right frequency after enabling it.

Acked-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
Message-id: 20240305210444.310665-4-ines.varhol@telecom-paris.fr
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/stm32l4x5_gpio-test.c | 551 ++++++++++++++++++++++++++++++
 tests/qtest/meson.build           |   3 +-
 2 files changed, 553 insertions(+), 1 deletion(-)
 create mode 100644 tests/qtest/stm32l4x5_gpio-test.c

diff --git a/tests/qtest/stm32l4x5_gpio-test.c b/tests/qtest/stm32l4x5_gpio-test.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/qtest/stm32l4x5_gpio-test.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * QTest testcase for STM32L4x5_GPIO
+ *
+ * Copyright (c) 2024 Arnaud Minier <arnaud.minier@telecom-paris.fr>
+ * Copyright (c) 2024 Inès Varhol <ines.varhol@telecom-paris.fr>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "libqtest-single.h"
+
+#define GPIO_BASE_ADDR 0x48000000
+#define GPIO_SIZE 0x400
+#define NUM_GPIOS 8
+#define NUM_GPIO_PINS 16
+
+#define GPIO_A 0x48000000
+#define GPIO_B 0x48000400
+#define GPIO_C 0x48000800
+#define GPIO_D 0x48000C00
+#define GPIO_E 0x48001000
+#define GPIO_F 0x48001400
+#define GPIO_G 0x48001800
+#define GPIO_H 0x48001C00
+
+#define MODER 0x00
+#define OTYPER 0x04
+#define PUPDR 0x0C
+#define IDR 0x10
+#define ODR 0x14
+#define BSRR 0x18
+#define BRR 0x28
+
+#define MODER_INPUT 0
+#define MODER_OUTPUT 1
+
+#define PUPDR_NONE 0
+#define PUPDR_PULLUP 1
+#define PUPDR_PULLDOWN 2
+
+#define OTYPER_PUSH_PULL 0
+#define OTYPER_OPEN_DRAIN 1
+
+const uint32_t moder_reset[NUM_GPIOS] = {
+    0xABFFFFFF,
+    0xFFFFFEBF,
+    0xFFFFFFFF,
+    0xFFFFFFFF,
+    0xFFFFFFFF,
+    0xFFFFFFFF,
+    0xFFFFFFFF,
+    0x0000000F
+};
+
+const uint32_t pupdr_reset[NUM_GPIOS] = {
+    0x64000000,
+    0x00000100,
+    0x00000000,
+    0x00000000,
+    0x00000000,
+    0x00000000,
+    0x00000000,
+    0x00000000
+};
+
+const uint32_t idr_reset[NUM_GPIOS] = {
+    0x0000A000,
+    0x00000010,
+    0x00000000,
+    0x00000000,
+    0x00000000,
+    0x00000000,
+    0x00000000,
+    0x00000000
+};
+
+static uint32_t gpio_readl(unsigned int gpio, unsigned int offset)
+{
+    return readl(gpio + offset);
+}
+
+static void gpio_writel(unsigned int gpio, unsigned int offset, uint32_t value)
+{
+    writel(gpio + offset, value);
+}
+
+static void gpio_set_bit(unsigned int gpio, unsigned int reg,
+                         unsigned int pin, uint32_t value)
+{
+    uint32_t mask = 0xFFFFFFFF & ~(0x1 << pin);
+    gpio_writel(gpio, reg, (gpio_readl(gpio, reg) & mask) | value << pin);
+}
+
+static void gpio_set_2bits(unsigned int gpio, unsigned int reg,
+                           unsigned int pin, uint32_t value)
+{
+    uint32_t offset = 2 * pin;
+    uint32_t mask = 0xFFFFFFFF & ~(0x3 << offset);
+    gpio_writel(gpio, reg, (gpio_readl(gpio, reg) & mask) | value << offset);
+}
+
+static unsigned int get_gpio_id(uint32_t gpio_addr)
+{
+    return (gpio_addr - GPIO_BASE_ADDR) / GPIO_SIZE;
+}
+
+static void gpio_set_irq(unsigned int gpio, int num, int level)
+{
+    g_autofree char *name = g_strdup_printf("/machine/soc/gpio%c",
+                                            get_gpio_id(gpio) + 'a');
+    qtest_set_irq_in(global_qtest, name, NULL, num, level);
+}
+
+static void disconnect_all_pins(unsigned int gpio)
+{
+    g_autofree char *path = g_strdup_printf("/machine/soc/gpio%c",
+                                            get_gpio_id(gpio) + 'a');
+    QDict *r;
+
+    r = qtest_qmp(global_qtest, "{ 'execute': 'qom-set', 'arguments': "
+        "{ 'path': %s, 'property': 'disconnected-pins', 'value': %d } }",
+        path, 0xFFFF);
+    g_assert_false(qdict_haskey(r, "error"));
+    qobject_unref(r);
+}
+
+static uint32_t get_disconnected_pins(unsigned int gpio)
+{
+    g_autofree char *path = g_strdup_printf("/machine/soc/gpio%c",
+                                            get_gpio_id(gpio) + 'a');
+    uint32_t disconnected_pins = 0;
+    QDict *r;
+
+    r = qtest_qmp(global_qtest, "{ 'execute': 'qom-get', 'arguments':"
+        " { 'path': %s, 'property': 'disconnected-pins'} }", path);
+    g_assert_false(qdict_haskey(r, "error"));
+    disconnected_pins = qdict_get_int(r, "return");
+    qobject_unref(r);
+    return disconnected_pins;
+}
+
+static uint32_t reset(uint32_t gpio, unsigned int offset)
+{
+    switch (offset) {
+    case MODER:
+        return moder_reset[get_gpio_id(gpio)];
+    case PUPDR:
+        return pupdr_reset[get_gpio_id(gpio)];
+    case IDR:
+        return idr_reset[get_gpio_id(gpio)];
+    }
+    return 0x0;
+}
+
+static void system_reset(void)
+{
+    QDict *r;
+    r = qtest_qmp(global_qtest, "{'execute': 'system_reset'}");
+    g_assert_false(qdict_haskey(r, "error"));
+    qobject_unref(r);
+}
+
+static void test_idr_reset_value(void)
+{
+    /*
+     * Checks that the values in MODER, OTYPER, PUPDR and ODR
+     * after reset are correct, and that the value in IDR is
+     * coherent.
+     * Since AF and analog modes aren't implemented, IDR reset
+     * values aren't the same as with a real board.
+     *
+     * Register IDR contains the actual values of all GPIO pins.
+     * Its value depends on the pins' configuration
+     * (input/output/analog : register MODER, push-pull/open-drain :
+     * register OTYPER, pull-up/pull-down/none : register PUPDR)
+     * and on the values stored in register ODR
+     * (in case the pin is in output mode).
+     */
+
+    gpio_writel(GPIO_A, MODER, 0xDEADBEEF);
+    gpio_writel(GPIO_A, ODR, 0xDEADBEEF);
+    gpio_writel(GPIO_A, OTYPER, 0xDEADBEEF);
+    gpio_writel(GPIO_A, PUPDR, 0xDEADBEEF);
+
+    gpio_writel(GPIO_B, MODER, 0xDEADBEEF);
+    gpio_writel(GPIO_B, ODR, 0xDEADBEEF);
+    gpio_writel(GPIO_B, OTYPER, 0xDEADBEEF);
+    gpio_writel(GPIO_B, PUPDR, 0xDEADBEEF);
+
+    gpio_writel(GPIO_C, MODER, 0xDEADBEEF);
+    gpio_writel(GPIO_C, ODR, 0xDEADBEEF);
+    gpio_writel(GPIO_C, OTYPER, 0xDEADBEEF);
+    gpio_writel(GPIO_C, PUPDR, 0xDEADBEEF);
+
+    gpio_writel(GPIO_H, MODER, 0xDEADBEEF);
+    gpio_writel(GPIO_H, ODR, 0xDEADBEEF);
+    gpio_writel(GPIO_H, OTYPER, 0xDEADBEEF);
+    gpio_writel(GPIO_H, PUPDR, 0xDEADBEEF);
+
+    system_reset();
+
+    uint32_t moder = gpio_readl(GPIO_A, MODER);
+    uint32_t odr = gpio_readl(GPIO_A, ODR);
+    uint32_t otyper = gpio_readl(GPIO_A, OTYPER);
+    uint32_t pupdr = gpio_readl(GPIO_A, PUPDR);
+    uint32_t idr = gpio_readl(GPIO_A, IDR);
+    /* 15: AF, 14: AF, 13: AF, 12: Analog ... */
+    /* here AF is the same as Analog and Input mode */
+    g_assert_cmphex(moder, ==, reset(GPIO_A, MODER));
+    g_assert_cmphex(odr, ==, reset(GPIO_A, ODR));
+    g_assert_cmphex(otyper, ==, reset(GPIO_A, OTYPER));
+    /* 15: pull-up, 14: pull-down, 13: pull-up, 12: neither ... */
+    g_assert_cmphex(pupdr, ==, reset(GPIO_A, PUPDR));
+    /* 15 : 1, 14: 0, 13: 1, 12 : reset value ... */
+    g_assert_cmphex(idr, ==, reset(GPIO_A, IDR));
+
+    moder = gpio_readl(GPIO_B, MODER);
+    odr = gpio_readl(GPIO_B, ODR);
+    otyper = gpio_readl(GPIO_B, OTYPER);
+    pupdr = gpio_readl(GPIO_B, PUPDR);
+    idr = gpio_readl(GPIO_B, IDR);
+    /* ... 5: Analog, 4: AF, 3: AF, 2: Analog ... */
+    /* here AF is the same as Analog and Input mode */
+    g_assert_cmphex(moder, ==, reset(GPIO_B, MODER));
+    g_assert_cmphex(odr, ==, reset(GPIO_B, ODR));
+    g_assert_cmphex(otyper, ==, reset(GPIO_B, OTYPER));
+    /* ... 5: neither, 4: pull-up, 3: neither ... */
+    g_assert_cmphex(pupdr, ==, reset(GPIO_B, PUPDR));
+    /* ... 5 : reset value, 4 : 1, 3 : reset value ... */
+    g_assert_cmphex(idr, ==, reset(GPIO_B, IDR));
+
+    moder = gpio_readl(GPIO_C, MODER);
+    odr = gpio_readl(GPIO_C, ODR);
+    otyper = gpio_readl(GPIO_C, OTYPER);
+    pupdr = gpio_readl(GPIO_C, PUPDR);
+    idr = gpio_readl(GPIO_C, IDR);
+    /* Analog, same as Input mode */
+    g_assert_cmphex(moder, ==, reset(GPIO_C, MODER));
+    g_assert_cmphex(odr, ==, reset(GPIO_C, ODR));
+    g_assert_cmphex(otyper, ==, reset(GPIO_C, OTYPER));
+    /* no pull-up or pull-down */
+    g_assert_cmphex(pupdr, ==, reset(GPIO_C, PUPDR));
+    /* reset value */
+    g_assert_cmphex(idr, ==, reset(GPIO_C, IDR));
+
+    moder = gpio_readl(GPIO_H, MODER);
+    odr = gpio_readl(GPIO_H, ODR);
+    otyper = gpio_readl(GPIO_H, OTYPER);
+    pupdr = gpio_readl(GPIO_H, PUPDR);
+    idr = gpio_readl(GPIO_H, IDR);
+    /* Analog, same as Input mode */
+    g_assert_cmphex(moder, ==, reset(GPIO_H, MODER));
+    g_assert_cmphex(odr, ==, reset(GPIO_H, ODR));
+    g_assert_cmphex(otyper, ==, reset(GPIO_H, OTYPER));
+    /* no pull-up or pull-down */
+    g_assert_cmphex(pupdr, ==, reset(GPIO_H, PUPDR));
+    /* reset value */
+    g_assert_cmphex(idr, ==, reset(GPIO_H, IDR));
+}
+
+static void test_gpio_output_mode(const void *data)
+{
+    /*
+     * Checks that setting a bit in ODR sets the corresponding
+     * GPIO line high : it should set the right bit in IDR
+     * and send an irq to syscfg.
+     * Additionally, it checks that values written to ODR
+     * when not in output mode are stored and not discarded.
+     */
+    unsigned int pin = ((uint64_t)data) & 0xF;
+    uint32_t gpio = ((uint64_t)data) >> 32;
+    unsigned int gpio_id = get_gpio_id(gpio);
+
+    qtest_irq_intercept_in(global_qtest, "/machine/soc/syscfg");
+
+    /* Set a bit in ODR and check nothing happens */
+    gpio_set_bit(gpio, ODR, pin, 1);
+    g_assert_cmphex(gpio_readl(gpio, IDR), ==, reset(gpio, IDR));
+    g_assert_false(get_irq(gpio_id * NUM_GPIO_PINS + pin));
+
+    /* Configure the relevant line as output and check the pin is high */
+    gpio_set_2bits(gpio, MODER, pin, MODER_OUTPUT);
+    g_assert_cmphex(gpio_readl(gpio, IDR), ==, reset(gpio, IDR) | (1 << pin));
+    g_assert_true(get_irq(gpio_id * NUM_GPIO_PINS + pin));
+
+    /* Reset the bit in ODR and check the pin is low */
+    gpio_set_bit(gpio, ODR, pin, 0);
+    g_assert_cmphex(gpio_readl(gpio, IDR), ==, reset(gpio, IDR) & ~(1 << pin));
+    g_assert_false(get_irq(gpio_id * NUM_GPIO_PINS + pin));
+
+    /* Clean the test */
+    gpio_writel(gpio, ODR, reset(gpio, ODR));
+    gpio_writel(gpio, MODER, reset(gpio, MODER));
+    g_assert_cmphex(gpio_readl(gpio, IDR), ==, reset(gpio, IDR));
+    g_assert_false(get_irq(gpio_id * NUM_GPIO_PINS + pin));
+}
+
+static void test_gpio_input_mode(const void *data)
+{
+    /*
+     * Test that setting a line high/low externally sets the
+     * corresponding GPIO line high/low : it should set the
+     * right bit in IDR and send an irq to syscfg.
+     */
+    unsigned int pin = ((uint64_t)data) & 0xF;
+    uint32_t gpio = ((uint64_t)data) >> 32;
+    unsigned int gpio_id = get_gpio_id(gpio);
+
+    qtest_irq_intercept_in(global_qtest, "/machine/soc/syscfg");
+
+    /* Configure a line as input, raise it, and check that the pin is high */
+    gpio_set_2bits(gpio, MODER, pin, MODER_INPUT);
+    gpio_set_irq(gpio, pin, 1);
+    g_assert_cmphex(gpio_readl(gpio, IDR), ==, reset(gpio, IDR) | (1 << pin));
+    g_assert_true(get_irq(gpio_id * NUM_GPIO_PINS + pin));
+
+    /* Lower the line and check that the pin is low */
+    gpio_set_irq(gpio, pin, 0);
+    g_assert_cmphex(gpio_readl(gpio, IDR), ==, reset(gpio, IDR) & ~(1 << pin));
+    g_assert_false(get_irq(gpio_id * NUM_GPIO_PINS + pin));
+
+    /* Clean the test */
+    gpio_writel(gpio, MODER, reset(gpio, MODER));
+    disconnect_all_pins(gpio);
+    g_assert_cmphex(gpio_readl(gpio, IDR), ==, reset(gpio, IDR));
+}
+
+static void test_pull_up_pull_down(const void *data)
+{
+    /*
+     * Test that a floating pin with pull-up sets the pin
+     * high and vice-versa.
+     */
+    unsigned int pin = ((uint64_t)data) & 0xF;
+    uint32_t gpio = ((uint64_t)data) >> 32;
+    unsigned int gpio_id = get_gpio_id(gpio);
+
+    qtest_irq_intercept_in(global_qtest, "/machine/soc/syscfg");
+
+    /* Configure a line as input with pull-up, check the line is set high */
+    gpio_set_2bits(gpio, MODER, pin, MODER_INPUT);
+    gpio_set_2bits(gpio, PUPDR, pin, PUPDR_PULLUP);
+    g_assert_cmphex(gpio_readl(gpio, IDR), ==, reset(gpio, IDR) | (1 << pin));
+    g_assert_true(get_irq(gpio_id * NUM_GPIO_PINS + pin));
+
+    /* Configure the line with pull-down, check the line is low */
+    gpio_set_2bits(gpio, PUPDR, pin, PUPDR_PULLDOWN);
+    g_assert_cmphex(gpio_readl(gpio, IDR), ==, reset(gpio, IDR) & ~(1 << pin));
+    g_assert_false(get_irq(gpio_id * NUM_GPIO_PINS + pin));
+
+    /* Clean the test */
+    gpio_writel(gpio, MODER, reset(gpio, MODER));
+    gpio_writel(gpio, PUPDR, reset(gpio, PUPDR));
+    g_assert_cmphex(gpio_readl(gpio, IDR), ==, reset(gpio, IDR));
+}
+
+static void test_push_pull(const void *data)
+{
+    /*
+     * Test that configuring a line in push-pull output mode
+     * disconnects the pin, that the pin can't be set or reset
+     * externally afterwards.
+     */
+    unsigned int pin = ((uint64_t)data) & 0xF;
+    uint32_t gpio = ((uint64_t)data) >> 32;
+    uint32_t gpio2 = GPIO_BASE_ADDR + (GPIO_H - gpio);
+
+    qtest_irq_intercept_in(global_qtest, "/machine/soc/syscfg");
+
+    /* Setting a line high externally, configuring it in push-pull output */
+    /* And checking the pin was disconnected */
+    gpio_set_irq(gpio, pin, 1);
+    gpio_set_2bits(gpio, MODER, pin, MODER_OUTPUT);
+    g_assert_cmphex(get_disconnected_pins(gpio), ==, 0xFFFF);
+    g_assert_cmphex(gpio_readl(gpio, IDR), ==, reset(gpio, IDR) & ~(1 << pin));
+
+    /* Setting a line low externally, configuring it in push-pull output */
+    /* And checking the pin was disconnected */
+    gpio_set_irq(gpio2, pin, 0);
+    gpio_set_bit(gpio2, ODR, pin, 1);
+    gpio_set_2bits(gpio2, MODER, pin, MODER_OUTPUT);
+    g_assert_cmphex(get_disconnected_pins(gpio2), ==, 0xFFFF);
+    g_assert_cmphex(gpio_readl(gpio2, IDR), ==, reset(gpio2, IDR) | (1 << pin));
+
+    /* Trying to set a push-pull output pin, checking it doesn't work */
+    gpio_set_irq(gpio, pin, 1);
+    g_assert_cmphex(get_disconnected_pins(gpio), ==, 0xFFFF);
+    g_assert_cmphex(gpio_readl(gpio, IDR), ==, reset(gpio, IDR) & ~(1 << pin));
+
+    /* Trying to reset a push-pull output pin, checking it doesn't work */
+    gpio_set_irq(gpio2, pin, 0);
+    g_assert_cmphex(get_disconnected_pins(gpio2), ==, 0xFFFF);
+    g_assert_cmphex(gpio_readl(gpio2, IDR), ==, reset(gpio2, IDR) | (1 << pin));
+
+    /* Clean the test */
+    gpio_writel(gpio, MODER, reset(gpio, MODER));
+    gpio_writel(gpio2, ODR, reset(gpio2, ODR));
+    gpio_writel(gpio2, MODER, reset(gpio2, MODER));
+}
+
+static void test_open_drain(const void *data)
+{
+    /*
+     * Test that configuring a line in open-drain output mode
+     * disconnects a pin set high externally and that the pin
+     * can't be set high externally while configured in open-drain.
+     *
+     * However a pin set low externally shouldn't be disconnected,
+     * and it can be set low externally when in open-drain mode.
+     */
+    unsigned int pin = ((uint64_t)data) & 0xF;
+    uint32_t gpio = ((uint64_t)data) >> 32;
+    uint32_t gpio2 = GPIO_BASE_ADDR + (GPIO_H - gpio);
+
+    qtest_irq_intercept_in(global_qtest, "/machine/soc/syscfg");
+
+    /* Setting a line high externally, configuring it in open-drain output */
+    /* And checking the pin was disconnected */
+    gpio_set_irq(gpio, pin, 1);
+    gpio_set_bit(gpio, OTYPER, pin, OTYPER_OPEN_DRAIN);
+    gpio_set_2bits(gpio, MODER, pin, MODER_OUTPUT);
+    g_assert_cmphex(get_disconnected_pins(gpio), ==, 0xFFFF);
+    g_assert_cmphex(gpio_readl(gpio, IDR), ==, reset(gpio, IDR) & ~(1 << pin));
+
+    /* Setting a line low externally, configuring it in open-drain output */
+    /* And checking the pin wasn't disconnected */
+    gpio_set_irq(gpio2, pin, 0);
+    gpio_set_bit(gpio2, ODR, pin, 1);
+    gpio_set_bit(gpio2, OTYPER, pin, OTYPER_OPEN_DRAIN);
+    gpio_set_2bits(gpio2, MODER, pin, MODER_OUTPUT);
+    g_assert_cmphex(get_disconnected_pins(gpio2), ==, 0xFFFF & ~(1 << pin));
+    g_assert_cmphex(gpio_readl(gpio2, IDR), ==,
+                    reset(gpio2, IDR) & ~(1 << pin));
+
+    /* Trying to set an open-drain output pin, checking it doesn't work */
+    gpio_set_irq(gpio, pin, 1);
+    g_assert_cmphex(get_disconnected_pins(gpio), ==, 0xFFFF);
+    g_assert_cmphex(gpio_readl(gpio, IDR), ==, reset(gpio, IDR) & ~(1 << pin));
+
+    /* Trying to reset an open-drain output pin, checking it works */
+    gpio_set_bit(gpio, ODR, pin, 1);
+    gpio_set_irq(gpio, pin, 0);
+    g_assert_cmphex(get_disconnected_pins(gpio2), ==, 0xFFFF & ~(1 << pin));
+    g_assert_cmphex(gpio_readl(gpio2, IDR), ==,
+                    reset(gpio2, IDR) & ~(1 << pin));
+
+    /* Clean the test */
+    disconnect_all_pins(gpio2);
+    gpio_writel(gpio2, OTYPER, reset(gpio2, OTYPER));
+    gpio_writel(gpio2, ODR, reset(gpio2, ODR));
+    gpio_writel(gpio2, MODER, reset(gpio2, MODER));
+    g_assert_cmphex(gpio_readl(gpio2, IDR), ==, reset(gpio2, IDR));
496
+ disconnect_all_pins(gpio);
497
+ gpio_writel(gpio, OTYPER, reset(gpio, OTYPER));
498
+ gpio_writel(gpio, ODR, reset(gpio, ODR));
499
+ gpio_writel(gpio, MODER, reset(gpio, MODER));
500
+ g_assert_cmphex(gpio_readl(gpio, IDR), ==, reset(gpio, IDR));
501
+}
502
+
503
+static void test_bsrr_brr(const void *data)
504
+{
505
+ /*
506
+ * Test that writing a '1' in BSS and BSRR
507
+ * has the desired effect on ODR.
508
+ * In BSRR, BSx has priority over BRx.
509
+ */
510
+ unsigned int pin = ((uint64_t)data) & 0xF;
511
+ uint32_t gpio = ((uint64_t)data) >> 32;
512
+
513
+ gpio_writel(gpio, BSRR, (1 << pin));
514
+ g_assert_cmphex(gpio_readl(gpio, ODR), ==, reset(gpio, ODR) | (1 << pin));
515
+
516
+ gpio_writel(gpio, BSRR, (1 << (pin + NUM_GPIO_PINS)));
517
+ g_assert_cmphex(gpio_readl(gpio, ODR), ==, reset(gpio, ODR));
518
+
519
+ gpio_writel(gpio, BSRR, (1 << pin));
520
+ g_assert_cmphex(gpio_readl(gpio, ODR), ==, reset(gpio, ODR) | (1 << pin));
521
+
522
+ gpio_writel(gpio, BRR, (1 << pin));
523
+ g_assert_cmphex(gpio_readl(gpio, ODR), ==, reset(gpio, ODR));
524
+
525
+ /* BSx should have priority over BRx */
526
+ gpio_writel(gpio, BSRR, (1 << pin) | (1 << (pin + NUM_GPIO_PINS)));
527
+ g_assert_cmphex(gpio_readl(gpio, ODR), ==, reset(gpio, ODR) | (1 << pin));
528
+
529
+ gpio_writel(gpio, BRR, (1 << pin));
530
+ g_assert_cmphex(gpio_readl(gpio, ODR), ==, reset(gpio, ODR));
531
+
532
+ gpio_writel(gpio, ODR, reset(gpio, ODR));
533
+}
534
+
535
+int main(int argc, char **argv)
536
+{
537
+ int ret;
538
+
539
+ g_test_init(&argc, &argv, NULL);
540
+ g_test_set_nonfatal_assertions();
541
+ qtest_add_func("stm32l4x5/gpio/test_idr_reset_value",
542
+ test_idr_reset_value);
543
+ /*
544
+ * The inputs for the tests (gpio and pin) can be changed,
545
+ * but the tests don't work for pins that are high at reset
546
+ * (GPIOA15, GPIO13 and GPIOB5).
547
+ * Specifically, rising the pin then checking `get_irq()`
548
+ * is problematic since the pin was already high.
549
+ */
550
+ qtest_add_data_func("stm32l4x5/gpio/test_gpioc5_output_mode",
551
+ (void *)((uint64_t)GPIO_C << 32 | 5),
552
+ test_gpio_output_mode);
553
+ qtest_add_data_func("stm32l4x5/gpio/test_gpioh3_output_mode",
554
+ (void *)((uint64_t)GPIO_H << 32 | 3),
555
+ test_gpio_output_mode);
556
+ qtest_add_data_func("stm32l4x5/gpio/test_gpio_input_mode1",
557
+ (void *)((uint64_t)GPIO_D << 32 | 6),
558
+ test_gpio_input_mode);
559
+ qtest_add_data_func("stm32l4x5/gpio/test_gpio_input_mode2",
560
+ (void *)((uint64_t)GPIO_C << 32 | 10),
561
+ test_gpio_input_mode);
562
+ qtest_add_data_func("stm32l4x5/gpio/test_gpio_pull_up_pull_down1",
563
+ (void *)((uint64_t)GPIO_B << 32 | 5),
564
+ test_pull_up_pull_down);
565
+ qtest_add_data_func("stm32l4x5/gpio/test_gpio_pull_up_pull_down2",
566
+ (void *)((uint64_t)GPIO_F << 32 | 1),
567
+ test_pull_up_pull_down);
568
+ qtest_add_data_func("stm32l4x5/gpio/test_gpio_push_pull1",
569
+ (void *)((uint64_t)GPIO_G << 32 | 6),
570
+ test_push_pull);
571
+ qtest_add_data_func("stm32l4x5/gpio/test_gpio_push_pull2",
572
+ (void *)((uint64_t)GPIO_H << 32 | 3),
573
+ test_push_pull);
574
+ qtest_add_data_func("stm32l4x5/gpio/test_gpio_open_drain1",
575
+ (void *)((uint64_t)GPIO_C << 32 | 4),
576
+ test_open_drain);
577
+ qtest_add_data_func("stm32l4x5/gpio/test_gpio_open_drain2",
578
+ (void *)((uint64_t)GPIO_E << 32 | 11),
579
+ test_open_drain);
580
+ qtest_add_data_func("stm32l4x5/gpio/test_bsrr_brr1",
581
+ (void *)((uint64_t)GPIO_A << 32 | 12),
582
+ test_bsrr_brr);
583
+ qtest_add_data_func("stm32l4x5/gpio/test_bsrr_brr2",
584
+ (void *)((uint64_t)GPIO_D << 32 | 0),
585
+ test_bsrr_brr);
586
+
587
+ qtest_start("-machine b-l475e-iot01a");
588
+ ret = g_test_run();
589
+ qtest_end();
590
+
591
+ return ret;
592
+}
593
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -XXX,XX +XXX,XX @@ qtests_aspeed = \
 qtests_stm32l4x5 = \
   ['stm32l4x5_exti-test',
    'stm32l4x5_syscfg-test',
-   'stm32l4x5_rcc-test']
+   'stm32l4x5_rcc-test',
+   'stm32l4x5_gpio-test']
 
 qtests_arm = \
   (config_all_devices.has_key('CONFIG_MPS2') ? ['sse-timer-test'] : []) + \
--
2.34.1

diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
 
         /* Set up the operands for the next iteration */
         veclen--;
-        vfp_advance_dreg(vd, delta_d);
+        vd = vfp_advance_dreg(vd, delta_d);
     }
 
     tcg_temp_free_i64(fd);
--
2.20.1
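As an aside for readers unfamiliar with the STM32 GPIO registers, the BSRR/BRR behaviour exercised by test_bsrr_brr above can be modelled in a few lines of host-side C. This is an illustrative sketch, not code from this series; the function names are invented:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Model of the STM32 GPIO BSRR/BRR write semantics: bits [15:0] of a
 * BSRR write set the corresponding ODR bits, bits [31:16] reset them,
 * and a simultaneous set request (BSx) wins over a reset request (BRx).
 * BRR can only reset bits.
 */
static uint16_t bsrr_write(uint16_t odr, uint32_t value)
{
    uint16_t set_bits = value & 0xffff;
    uint16_t reset_bits = value >> 16;

    /* Apply resets first, then sets, so BSx has priority over BRx */
    odr &= ~reset_bits;
    odr |= set_bits;
    return odr;
}

static uint16_t brr_write(uint16_t odr, uint32_t value)
{
    return odr & ~(value & 0xffff);
}
```

The third assertion in the test above relies on exactly the BSx-over-BRx priority that the ordering of the two statements in `bsrr_write()` encodes.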
Like most of the v7M memory mapped system registers, the systick
registers are accessible to privileged code only and user accesses
must generate a BusFault. We implement that for registers in
the NVIC proper already, but missed it for systick since we
implement it as a separate device. Correct the omission.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190617175317.27557-6-peter.maydell@linaro.org
---
 hw/timer/armv7m_systick.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/hw/timer/armv7m_systick.c b/hw/timer/armv7m_systick.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/armv7m_systick.c
+++ b/hw/timer/armv7m_systick.c
@@ -XXX,XX +XXX,XX @@ static void systick_timer_tick(void *opaque)
     }
 }
 
-static uint64_t systick_read(void *opaque, hwaddr addr, unsigned size)
+static MemTxResult systick_read(void *opaque, hwaddr addr, uint64_t *data,
+                                unsigned size, MemTxAttrs attrs)
 {
     SysTickState *s = opaque;
     uint32_t val;
 
+    if (attrs.user) {
+        /* Generate BusFault for unprivileged accesses */
+        return MEMTX_ERROR;
+    }
+
     switch (addr) {
     case 0x0: /* SysTick Control and Status. */
         val = s->control;
@@ -XXX,XX +XXX,XX @@ static uint64_t systick_read(void *opaque, hwaddr addr, unsigned size)
     }
 
     trace_systick_read(addr, val, size);
-    return val;
+    *data = val;
+    return MEMTX_OK;
 }
 
-static void systick_write(void *opaque, hwaddr addr,
-                          uint64_t value, unsigned size)
+static MemTxResult systick_write(void *opaque, hwaddr addr,
+                                 uint64_t value, unsigned size,
+                                 MemTxAttrs attrs)
 {
     SysTickState *s = opaque;
 
+    if (attrs.user) {
+        /* Generate BusFault for unprivileged accesses */
+        return MEMTX_ERROR;
+    }
+
     trace_systick_write(addr, value, size);
 
     switch (addr) {
@@ -XXX,XX +XXX,XX @@ static void systick_write(void *opaque, hwaddr addr,
         qemu_log_mask(LOG_GUEST_ERROR,
                       "SysTick: Bad write offset 0x%" HWADDR_PRIx "\n", addr);
     }
+    return MEMTX_OK;
 }
 
 static const MemoryRegionOps systick_ops = {
-    .read = systick_read,
-    .write = systick_write,
+    .read_with_attrs = systick_read,
+    .write_with_attrs = systick_write,
     .endianness = DEVICE_NATIVE_ENDIAN,
     .valid.min_access_size = 4,
     .valid.max_access_size = 4,
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

While the 8-bit input elements are sequential in the input vector,
the 32-bit output elements are not sequential in the output matrix.
Do not attempt to compute 2 32-bit outputs at the same time.

Cc: qemu-stable@nongnu.org
Fixes: 23a5e3859f5 ("target/arm: Implement SME integer outer product")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2083
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240305163931.242795-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/sme_helper.c       | 77 ++++++++++++++++++-------------
 tests/tcg/aarch64/sme-smopa-1.c   | 47 +++++++++++++++++++
 tests/tcg/aarch64/sme-smopa-2.c   | 54 ++++++++++++++++++++++
 tests/tcg/aarch64/Makefile.target |  2 +-
 4 files changed, 147 insertions(+), 33 deletions(-)
 create mode 100644 tests/tcg/aarch64/sme-smopa-1.c
 create mode 100644 tests/tcg/aarch64/sme-smopa-2.c

diff --git a/target/arm/tcg/sme_helper.c b/target/arm/tcg/sme_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/sme_helper.c
+++ b/target/arm/tcg/sme_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sme_bfmopa)(void *vza, void *vzn, void *vzm, void *vpn,
     }
 }
 
-typedef uint64_t IMOPFn(uint64_t, uint64_t, uint64_t, uint8_t, bool);
+typedef uint32_t IMOPFn32(uint32_t, uint32_t, uint32_t, uint8_t, bool);
+static inline void do_imopa_s(uint32_t *za, uint32_t *zn, uint32_t *zm,
+                              uint8_t *pn, uint8_t *pm,
+                              uint32_t desc, IMOPFn32 *fn)
+{
+    intptr_t row, col, oprsz = simd_oprsz(desc) / 4;
+    bool neg = simd_data(desc);
 
-static inline void do_imopa(uint64_t *za, uint64_t *zn, uint64_t *zm,
-                            uint8_t *pn, uint8_t *pm,
-                            uint32_t desc, IMOPFn *fn)
+    for (row = 0; row < oprsz; ++row) {
+        uint8_t pa = (pn[H1(row >> 1)] >> ((row & 1) * 4)) & 0xf;
+        uint32_t *za_row = &za[tile_vslice_index(row)];
+        uint32_t n = zn[H4(row)];
+
+        for (col = 0; col < oprsz; ++col) {
+            uint8_t pb = pm[H1(col >> 1)] >> ((col & 1) * 4);
+            uint32_t *a = &za_row[H4(col)];
+
+            *a = fn(n, zm[H4(col)], *a, pa & pb, neg);
+        }
+    }
+}
+
+typedef uint64_t IMOPFn64(uint64_t, uint64_t, uint64_t, uint8_t, bool);
+static inline void do_imopa_d(uint64_t *za, uint64_t *zn, uint64_t *zm,
+                              uint8_t *pn, uint8_t *pm,
+                              uint32_t desc, IMOPFn64 *fn)
 {
     intptr_t row, col, oprsz = simd_oprsz(desc) / 8;
     bool neg = simd_data(desc);
@@ -XXX,XX +XXX,XX @@ static inline void do_imopa(uint64_t *za, uint64_t *zn, uint64_t *zm,
 }
 
 #define DEF_IMOP_32(NAME, NTYPE, MTYPE) \
-static uint64_t NAME(uint64_t n, uint64_t m, uint64_t a, uint8_t p, bool neg) \
+static uint32_t NAME(uint32_t n, uint32_t m, uint32_t a, uint8_t p, bool neg) \
 { \
-    uint32_t sum0 = 0, sum1 = 0; \
+    uint32_t sum = 0; \
     /* Apply P to N as a mask, making the inactive elements 0. */ \
     n &= expand_pred_b(p); \
-    sum0 += (NTYPE)(n >> 0) * (MTYPE)(m >> 0); \
-    sum0 += (NTYPE)(n >> 8) * (MTYPE)(m >> 8); \
-    sum0 += (NTYPE)(n >> 16) * (MTYPE)(m >> 16); \
-    sum0 += (NTYPE)(n >> 24) * (MTYPE)(m >> 24); \
-    sum1 += (NTYPE)(n >> 32) * (MTYPE)(m >> 32); \
-    sum1 += (NTYPE)(n >> 40) * (MTYPE)(m >> 40); \
-    sum1 += (NTYPE)(n >> 48) * (MTYPE)(m >> 48); \
-    sum1 += (NTYPE)(n >> 56) * (MTYPE)(m >> 56); \
-    if (neg) { \
-        sum0 = (uint32_t)a - sum0, sum1 = (uint32_t)(a >> 32) - sum1; \
-    } else { \
-        sum0 = (uint32_t)a + sum0, sum1 = (uint32_t)(a >> 32) + sum1; \
-    } \
-    return ((uint64_t)sum1 << 32) | sum0; \
+    sum += (NTYPE)(n >> 0) * (MTYPE)(m >> 0); \
+    sum += (NTYPE)(n >> 8) * (MTYPE)(m >> 8); \
+    sum += (NTYPE)(n >> 16) * (MTYPE)(m >> 16); \
+    sum += (NTYPE)(n >> 24) * (MTYPE)(m >> 24); \
+    return neg ? a - sum : a + sum; \
 }
 
 #define DEF_IMOP_64(NAME, NTYPE, MTYPE) \
@@ -XXX,XX +XXX,XX @@ DEF_IMOP_64(umopa_d, uint16_t, uint16_t)
 DEF_IMOP_64(sumopa_d, int16_t, uint16_t)
 DEF_IMOP_64(usmopa_d, uint16_t, int16_t)
 
-#define DEF_IMOPH(NAME) \
-    void HELPER(sme_##NAME)(void *vza, void *vzn, void *vzm, void *vpn, \
-                            void *vpm, uint32_t desc) \
-    { do_imopa(vza, vzn, vzm, vpn, vpm, desc, NAME); }
+#define DEF_IMOPH(NAME, S) \
+    void HELPER(sme_##NAME##_##S)(void *vza, void *vzn, void *vzm, \
+                                  void *vpn, void *vpm, uint32_t desc) \
+    { do_imopa_##S(vza, vzn, vzm, vpn, vpm, desc, NAME##_##S); }
 
-DEF_IMOPH(smopa_s)
-DEF_IMOPH(umopa_s)
-DEF_IMOPH(sumopa_s)
-DEF_IMOPH(usmopa_s)
-DEF_IMOPH(smopa_d)
-DEF_IMOPH(umopa_d)
-DEF_IMOPH(sumopa_d)
-DEF_IMOPH(usmopa_d)
+DEF_IMOPH(smopa, s)
+DEF_IMOPH(umopa, s)
+DEF_IMOPH(sumopa, s)
+DEF_IMOPH(usmopa, s)
+
+DEF_IMOPH(smopa, d)
+DEF_IMOPH(umopa, d)
+DEF_IMOPH(sumopa, d)
+DEF_IMOPH(usmopa, d)
diff --git a/tests/tcg/aarch64/sme-smopa-1.c b/tests/tcg/aarch64/sme-smopa-1.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/sme-smopa-1.c
@@ -XXX,XX +XXX,XX @@
+#include <stdio.h>
+#include <string.h>
+
+int main()
+{
+    static const int cmp[4][4] = {
+        { 110, 134, 158, 182 },
+        { 390, 478, 566, 654 },
+        { 670, 822, 974, 1126 },
+        { 950, 1166, 1382, 1598 }
+    };
+    int dst[4][4];
+    int *tmp = &dst[0][0];
+
+    asm volatile(
+        ".arch armv8-r+sme\n\t"
+        "smstart\n\t"
+        "index z0.b, #0, #1\n\t"
+        "movprfx z1, z0\n\t"
+        "add z1.b, z1.b, #16\n\t"
+        "ptrue p0.b\n\t"
+        "smopa za0.s, p0/m, p0/m, z0.b, z1.b\n\t"
+        "ptrue p0.s, vl4\n\t"
+        "mov w12, #0\n\t"
+        "st1w { za0h.s[w12, #0] }, p0, [%0]\n\t"
+        "add %0, %0, #16\n\t"
+        "st1w { za0h.s[w12, #1] }, p0, [%0]\n\t"
+        "add %0, %0, #16\n\t"
+        "st1w { za0h.s[w12, #2] }, p0, [%0]\n\t"
+        "add %0, %0, #16\n\t"
+        "st1w { za0h.s[w12, #3] }, p0, [%0]\n\t"
+        "smstop"
+        : "+r"(tmp) : : "memory");
+
+    if (memcmp(cmp, dst, sizeof(dst)) == 0) {
+        return 0;
+    }
+
+    /* See above for correct results. */
+    for (int i = 0; i < 4; ++i) {
+        for (int j = 0; j < 4; ++j) {
+            printf("%6d", dst[i][j]);
+        }
+        printf("\n");
+    }
+    return 1;
+}
diff --git a/tests/tcg/aarch64/sme-smopa-2.c b/tests/tcg/aarch64/sme-smopa-2.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/sme-smopa-2.c
@@ -XXX,XX +XXX,XX @@
+#include <stdio.h>
+#include <string.h>
+
+int main()
+{
+    static const long cmp[4][4] = {
+        { 110, 134, 158, 182 },
+        { 390, 478, 566, 654 },
+        { 670, 822, 974, 1126 },
+        { 950, 1166, 1382, 1598 }
+    };
+    long dst[4][4];
+    long *tmp = &dst[0][0];
+    long svl;
+
+    /* Validate that we have a wide enough vector for 4 elements. */
+    asm(".arch armv8-r+sme-i64\n\trdsvl %0, #1" : "=r"(svl));
+    if (svl < 32) {
+        return 0;
+    }
+
+    asm volatile(
+        "smstart\n\t"
+        "index z0.h, #0, #1\n\t"
+        "movprfx z1, z0\n\t"
+        "add z1.h, z1.h, #16\n\t"
+        "ptrue p0.b\n\t"
+        "smopa za0.d, p0/m, p0/m, z0.h, z1.h\n\t"
+        "ptrue p0.d, vl4\n\t"
+        "mov w12, #0\n\t"
+        "st1d { za0h.d[w12, #0] }, p0, [%0]\n\t"
+        "add %0, %0, #32\n\t"
+        "st1d { za0h.d[w12, #1] }, p0, [%0]\n\t"
+        "mov w12, #2\n\t"
+        "add %0, %0, #32\n\t"
+        "st1d { za0h.d[w12, #0] }, p0, [%0]\n\t"
+        "add %0, %0, #32\n\t"
+        "st1d { za0h.d[w12, #1] }, p0, [%0]\n\t"
+        "smstop"
+        : "+r"(tmp) : : "memory");
+
+    if (memcmp(cmp, dst, sizeof(dst)) == 0) {
+        return 0;
+    }
+
+    /* See above for correct results. */
+    for (int i = 0; i < 4; ++i) {
+        for (int j = 0; j < 4; ++j) {
+            printf("%6ld", dst[i][j]);
+        }
+        printf("\n");
+    }
+    return 1;
+}
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
index XXXXXXX..XXXXXXX 100644
--- a/tests/tcg/aarch64/Makefile.target
+++ b/tests/tcg/aarch64/Makefile.target
@@ -XXX,XX +XXX,XX @@ endif
 
 # SME Tests
 ifneq ($(CROSS_AS_HAS_ARMV9_SME),)
-AARCH64_TESTS += sme-outprod1
+AARCH64_TESTS += sme-outprod1 sme-smopa-1 sme-smopa-2
 endif
 
 # System Registers Tests
--
2.34.1
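For reference, the arithmetic that sme-smopa-1.c expects can be reproduced with a plain scalar model (an illustrative sketch, not part of the patch): each 32-bit accumulator at (row, col) sums four sequential byte products, which is why cmp[0][0] is 0*16 + 1*17 + 2*18 + 3*19 = 110, and why adjacent outputs cannot be built by splitting one shared 64-bit lane, as the buggy helper did.

```c
#include <stdint.h>

/*
 * Scalar reference for the SMOPA za0.s computation in sme-smopa-1.c,
 * assuming a 128-bit streaming vector length (4 rows/cols of 4 bytes).
 * Output element (row, col) accumulates the dot product of four
 * sequential 8-bit inputs from zn (row slice) and zm (col slice).
 */
static void smopa_s_ref(const int8_t *zn, const int8_t *zm, int za[4][4])
{
    for (int row = 0; row < 4; ++row) {
        for (int col = 0; col < 4; ++col) {
            int sum = 0;
            for (int k = 0; k < 4; ++k) {
                sum += zn[4 * row + k] * zm[4 * col + k];
            }
            za[row][col] += sum;
        }
    }
}
```

Feeding it the test's inputs (zn = 0..15, zm = 16..31, za zeroed) reproduces the cmp matrix in the test exactly.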
In the various helper functions for v7M/v8M instructions, use
the _ra versions of cpu_stl_data() and friends. Otherwise we
may get wrong behaviour or an assert() due to not being able
to locate the TB if there is an exception on the memory access
or if it performs an IO operation when in icount mode.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190617175317.27557-5-peter.maydell@linaro.org
---
 target/arm/m_helper.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
     }
 
     /* Note that these stores can throw exceptions on MPU faults */
-    cpu_stl_data(env, sp, nextinst);
-    cpu_stl_data(env, sp + 4, saved_psr);
+    cpu_stl_data_ra(env, sp, nextinst, GETPC());
+    cpu_stl_data_ra(env, sp + 4, saved_psr, GETPC());
 
     env->regs[13] = sp;
     env->regs[14] = 0xfeffffff;
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
     /* fptr is the value of Rn, the frame pointer we store the FP regs to */
     bool s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
     bool lspact = env->v7m.fpccr[s] & R_V7M_FPCCR_LSPACT_MASK;
+    uintptr_t ra = GETPC();
 
     assert(env->v7m.secure);
 
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
          * Note that we do not use v7m_stack_write() here, because the
          * accesses should not set the FSR bits for stacking errors if they
          * fail. (In pseudocode terms, they are AccType_NORMAL, not AccType_STACK
-         * or AccType_LAZYFP). Faults in cpu_stl_data() will throw exceptions
+         * or AccType_LAZYFP). Faults in cpu_stl_data_ra() will throw exceptions
          * and longjmp out.
          */
         if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
                 if (i >= 16) {
                     faddr += 8; /* skip the slot for the FPSCR */
                 }
-                cpu_stl_data(env, faddr, slo);
-                cpu_stl_data(env, faddr + 4, shi);
+                cpu_stl_data_ra(env, faddr, slo, ra);
+                cpu_stl_data_ra(env, faddr + 4, shi, ra);
             }
-            cpu_stl_data(env, fptr + 0x40, vfp_get_fpscr(env));
+            cpu_stl_data_ra(env, fptr + 0x40, vfp_get_fpscr(env), ra);
 
             /*
              * If TS is 0 then s0 to s15 and FPSCR are UNKNOWN; we choose to
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
 
 void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
 {
+    uintptr_t ra = GETPC();
+
     /* fptr is the value of Rn, the frame pointer we load the FP regs from */
     assert(env->v7m.secure);
 
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
                 faddr += 8; /* skip the slot for the FPSCR */
             }
 
-            slo = cpu_ldl_data(env, faddr);
-            shi = cpu_ldl_data(env, faddr + 4);
+            slo = cpu_ldl_data_ra(env, faddr, ra);
+            shi = cpu_ldl_data_ra(env, faddr + 4, ra);
 
             dn = (uint64_t) shi << 32 | slo;
             *aa32_vfp_dreg(env, i / 2) = dn;
         }
-        fpscr = cpu_ldl_data(env, fptr + 0x40);
+        fpscr = cpu_ldl_data_ra(env, fptr + 0x40, ra);
         vfp_set_fpscr(env, fpscr);
     }
 
--
2.20.1

The sun4v RTC device model added under commit a0e893039cf2ce0 in 2016
was unfortunately added with a license of GPL-v3-or-later, which is
not compatible with other QEMU code which has a GPL-v2-only license.

Relicense the code in the .c and the .h file to GPL-v2-or-later,
to make it compatible with the rest of QEMU.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini (for Red Hat) <pbonzini@redhat.com>
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20240223161300.938542-1-peter.maydell@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/rtc/sun4v-rtc.h | 2 +-
 hw/rtc/sun4v-rtc.c         | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/hw/rtc/sun4v-rtc.h b/include/hw/rtc/sun4v-rtc.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/rtc/sun4v-rtc.h
+++ b/include/hw/rtc/sun4v-rtc.h
@@ -XXX,XX +XXX,XX @@
  *
  * Copyright (c) 2016 Artyom Tarasenko
  *
- * This code is licensed under the GNU GPL v3 or (at your option) any later
+ * This code is licensed under the GNU GPL v2 or (at your option) any later
  * version.
  */
 
diff --git a/hw/rtc/sun4v-rtc.c b/hw/rtc/sun4v-rtc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/rtc/sun4v-rtc.c
+++ b/hw/rtc/sun4v-rtc.c
@@ -XXX,XX +XXX,XX @@
  *
  * Copyright (c) 2016 Artyom Tarasenko
  *
- * This code is licensed under the GNU GPL v3 or (at your option) any later
+ * This code is licensed under the GNU GPL v2 or (at your option) any later
  * version.
  */
--
2.34.1
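The _ra convention in the m_helper.c patch above can be illustrated with a toy model. This is purely illustrative and not QEMU code: `stl_data_ra()`, `helper_store_pair()` and `last_fault_ra` are invented names. The point is that the helper threads its own call site down into the store routine, so that when the store faults the fault path still knows which caller was responsible, as GETPC() makes possible in the real helpers:

```c
#include <assert.h>
#include <stdint.h>

/* Recorded by the fault path, standing in for "locate the TB". */
static uintptr_t last_fault_ra;

/* Toy store: pretend addresses >= 0x1000 fault. */
static int stl_data_ra(uint32_t addr, uint32_t val, uintptr_t retaddr)
{
    (void)val;
    if (addr >= 0x1000) {
        last_fault_ra = retaddr;   /* fault path can attribute the access */
        return -1;
    }
    return 0;
}

/* A helper passes its call site down once, reusing it for both stores. */
static int helper_store_pair(uint32_t sp, uint32_t a, uint32_t b,
                             uintptr_t retaddr)
{
    if (stl_data_ra(sp, a, retaddr) < 0) {
        return -1;
    }
    return stl_data_ra(sp + 4, b, retaddr);
}
```

A plain `stl_data()` without the retaddr parameter would have to pass 0 here, which is exactly the information loss the patch removes.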
From: Thomas Huth <thuth@redhat.com>

Move the code to a separate file so that we do not have to compile
it anymore if CONFIG_ARM_V7M is not set.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-id: 20240308141051.536599-2-thuth@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/cpu-v7m.c   | 290 +++++++++++++++++++++++++++++++++++++
 target/arm/tcg/cpu32.c     | 261 ---------------------------------
 target/arm/meson.build     |   3 +
 target/arm/tcg/meson.build |   3 +
 4 files changed, 296 insertions(+), 261 deletions(-)
 create mode 100644 target/arm/tcg/cpu-v7m.c

diff --git a/target/arm/tcg/cpu-v7m.c b/target/arm/tcg/cpu-v7m.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/tcg/cpu-v7m.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * QEMU ARMv7-M TCG-only CPUs.
+ *
+ * Copyright (c) 2012 SUSE LINUX Products GmbH
+ *
+ * This code is licensed under the GNU GPL v2 or later.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "hw/core/tcg-cpu-ops.h"
+#include "internals.h"
+
+#if !defined(CONFIG_USER_ONLY)
+
+#include "hw/intc/armv7m_nvic.h"
+
+static bool arm_v7m_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
+{
+    CPUClass *cc = CPU_GET_CLASS(cs);

From: Philippe Mathieu-Daudé <philmd@redhat.com>

These routines are TCG specific.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190701194942.10092-2-philmd@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/Makefile.objs  |   2 +-
 target/arm/cpu.c          |   9 +-
 target/arm/debug_helper.c | 311 ++++++++++++++++++++++++++++++++++++++
 target/arm/op_helper.c    | 295 ------------------------------------
 4 files changed, 315 insertions(+), 302 deletions(-)
 create mode 100644 target/arm/debug_helper.c

diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/Makefile.objs
+++ b/target/arm/Makefile.objs
@@ -XXX,XX +XXX,XX @@ target/arm/translate-sve.o: target/arm/decode-sve.inc.c
 target/arm/translate.o: target/arm/decode-vfp.inc.c
 target/arm/translate.o: target/arm/decode-vfp-uncond.inc.c
 
-obj-y += tlb_helper.o
+obj-y += tlb_helper.o debug_helper.o
 obj-y += translate.o op_helper.o
 obj-y += crypto_helper.o
 obj-y += iwmmxt_helper.o vec_helper.o neon_helper.o
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_class_init(ObjectClass *oc, void *data)
     cc->gdb_arch_name = arm_gdb_arch_name;
     cc->gdb_get_dynamic_xml = arm_gdb_get_dynamic_xml;
     cc->gdb_stop_before_watchpoint = true;
-    cc->debug_excp_handler = arm_debug_excp_handler;
-    cc->debug_check_watchpoint = arm_debug_check_watchpoint;
-#if !defined(CONFIG_USER_ONLY)
-    cc->adjust_watchpoint_address = arm_adjust_watchpoint_address;
-#endif
-
     cc->disas_set_info = arm_disas_set_info;
 #ifdef CONFIG_TCG
     cc->tcg_initialize = arm_translate_init;
     cc->tlb_fill = arm_cpu_tlb_fill;
+    cc->debug_excp_handler = arm_debug_excp_handler;
+    cc->debug_check_watchpoint = arm_debug_check_watchpoint;
 #if !defined(CONFIG_USER_ONLY)
     cc->do_unaligned_access = arm_cpu_do_unaligned_access;
     cc->do_transaction_failed = arm_cpu_do_transaction_failed;
+    cc->adjust_watchpoint_address = arm_adjust_watchpoint_address;
 #endif /* CONFIG_TCG && !CONFIG_USER_ONLY */
 #endif
 }
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/debug_helper.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * ARM debug helpers.
+ *
+ * This code is licensed under the GNU GPL v2 or later.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "internals.h"
+#include "exec/exec-all.h"
+#include "exec/helper-proto.h"
+
+/* Return true if the linked breakpoint entry lbn passes its checks */
+static bool linked_bp_matches(ARMCPU *cpu, int lbn)
+{
+    CPUARMState *env = &cpu->env;
+    uint64_t bcr = env->cp15.dbgbcr[lbn];
+    int brps = extract32(cpu->dbgdidr, 24, 4);
+    int ctx_cmps = extract32(cpu->dbgdidr, 20, 4);
+    int bt;
+    uint32_t contextidr;
+
+    /*
+     * Links to unimplemented or non-context aware breakpoints are
+     * CONSTRAINED UNPREDICTABLE: either behave as if disabled, or
+     * as if linked to an UNKNOWN context-aware breakpoint (in which
+     * case DBGWCR<n>_EL1.LBN must indicate that breakpoint).
+     * We choose the former.
+     */
+    if (lbn > brps || lbn < (brps - ctx_cmps)) {
+        return false;
+    }
+
+    bcr = env->cp15.dbgbcr[lbn];
+
+    if (extract64(bcr, 0, 1) == 0) {
+        /* Linked breakpoint disabled : generate no events */
+        return false;
+    }
+
+    bt = extract64(bcr, 20, 4);
+
+    /*
+     * We match the whole register even if this is AArch32 using the
+     * short descriptor format (in which case it holds both PROCID and ASID),
+     * since we don't implement the optional v7 context ID masking.
+     */
+    contextidr = extract64(env->cp15.contextidr_el[1], 0, 32);
+
+    switch (bt) {
+    case 3: /* linked context ID match */
+        if (arm_current_el(env) > 1) {
+            /* Context matches never fire in EL2 or (AArch64) EL3 */
+            return false;
+        }
+        return (contextidr == extract64(env->cp15.dbgbvr[lbn], 0, 32));
+    case 5: /* linked address mismatch (reserved in AArch64) */
+    case 9: /* linked VMID match (reserved if no EL2) */
+    case 11: /* linked context ID and VMID match (reserved if no EL2) */
+    default:
+        /*
+         * Links to Unlinked context breakpoints must generate no
+         * events; we choose to do the same for reserved values too.
+         */
+        return false;
+    }
+
+    return false;
+}
+
+static bool bp_wp_matches(ARMCPU *cpu, int n, bool is_wp)
+{
+    CPUARMState *env = &cpu->env;
+    uint64_t cr;
+    int pac, hmc, ssc, wt, lbn;
+    /*
+     * Note that for watchpoints the check is against the CPU security
+     * state, not the S/NS attribute on the offending data access.
+     */
+    bool is_secure = arm_is_secure(env);
+    int access_el = arm_current_el(env);
+
+    if (is_wp) {
+        CPUWatchpoint *wp = env->cpu_watchpoint[n];
+
+        if (!wp || !(wp->flags & BP_WATCHPOINT_HIT)) {
+            return false;
+        }
+        cr = env->cp15.dbgwcr[n];
+        if (wp->hitattrs.user) {
+            /*
+             * The LDRT/STRT/LDT/STT "unprivileged access" instructions should
+             * match watchpoints as if they were accesses done at EL0, even if
+             * the CPU is at EL1 or higher.
+             */
+            access_el = 0;
+        }
+    } else {
+        uint64_t pc = is_a64(env) ? env->pc : env->regs[15];
+
+        if (!env->cpu_breakpoint[n] || env->cpu_breakpoint[n]->pc != pc) {
+            return false;
+        }
+        cr = env->cp15.dbgbcr[n];
+    }
+    /*
+     * The WATCHPOINT_HIT flag guarantees us that the watchpoint is
+     * enabled and that the address and access type match; for breakpoints
+     * we know the address matched; check the remaining fields, including
+     * linked breakpoints. We rely on WCR and BCR having the same layout
+     * for the LBN, SSC, HMC, PAC/PMC and is-linked fields.
+     * Note that some combinations of {PAC, HMC, SSC} are reserved and
+     * must act either like some valid combination or as if the watchpoint
+     * were disabled. We choose the former, and use this together with
+     * the fact that EL3 must always be Secure and EL2 must always be
+     * Non-Secure to simplify the code slightly compared to the full
+     * table in the ARM ARM.
+     */
+    pac = extract64(cr, 1, 2);
+    hmc = extract64(cr, 13, 1);
+    ssc = extract64(cr, 14, 2);
+
+    switch (ssc) {
+    case 0:
+        break;
+    case 1:
+    case 3:
+        if (is_secure) {
+            return false;
+        }
+        break;
+    case 2:
+        if (!is_secure) {
+            return false;
+        }
+        break;
+    }
+
+    switch (access_el) {
+    case 3:
+    case 2:
+        if (!hmc) {
+            return false;
+        }
+        break;
+    case 1:
+        if (extract32(pac, 0, 1) == 0) {
+            return false;
+        }
+        break;
+    case 0:
+        if (extract32(pac, 1, 1) == 0) {
+            return false;
+        }
+        break;
+    default:
+        g_assert_not_reached();
+    }
+
+    wt = extract64(cr, 20, 1);
+    lbn = extract64(cr, 16, 4);
+
+    if (wt && !linked_bp_matches(cpu, lbn)) {
+        return false;
+    }
+
+    return true;
+}
+
+static bool check_watchpoints(ARMCPU *cpu)
+{
+    CPUARMState *env = &cpu->env;
+    int n;
+
+    /*
+     * If watchpoints are disabled globally or we can't take debug
+     * exceptions here then watchpoint firings are ignored.
+     */
+    if (extract32(env->cp15.mdscr_el1, 15, 1) == 0
+        || !arm_generate_debug_exceptions(env)) {
+        return false;
+    }
+
+    for (n = 0; n < ARRAY_SIZE(env->cpu_watchpoint); n++) {
+        if (bp_wp_matches(cpu, n, true)) {
+            return true;
+        }
+    }
+    return false;
+}
+
+static bool check_breakpoints(ARMCPU *cpu)
+{
+    CPUARMState *env = &cpu->env;
+    int n;
+
+    /*
+     * If breakpoints are disabled globally or we can't take debug
+     * exceptions here then breakpoint firings are ignored.
+     */
+    if (extract32(env->cp15.mdscr_el1, 15, 1) == 0
+        || !arm_generate_debug_exceptions(env)) {
+        return false;
+    }
+
+    for (n = 0; n < ARRAY_SIZE(env->cpu_breakpoint); n++) {
+        if (bp_wp_matches(cpu, n, false)) {
+            return true;
+        }
+    }
+    return false;
+}
+
+void HELPER(check_breakpoints)(CPUARMState *env)
+{
+    ARMCPU *cpu = env_archcpu(env);
+
+    if (check_breakpoints(cpu)) {
+        HELPER(exception_internal(env, EXCP_DEBUG));
+    }
+}
+
+bool arm_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp)
+{
+    /*
+     * Called by core code when a CPU watchpoint fires; need to check if this
+     * is also an architectural watchpoint match.
+     */
+    ARMCPU *cpu = ARM_CPU(cs);
+
+    return check_watchpoints(cpu);
+}
+
+void arm_debug_excp_handler(CPUState *cs)
+{
+    /*
+     * Called by core code when a watchpoint or breakpoint fires;
+     * need to check which one and raise the appropriate exception.
302
+ */
303
+ ARMCPU *cpu = ARM_CPU(cs);
+    CPUARMState *env = &cpu->env;
+    CPUWatchpoint *wp_hit = cs->watchpoint_hit;
+
+    if (wp_hit) {
+        if (wp_hit->flags & BP_CPU) {
+            bool wnr = (wp_hit->flags & BP_WATCHPOINT_HIT_WRITE) != 0;
+            bool same_el = arm_debug_target_el(env) == arm_current_el(env);
+
+            cs->watchpoint_hit = NULL;
+
+            env->exception.fsr = arm_debug_exception_fsr(env);
+            env->exception.vaddress = wp_hit->hitaddr;
+            raise_exception(env, EXCP_DATA_ABORT,
+                            syn_watchpoint(same_el, 0, wnr),
+                            arm_debug_target_el(env));
+        }
+    } else {
+        uint64_t pc = is_a64(env) ? env->pc : env->regs[15];
+        bool same_el = (arm_debug_target_el(env) == arm_current_el(env));
+
+        /*
+         * (1) GDB breakpoints should be handled first.
+         * (2) Do not raise a CPU exception if no CPU breakpoint has fired,
+         * since singlestep is also done by generating a debug internal
+         * exception.
+         */
+        if (cpu_breakpoint_test(cs, pc, BP_GDB)
+            || !cpu_breakpoint_test(cs, pc, BP_CPU)) {
+            return;
+        }
+
+        env->exception.fsr = arm_debug_exception_fsr(env);
+        /*
+         * FAR is UNKNOWN: clear vaddress to avoid potentially exposing
+         * values to the guest that it shouldn't be able to see at its
+         * exception/security level.
+         */
+        env->exception.vaddress = 0;
+        raise_exception(env, EXCP_PREFETCH_ABORT,
+                        syn_breakpoint(same_el),
+                        arm_debug_target_el(env));
+    }
+}
+
+#if !defined(CONFIG_USER_ONLY)
+
+vaddr arm_adjust_watchpoint_address(CPUState *cs, vaddr addr, int len)
+{
+    ARMCPU *cpu = ARM_CPU(cs);
+    CPUARMState *env = &cpu->env;
+
+    /*
+     * In BE32 system mode, target memory is stored byteswapped (on a
+     * little-endian host system), and by the time we reach here (via an
+     * opcode helper) the addresses of subword accesses have been adjusted
+     * to account for that, which means that watchpoints will not match.
+     * Undo the adjustment here.
+     */
+    if (arm_sctlr_b(env)) {
+        if (len == 1) {
+            addr ^= 3;
+        } else if (len == 2) {
+            addr ^= 2;
+        }
+    ARMCPU *cpu = ARM_CPU(cs);
+    CPUARMState *env = &cpu->env;
+    bool ret = false;
+
+    /*
+     * ARMv7-M interrupt masking works differently than -A or -R.
+     * There is no FIQ/IRQ distinction. Instead of I and F bits
+     * masking FIQ and IRQ interrupts, an exception is taken only
+     * if it is higher priority than the current execution priority
+     * (which depends on state like BASEPRI, FAULTMASK and the
+     * currently active exception).
+     */
+    if (interrupt_request & CPU_INTERRUPT_HARD
+        && (armv7m_nvic_can_take_pending_exception(env->nvic))) {
+        cs->exception_index = EXCP_IRQ;
+        cc->tcg_ops->do_interrupt(cs);
+        ret = true;
+    }
+    return ret;
+}
+
+#endif /* !CONFIG_USER_ONLY */
+
+static void cortex_m0_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+    set_feature(&cpu->env, ARM_FEATURE_V6);
+    set_feature(&cpu->env, ARM_FEATURE_M);
+
+    cpu->midr = 0x410cc200;
+
+    /*
+     * These ID register values are not guest visible, because
+     * we do not implement the Main Extension. They must be set
+     * to values corresponding to the Cortex-M0's implemented
+     * features, because QEMU generally controls its emulation
+     * by looking at ID register fields. We use the same values as
+     * for the M3.
+     */
+    cpu->isar.id_pfr0 = 0x00000030;
+    cpu->isar.id_pfr1 = 0x00000200;
+    cpu->isar.id_dfr0 = 0x00100000;
+    cpu->id_afr0 = 0x00000000;
+    cpu->isar.id_mmfr0 = 0x00000030;
+    cpu->isar.id_mmfr1 = 0x00000000;
+    cpu->isar.id_mmfr2 = 0x00000000;
+    cpu->isar.id_mmfr3 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01141110;
+    cpu->isar.id_isar1 = 0x02111000;
+    cpu->isar.id_isar2 = 0x21112231;
+    cpu->isar.id_isar3 = 0x01111110;
+    cpu->isar.id_isar4 = 0x01310102;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
+}
+
+static void cortex_m3_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+    set_feature(&cpu->env, ARM_FEATURE_V7);
+    set_feature(&cpu->env, ARM_FEATURE_M);
+    set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
+    cpu->midr = 0x410fc231;
+    cpu->pmsav7_dregion = 8;
+    cpu->isar.id_pfr0 = 0x00000030;
+    cpu->isar.id_pfr1 = 0x00000200;
+    cpu->isar.id_dfr0 = 0x00100000;
+    cpu->id_afr0 = 0x00000000;
+    cpu->isar.id_mmfr0 = 0x00000030;
+    cpu->isar.id_mmfr1 = 0x00000000;
+    cpu->isar.id_mmfr2 = 0x00000000;
+    cpu->isar.id_mmfr3 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01141110;
+    cpu->isar.id_isar1 = 0x02111000;
+    cpu->isar.id_isar2 = 0x21112231;
+    cpu->isar.id_isar3 = 0x01111110;
+    cpu->isar.id_isar4 = 0x01310102;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
+}
+
+static void cortex_m4_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    set_feature(&cpu->env, ARM_FEATURE_V7);
+    set_feature(&cpu->env, ARM_FEATURE_M);
+    set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
+    set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
+    cpu->midr = 0x410fc240; /* r0p0 */
+    cpu->pmsav7_dregion = 8;
+    cpu->isar.mvfr0 = 0x10110021;
+    cpu->isar.mvfr1 = 0x11000011;
+    cpu->isar.mvfr2 = 0x00000000;
+    cpu->isar.id_pfr0 = 0x00000030;
+    cpu->isar.id_pfr1 = 0x00000200;
+    cpu->isar.id_dfr0 = 0x00100000;
+    cpu->id_afr0 = 0x00000000;
+    cpu->isar.id_mmfr0 = 0x00000030;
+    cpu->isar.id_mmfr1 = 0x00000000;
+    cpu->isar.id_mmfr2 = 0x00000000;
+    cpu->isar.id_mmfr3 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01141110;
+    cpu->isar.id_isar1 = 0x02111000;
+    cpu->isar.id_isar2 = 0x21112231;
+    cpu->isar.id_isar3 = 0x01111110;
+    cpu->isar.id_isar4 = 0x01310102;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
+}
+
+static void cortex_m7_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    set_feature(&cpu->env, ARM_FEATURE_V7);
+    set_feature(&cpu->env, ARM_FEATURE_M);
+    set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
+    set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
+    cpu->midr = 0x411fc272; /* r1p2 */
+    cpu->pmsav7_dregion = 8;
+    cpu->isar.mvfr0 = 0x10110221;
+    cpu->isar.mvfr1 = 0x12000011;
+    cpu->isar.mvfr2 = 0x00000040;
+    cpu->isar.id_pfr0 = 0x00000030;
+    cpu->isar.id_pfr1 = 0x00000200;
+    cpu->isar.id_dfr0 = 0x00100000;
+    cpu->id_afr0 = 0x00000000;
+    cpu->isar.id_mmfr0 = 0x00100030;
+    cpu->isar.id_mmfr1 = 0x00000000;
+    cpu->isar.id_mmfr2 = 0x01000000;
+    cpu->isar.id_mmfr3 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01101110;
+    cpu->isar.id_isar1 = 0x02112000;
+    cpu->isar.id_isar2 = 0x20232231;
+    cpu->isar.id_isar3 = 0x01111131;
+    cpu->isar.id_isar4 = 0x01310132;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
+}
+
+static void cortex_m33_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    set_feature(&cpu->env, ARM_FEATURE_V8);
+    set_feature(&cpu->env, ARM_FEATURE_M);
+    set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
+    set_feature(&cpu->env, ARM_FEATURE_M_SECURITY);
+    set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
+    cpu->midr = 0x410fd213; /* r0p3 */
+    cpu->pmsav7_dregion = 16;
+    cpu->sau_sregion = 8;
+    cpu->isar.mvfr0 = 0x10110021;
+    cpu->isar.mvfr1 = 0x11000011;
+    cpu->isar.mvfr2 = 0x00000040;
+    cpu->isar.id_pfr0 = 0x00000030;
+    cpu->isar.id_pfr1 = 0x00000210;
+    cpu->isar.id_dfr0 = 0x00200000;
+    cpu->id_afr0 = 0x00000000;
+    cpu->isar.id_mmfr0 = 0x00101F40;
+    cpu->isar.id_mmfr1 = 0x00000000;
+    cpu->isar.id_mmfr2 = 0x01000000;
+    cpu->isar.id_mmfr3 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01101110;
+    cpu->isar.id_isar1 = 0x02212000;
+    cpu->isar.id_isar2 = 0x20232232;
+    cpu->isar.id_isar3 = 0x01111131;
+    cpu->isar.id_isar4 = 0x01310132;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
+    cpu->clidr = 0x00000000;
+    cpu->ctr = 0x8000c000;
+}
+
+static void cortex_m55_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    set_feature(&cpu->env, ARM_FEATURE_V8);
+    set_feature(&cpu->env, ARM_FEATURE_V8_1M);
+    set_feature(&cpu->env, ARM_FEATURE_M);
+    set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
+    set_feature(&cpu->env, ARM_FEATURE_M_SECURITY);
+    set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
+    cpu->midr = 0x410fd221; /* r0p1 */
+    cpu->revidr = 0;
+    cpu->pmsav7_dregion = 16;
+    cpu->sau_sregion = 8;
+    /* These are the MVFR* values for the FPU + full MVE configuration */
+    cpu->isar.mvfr0 = 0x10110221;
+    cpu->isar.mvfr1 = 0x12100211;
+    cpu->isar.mvfr2 = 0x00000040;
+    cpu->isar.id_pfr0 = 0x20000030;
+    cpu->isar.id_pfr1 = 0x00000230;
+    cpu->isar.id_dfr0 = 0x10200000;
+    cpu->id_afr0 = 0x00000000;
+    cpu->isar.id_mmfr0 = 0x00111040;
+    cpu->isar.id_mmfr1 = 0x00000000;
+    cpu->isar.id_mmfr2 = 0x01000000;
+    cpu->isar.id_mmfr3 = 0x00000011;
+    cpu->isar.id_isar0 = 0x01103110;
+    cpu->isar.id_isar1 = 0x02212000;
+    cpu->isar.id_isar2 = 0x20232232;
+    cpu->isar.id_isar3 = 0x01111131;
+    cpu->isar.id_isar4 = 0x01310132;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
+    cpu->clidr = 0x00000000; /* caches not implemented */
+    cpu->ctr = 0x8303c003;
+}
+
+static const TCGCPUOps arm_v7m_tcg_ops = {
+    .initialize = arm_translate_init,
+    .synchronize_from_tb = arm_cpu_synchronize_from_tb,
+    .debug_excp_handler = arm_debug_excp_handler,
+    .restore_state_to_opc = arm_restore_state_to_opc,
+
+#ifdef CONFIG_USER_ONLY
+    .record_sigsegv = arm_cpu_record_sigsegv,
+    .record_sigbus = arm_cpu_record_sigbus,
+#else
+    .tlb_fill = arm_cpu_tlb_fill,
+    .cpu_exec_interrupt = arm_v7m_cpu_exec_interrupt,
+    .do_interrupt = arm_v7m_cpu_do_interrupt,
+    .do_transaction_failed = arm_cpu_do_transaction_failed,
+    .do_unaligned_access = arm_cpu_do_unaligned_access,
+    .adjust_watchpoint_address = arm_adjust_watchpoint_address,
+    .debug_check_watchpoint = arm_debug_check_watchpoint,
+    .debug_check_breakpoint = arm_debug_check_breakpoint,
+#endif /* !CONFIG_USER_ONLY */
+};
+
+static void arm_v7m_class_init(ObjectClass *oc, void *data)
+{
+    ARMCPUClass *acc = ARM_CPU_CLASS(oc);
+    CPUClass *cc = CPU_CLASS(oc);
+
+    acc->info = data;
+    cc->tcg_ops = &arm_v7m_tcg_ops;
+    cc->gdb_core_xml_file = "arm-m-profile.xml";
+}
+
+static const ARMCPUInfo arm_v7m_cpus[] = {
+    { .name = "cortex-m0", .initfn = cortex_m0_initfn,
+      .class_init = arm_v7m_class_init },
+    { .name = "cortex-m3", .initfn = cortex_m3_initfn,
+      .class_init = arm_v7m_class_init },
+    { .name = "cortex-m4", .initfn = cortex_m4_initfn,
+      .class_init = arm_v7m_class_init },
+    { .name = "cortex-m7", .initfn = cortex_m7_initfn,
+      .class_init = arm_v7m_class_init },
+    { .name = "cortex-m33", .initfn = cortex_m33_initfn,
+      .class_init = arm_v7m_class_init },
+    { .name = "cortex-m55", .initfn = cortex_m55_initfn,
+      .class_init = arm_v7m_class_init },
+};
+
+static void arm_v7m_cpu_register_types(void)
+{
+    size_t i;
+
+    for (i = 0; i < ARRAY_SIZE(arm_v7m_cpus); ++i) {
+        arm_cpu_register(&arm_v7m_cpus[i]);
+    }
+}
+
+type_init(arm_v7m_cpu_register_types)
diff --git a/target/arm/tcg/cpu32.c b/target/arm/tcg/cpu32.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/cpu32.c
+++ b/target/arm/tcg/cpu32.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/boards.h"
 #endif
 #include "cpregs.h"
-#if !defined(CONFIG_USER_ONLY) && defined(CONFIG_TCG)
-#include "hw/intc/armv7m_nvic.h"
-#endif
 
 
 /* Share AArch32 -cpu max features with AArch64. */
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
 /* CPU models. These are not needed for the AArch64 linux-user build. */
 #if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64)
 
-#if !defined(CONFIG_USER_ONLY)
-static bool arm_v7m_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
-{
-    CPUClass *cc = CPU_GET_CLASS(cs);
+    }
+
+    return addr;
+}
+
+#endif
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(pre_smc)(CPUARMState *env, uint32_t syndrome)
     }
 }
 
-/* Return true if the linked breakpoint entry lbn passes its checks */
-static bool linked_bp_matches(ARMCPU *cpu, int lbn)
-{
-    CPUARMState *env = &cpu->env;
-    uint64_t bcr = env->cp15.dbgbcr[lbn];
-    int brps = extract32(cpu->dbgdidr, 24, 4);
-    int ctx_cmps = extract32(cpu->dbgdidr, 20, 4);
-    int bt;
-    uint32_t contextidr;
-
-    /*
-     * Links to unimplemented or non-context aware breakpoints are
-     * CONSTRAINED UNPREDICTABLE: either behave as if disabled, or
-     * as if linked to an UNKNOWN context-aware breakpoint (in which
-     * case DBGWCR<n>_EL1.LBN must indicate that breakpoint).
-     * We choose the former.
-     */
-    if (lbn > brps || lbn < (brps - ctx_cmps)) {
-        return false;
-    }
-
-    bcr = env->cp15.dbgbcr[lbn];
-
-    if (extract64(bcr, 0, 1) == 0) {
-        /* Linked breakpoint disabled : generate no events */
-        return false;
-    }
-
-    bt = extract64(bcr, 20, 4);
-
-    /*
-     * We match the whole register even if this is AArch32 using the
-     * short descriptor format (in which case it holds both PROCID and ASID),
-     * since we don't implement the optional v7 context ID masking.
-     */
-    contextidr = extract64(env->cp15.contextidr_el[1], 0, 32);
-
-    switch (bt) {
-    case 3: /* linked context ID match */
-        if (arm_current_el(env) > 1) {
-            /* Context matches never fire in EL2 or (AArch64) EL3 */
-            return false;
-        }
-        return (contextidr == extract64(env->cp15.dbgbvr[lbn], 0, 32));
-    case 5: /* linked address mismatch (reserved in AArch64) */
-    case 9: /* linked VMID match (reserved if no EL2) */
-    case 11: /* linked context ID and VMID match (reserved if no EL2) */
-    default:
-        /*
-         * Links to Unlinked context breakpoints must generate no
-         * events; we choose to do the same for reserved values too.
-         */
-        return false;
-    }
-
-    return false;
-}
-
-static bool bp_wp_matches(ARMCPU *cpu, int n, bool is_wp)
-{
-    CPUARMState *env = &cpu->env;
-    uint64_t cr;
-    int pac, hmc, ssc, wt, lbn;
-    /*
-     * Note that for watchpoints the check is against the CPU security
-     * state, not the S/NS attribute on the offending data access.
-     */
-    bool is_secure = arm_is_secure(env);
-    int access_el = arm_current_el(env);
-
-    if (is_wp) {
-        CPUWatchpoint *wp = env->cpu_watchpoint[n];
-
-        if (!wp || !(wp->flags & BP_WATCHPOINT_HIT)) {
-            return false;
-        }
-        cr = env->cp15.dbgwcr[n];
-        if (wp->hitattrs.user) {
-            /*
-             * The LDRT/STRT/LDT/STT "unprivileged access" instructions should
-             * match watchpoints as if they were accesses done at EL0, even if
-             * the CPU is at EL1 or higher.
-             */
-            access_el = 0;
-        }
-    } else {
-        uint64_t pc = is_a64(env) ? env->pc : env->regs[15];
-
-        if (!env->cpu_breakpoint[n] || env->cpu_breakpoint[n]->pc != pc) {
-            return false;
-        }
-        cr = env->cp15.dbgbcr[n];
-    }
-    /*
-     * The WATCHPOINT_HIT flag guarantees us that the watchpoint is
-     * enabled and that the address and access type match; for breakpoints
-     * we know the address matched; check the remaining fields, including
-     * linked breakpoints. We rely on WCR and BCR having the same layout
-     * for the LBN, SSC, HMC, PAC/PMC and is-linked fields.
-     * Note that some combinations of {PAC, HMC, SSC} are reserved and
-     * must act either like some valid combination or as if the watchpoint
-     * were disabled. We choose the former, and use this together with
-     * the fact that EL3 must always be Secure and EL2 must always be
-     * Non-Secure to simplify the code slightly compared to the full
-     * table in the ARM ARM.
-     */
-    pac = extract64(cr, 1, 2);
-    hmc = extract64(cr, 13, 1);
-    ssc = extract64(cr, 14, 2);
-
-    switch (ssc) {
-    case 0:
-        break;
-    case 1:
-    case 3:
-        if (is_secure) {
-            return false;
-        }
-        break;
-    case 2:
-        if (!is_secure) {
-            return false;
-        }
-        break;
-    }
-
-    switch (access_el) {
-    case 3:
-    case 2:
-        if (!hmc) {
-            return false;
-        }
-        break;
-    case 1:
-        if (extract32(pac, 0, 1) == 0) {
-            return false;
-        }
-        break;
-    case 0:
-        if (extract32(pac, 1, 1) == 0) {
-            return false;
-        }
-        break;
-    default:
-        g_assert_not_reached();
-    }
-
-    wt = extract64(cr, 20, 1);
-    lbn = extract64(cr, 16, 4);
-
-    if (wt && !linked_bp_matches(cpu, lbn)) {
-        return false;
-    }
-
-    return true;
-}
-
-static bool check_watchpoints(ARMCPU *cpu)
-{
-    CPUARMState *env = &cpu->env;
-    int n;
-
-    /*
-     * If watchpoints are disabled globally or we can't take debug
-     * exceptions here then watchpoint firings are ignored.
-     */
-    if (extract32(env->cp15.mdscr_el1, 15, 1) == 0
-        || !arm_generate_debug_exceptions(env)) {
-        return false;
-    }
-
-    for (n = 0; n < ARRAY_SIZE(env->cpu_watchpoint); n++) {
-        if (bp_wp_matches(cpu, n, true)) {
-            return true;
-        }
-    }
-    return false;
-}
-
-static bool check_breakpoints(ARMCPU *cpu)
-{
-    CPUARMState *env = &cpu->env;
-    int n;
-
-    /*
-     * If breakpoints are disabled globally or we can't take debug
-     * exceptions here then breakpoint firings are ignored.
-     */
-    if (extract32(env->cp15.mdscr_el1, 15, 1) == 0
-        || !arm_generate_debug_exceptions(env)) {
-        return false;
-    }
-
-    for (n = 0; n < ARRAY_SIZE(env->cpu_breakpoint); n++) {
-        if (bp_wp_matches(cpu, n, false)) {
-            return true;
-        }
-    }
-    return false;
-}
-
-void HELPER(check_breakpoints)(CPUARMState *env)
-{
-    ARMCPU *cpu = env_archcpu(env);
-
-    if (check_breakpoints(cpu)) {
-        HELPER(exception_internal(env, EXCP_DEBUG));
-    }
-}
-
-bool arm_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp)
-{
-    /*
-     * Called by core code when a CPU watchpoint fires; need to check if this
-     * is also an architectural watchpoint match.
-     */
-    ARMCPU *cpu = ARM_CPU(cs);
-
-    return check_watchpoints(cpu);
-}
-
-vaddr arm_adjust_watchpoint_address(CPUState *cs, vaddr addr, int len)
-{
-    ARMCPU *cpu = ARM_CPU(cs);
- ARMCPU *cpu = ARM_CPU(cs);
606
- CPUARMState *env = &cpu->env;
337
- CPUARMState *env = &cpu->env;
338
- bool ret = false;
607
-
339
-
608
- /*
340
- /*
609
- * In BE32 system mode, target memory is stored byteswapped (on a
341
- * ARMv7-M interrupt masking works differently than -A or -R.
610
- * little-endian host system), and by the time we reach here (via an
342
- * There is no FIQ/IRQ distinction. Instead of I and F bits
611
- * opcode helper) the addresses of subword accesses have been adjusted
343
- * masking FIQ and IRQ interrupts, an exception is taken only
612
- * to account for that, which means that watchpoints will not match.
344
- * if it is higher priority than the current execution priority
613
- * Undo the adjustment here.
345
- * (which depends on state like BASEPRI, FAULTMASK and the
346
- * currently active exception).
614
- */
347
- */
615
- if (arm_sctlr_b(env)) {
348
- if (interrupt_request & CPU_INTERRUPT_HARD
616
- if (len == 1) {
349
- && (armv7m_nvic_can_take_pending_exception(env->nvic))) {
617
- addr ^= 3;
350
- cs->exception_index = EXCP_IRQ;
618
- } else if (len == 2) {
351
- cc->tcg_ops->do_interrupt(cs);
619
- addr ^= 2;
352
- ret = true;
620
- }
621
- }
353
- }
622
-
354
- return ret;
623
- return addr;
355
-}
624
-}
356
-#endif /* !CONFIG_USER_ONLY */
625
-
357
-
626
-void arm_debug_excp_handler(CPUState *cs)
358
static void arm926_initfn(Object *obj)
627
-{
359
{
360
ARMCPU *cpu = ARM_CPU(obj);
361
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
362
define_arm_cp_regs(cpu, cortexa15_cp_reginfo);
363
}
364
365
-static void cortex_m0_initfn(Object *obj)
366
-{
367
- ARMCPU *cpu = ARM_CPU(obj);
368
- set_feature(&cpu->env, ARM_FEATURE_V6);
369
- set_feature(&cpu->env, ARM_FEATURE_M);
370
-
371
- cpu->midr = 0x410cc200;
372
-
628
- /*
373
- /*
629
- * Called by core code when a watchpoint or breakpoint fires;
374
- * These ID register values are not guest visible, because
630
- * need to check which one and raise the appropriate exception.
375
- * we do not implement the Main Extension. They must be set
376
- * to values corresponding to the Cortex-M0's implemented
377
- * features, because QEMU generally controls its emulation
378
- * by looking at ID register fields. We use the same values as
379
- * for the M3.
631
- */
380
- */
632
- ARMCPU *cpu = ARM_CPU(cs);
381
- cpu->isar.id_pfr0 = 0x00000030;
633
- CPUARMState *env = &cpu->env;
382
- cpu->isar.id_pfr1 = 0x00000200;
634
- CPUWatchpoint *wp_hit = cs->watchpoint_hit;
383
- cpu->isar.id_dfr0 = 0x00100000;
635
-
384
- cpu->id_afr0 = 0x00000000;
636
- if (wp_hit) {
385
- cpu->isar.id_mmfr0 = 0x00000030;
637
- if (wp_hit->flags & BP_CPU) {
386
- cpu->isar.id_mmfr1 = 0x00000000;
638
- bool wnr = (wp_hit->flags & BP_WATCHPOINT_HIT_WRITE) != 0;
387
- cpu->isar.id_mmfr2 = 0x00000000;
639
- bool same_el = arm_debug_target_el(env) == arm_current_el(env);
388
- cpu->isar.id_mmfr3 = 0x00000000;
640
-
389
- cpu->isar.id_isar0 = 0x01141110;
641
- cs->watchpoint_hit = NULL;
390
- cpu->isar.id_isar1 = 0x02111000;
642
-
391
- cpu->isar.id_isar2 = 0x21112231;
643
- env->exception.fsr = arm_debug_exception_fsr(env);
392
- cpu->isar.id_isar3 = 0x01111110;
644
- env->exception.vaddress = wp_hit->hitaddr;
393
- cpu->isar.id_isar4 = 0x01310102;
645
- raise_exception(env, EXCP_DATA_ABORT,
394
- cpu->isar.id_isar5 = 0x00000000;
646
- syn_watchpoint(same_el, 0, wnr),
395
- cpu->isar.id_isar6 = 0x00000000;
647
- arm_debug_target_el(env));
396
-}
648
- }
397
-
649
- } else {
398
-static void cortex_m3_initfn(Object *obj)
650
- uint64_t pc = is_a64(env) ? env->pc : env->regs[15];
399
-{
651
- bool same_el = (arm_debug_target_el(env) == arm_current_el(env));
400
- ARMCPU *cpu = ARM_CPU(obj);
652
-
401
- set_feature(&cpu->env, ARM_FEATURE_V7);
653
- /*
402
- set_feature(&cpu->env, ARM_FEATURE_M);
654
- * (1) GDB breakpoints should be handled first.
403
- set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
655
- * (2) Do not raise a CPU exception if no CPU breakpoint has fired,
404
- cpu->midr = 0x410fc231;
656
- * since singlestep is also done by generating a debug internal
405
- cpu->pmsav7_dregion = 8;
657
- * exception.
406
- cpu->isar.id_pfr0 = 0x00000030;
658
- */
407
- cpu->isar.id_pfr1 = 0x00000200;
659
- if (cpu_breakpoint_test(cs, pc, BP_GDB)
408
- cpu->isar.id_dfr0 = 0x00100000;
660
- || !cpu_breakpoint_test(cs, pc, BP_CPU)) {
409
- cpu->id_afr0 = 0x00000000;
661
- return;
410
- cpu->isar.id_mmfr0 = 0x00000030;
662
- }
411
- cpu->isar.id_mmfr1 = 0x00000000;
663
-
412
- cpu->isar.id_mmfr2 = 0x00000000;
664
- env->exception.fsr = arm_debug_exception_fsr(env);
413
- cpu->isar.id_mmfr3 = 0x00000000;
665
- /*
414
- cpu->isar.id_isar0 = 0x01141110;
666
- * FAR is UNKNOWN: clear vaddress to avoid potentially exposing
415
- cpu->isar.id_isar1 = 0x02111000;
667
- * values to the guest that it shouldn't be able to see at its
416
- cpu->isar.id_isar2 = 0x21112231;
668
- * exception/security level.
417
- cpu->isar.id_isar3 = 0x01111110;
669
- */
418
- cpu->isar.id_isar4 = 0x01310102;
670
- env->exception.vaddress = 0;
419
- cpu->isar.id_isar5 = 0x00000000;
671
- raise_exception(env, EXCP_PREFETCH_ABORT,
420
- cpu->isar.id_isar6 = 0x00000000;
672
- syn_breakpoint(same_el),
421
-}
673
- arm_debug_target_el(env));
422
-
674
- }
423
-static void cortex_m4_initfn(Object *obj)
675
-}
424
-{
676
-
425
- ARMCPU *cpu = ARM_CPU(obj);
677
/* ??? Flag setting arithmetic is awkward because we need to do comparisons.
426
-
678
The only way to do that in TCG is a conditional branch, which clobbers
427
- set_feature(&cpu->env, ARM_FEATURE_V7);
679
all our temporaries. For now implement these as helper functions. */
-    set_feature(&cpu->env, ARM_FEATURE_M);
-    set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
-    cpu->midr = 0x410fc240; /* r0p0 */
-    cpu->pmsav7_dregion = 8;
-    cpu->isar.mvfr0 = 0x10110021;
-    cpu->isar.mvfr1 = 0x11000011;
-    cpu->isar.mvfr2 = 0x00000000;
-    cpu->isar.id_pfr0 = 0x00000030;
-    cpu->isar.id_pfr1 = 0x00000200;
-    cpu->isar.id_dfr0 = 0x00100000;
-    cpu->id_afr0 = 0x00000000;
-    cpu->isar.id_mmfr0 = 0x00000030;
-    cpu->isar.id_mmfr1 = 0x00000000;
-    cpu->isar.id_mmfr2 = 0x00000000;
-    cpu->isar.id_mmfr3 = 0x00000000;
-    cpu->isar.id_isar0 = 0x01141110;
-    cpu->isar.id_isar1 = 0x02111000;
-    cpu->isar.id_isar2 = 0x21112231;
-    cpu->isar.id_isar3 = 0x01111110;
-    cpu->isar.id_isar4 = 0x01310102;
-    cpu->isar.id_isar5 = 0x00000000;
-    cpu->isar.id_isar6 = 0x00000000;
-}
-
-static void cortex_m7_initfn(Object *obj)
-{
-    ARMCPU *cpu = ARM_CPU(obj);
-
-    set_feature(&cpu->env, ARM_FEATURE_V7);
-    set_feature(&cpu->env, ARM_FEATURE_M);
-    set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
-    cpu->midr = 0x411fc272; /* r1p2 */
-    cpu->pmsav7_dregion = 8;
-    cpu->isar.mvfr0 = 0x10110221;
-    cpu->isar.mvfr1 = 0x12000011;
-    cpu->isar.mvfr2 = 0x00000040;
-    cpu->isar.id_pfr0 = 0x00000030;
-    cpu->isar.id_pfr1 = 0x00000200;
-    cpu->isar.id_dfr0 = 0x00100000;
-    cpu->id_afr0 = 0x00000000;
-    cpu->isar.id_mmfr0 = 0x00100030;
-    cpu->isar.id_mmfr1 = 0x00000000;
-    cpu->isar.id_mmfr2 = 0x01000000;
-    cpu->isar.id_mmfr3 = 0x00000000;
-    cpu->isar.id_isar0 = 0x01101110;
-    cpu->isar.id_isar1 = 0x02112000;
-    cpu->isar.id_isar2 = 0x20232231;
-    cpu->isar.id_isar3 = 0x01111131;
-    cpu->isar.id_isar4 = 0x01310132;
-    cpu->isar.id_isar5 = 0x00000000;
-    cpu->isar.id_isar6 = 0x00000000;
-}
-
-static void cortex_m33_initfn(Object *obj)
-{
-    ARMCPU *cpu = ARM_CPU(obj);
-
-    set_feature(&cpu->env, ARM_FEATURE_V8);
-    set_feature(&cpu->env, ARM_FEATURE_M);
-    set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
-    set_feature(&cpu->env, ARM_FEATURE_M_SECURITY);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
-    cpu->midr = 0x410fd213; /* r0p3 */
-    cpu->pmsav7_dregion = 16;
-    cpu->sau_sregion = 8;
-    cpu->isar.mvfr0 = 0x10110021;
-    cpu->isar.mvfr1 = 0x11000011;
-    cpu->isar.mvfr2 = 0x00000040;
-    cpu->isar.id_pfr0 = 0x00000030;
-    cpu->isar.id_pfr1 = 0x00000210;
-    cpu->isar.id_dfr0 = 0x00200000;
-    cpu->id_afr0 = 0x00000000;
-    cpu->isar.id_mmfr0 = 0x00101F40;
-    cpu->isar.id_mmfr1 = 0x00000000;
-    cpu->isar.id_mmfr2 = 0x01000000;
-    cpu->isar.id_mmfr3 = 0x00000000;
-    cpu->isar.id_isar0 = 0x01101110;
-    cpu->isar.id_isar1 = 0x02212000;
-    cpu->isar.id_isar2 = 0x20232232;
-    cpu->isar.id_isar3 = 0x01111131;
-    cpu->isar.id_isar4 = 0x01310132;
-    cpu->isar.id_isar5 = 0x00000000;
-    cpu->isar.id_isar6 = 0x00000000;
-    cpu->clidr = 0x00000000;
-    cpu->ctr = 0x8000c000;
-}
-
-static void cortex_m55_initfn(Object *obj)
-{
-    ARMCPU *cpu = ARM_CPU(obj);
-
-    set_feature(&cpu->env, ARM_FEATURE_V8);
-    set_feature(&cpu->env, ARM_FEATURE_V8_1M);
-    set_feature(&cpu->env, ARM_FEATURE_M);
-    set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
-    set_feature(&cpu->env, ARM_FEATURE_M_SECURITY);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
-    cpu->midr = 0x410fd221; /* r0p1 */
-    cpu->revidr = 0;
-    cpu->pmsav7_dregion = 16;
-    cpu->sau_sregion = 8;
-    /* These are the MVFR* values for the FPU + full MVE configuration */
-    cpu->isar.mvfr0 = 0x10110221;
-    cpu->isar.mvfr1 = 0x12100211;
-    cpu->isar.mvfr2 = 0x00000040;
-    cpu->isar.id_pfr0 = 0x20000030;
-    cpu->isar.id_pfr1 = 0x00000230;
-    cpu->isar.id_dfr0 = 0x10200000;
-    cpu->id_afr0 = 0x00000000;
-    cpu->isar.id_mmfr0 = 0x00111040;
-    cpu->isar.id_mmfr1 = 0x00000000;
-    cpu->isar.id_mmfr2 = 0x01000000;
-    cpu->isar.id_mmfr3 = 0x00000011;
-    cpu->isar.id_isar0 = 0x01103110;
-    cpu->isar.id_isar1 = 0x02212000;
-    cpu->isar.id_isar2 = 0x20232232;
-    cpu->isar.id_isar3 = 0x01111131;
-    cpu->isar.id_isar4 = 0x01310132;
-    cpu->isar.id_isar5 = 0x00000000;
-    cpu->isar.id_isar6 = 0x00000000;
-    cpu->clidr = 0x00000000; /* caches not implemented */
-    cpu->ctr = 0x8303c003;
-}
-
 static const ARMCPRegInfo cortexr5_cp_reginfo[] = {
     /* Dummy the TCM region regs for the moment */
     { .name = "ATCM", .cp = 15, .opc1 = 0, .crn = 9, .crm = 1, .opc2 = 0,
@@ -XXX,XX +XXX,XX @@ static void pxa270c5_initfn(Object *obj)
     cpu->reset_sctlr = 0x00000078;
 }
 
-static const TCGCPUOps arm_v7m_tcg_ops = {
-    .initialize = arm_translate_init,
-    .synchronize_from_tb = arm_cpu_synchronize_from_tb,
-    .debug_excp_handler = arm_debug_excp_handler,
-    .restore_state_to_opc = arm_restore_state_to_opc,
-
-#ifdef CONFIG_USER_ONLY
-    .record_sigsegv = arm_cpu_record_sigsegv,
-    .record_sigbus = arm_cpu_record_sigbus,
-#else
-    .tlb_fill = arm_cpu_tlb_fill,
-    .cpu_exec_interrupt = arm_v7m_cpu_exec_interrupt,
-    .do_interrupt = arm_v7m_cpu_do_interrupt,
-    .do_transaction_failed = arm_cpu_do_transaction_failed,
-    .do_unaligned_access = arm_cpu_do_unaligned_access,
-    .adjust_watchpoint_address = arm_adjust_watchpoint_address,
-    .debug_check_watchpoint = arm_debug_check_watchpoint,
-    .debug_check_breakpoint = arm_debug_check_breakpoint,
-#endif /* !CONFIG_USER_ONLY */
-};
-
-static void arm_v7m_class_init(ObjectClass *oc, void *data)
-{
-    ARMCPUClass *acc = ARM_CPU_CLASS(oc);
-    CPUClass *cc = CPU_CLASS(oc);
-
-    acc->info = data;
-    cc->tcg_ops = &arm_v7m_tcg_ops;
-    cc->gdb_core_xml_file = "arm-m-profile.xml";
-}
-
 #ifndef TARGET_AARCH64
 /*
  * -cpu max: a CPU with as many features enabled as our emulation supports.
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo arm_tcg_cpus[] = {
     { .name = "cortex-a8", .initfn = cortex_a8_initfn },
     { .name = "cortex-a9", .initfn = cortex_a9_initfn },
     { .name = "cortex-a15", .initfn = cortex_a15_initfn },
-    { .name = "cortex-m0", .initfn = cortex_m0_initfn,
-      .class_init = arm_v7m_class_init },
-    { .name = "cortex-m3", .initfn = cortex_m3_initfn,
-      .class_init = arm_v7m_class_init },
-    { .name = "cortex-m4", .initfn = cortex_m4_initfn,
-      .class_init = arm_v7m_class_init },
-    { .name = "cortex-m7", .initfn = cortex_m7_initfn,
-      .class_init = arm_v7m_class_init },
-    { .name = "cortex-m33", .initfn = cortex_m33_initfn,
-      .class_init = arm_v7m_class_init },
-    { .name = "cortex-m55", .initfn = cortex_m55_initfn,
-      .class_init = arm_v7m_class_init },
     { .name = "cortex-r5", .initfn = cortex_r5_initfn },
612
{ .name = "cortex-r5f", .initfn = cortex_r5f_initfn },
613
{ .name = "cortex-r52", .initfn = cortex_r52_initfn },
614
diff --git a/target/arm/meson.build b/target/arm/meson.build
615
index XXXXXXX..XXXXXXX 100644
616
--- a/target/arm/meson.build
617
+++ b/target/arm/meson.build
618
@@ -XXX,XX +XXX,XX @@ arm_system_ss.add(files(
619
'ptw.c',
620
))
621
622
+arm_user_ss = ss.source_set()
623
+
624
subdir('hvf')
625
626
if 'CONFIG_TCG' in config_all_accel
627
@@ -XXX,XX +XXX,XX @@ endif
628
629
target_arch += {'arm': arm_ss}
630
target_system_arch += {'arm': arm_system_ss}
631
+target_user_arch += {'arm': arm_user_ss}
632
diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build
633
index XXXXXXX..XXXXXXX 100644
634
--- a/target/arm/tcg/meson.build
635
+++ b/target/arm/tcg/meson.build
636
@@ -XXX,XX +XXX,XX @@ arm_ss.add(when: 'TARGET_AARCH64', if_true: files(
637
arm_system_ss.add(files(
638
'psci.c',
639
))
640
+
641
+arm_system_ss.add(when: 'CONFIG_ARM_V7M', if_true: files('cpu-v7m.c'))
642
+arm_user_ss.add(when: 'TARGET_AARCH64', if_false: files('cpu-v7m.c'))
--
2.34.1