Series comparison: old = pull-target-arm-20171012, new = pull-target-arm-20190715.

[Old series cover letter]

target-arm queue for rc1 -- these are all bug fixes.
 * mostly my latest v8M stuff, plus a couple of minor patches

thanks
-- PMM

The following changes since commit a0b261db8c030813e30a39eae47359ac2a37f7e2:

  Merge remote-tracking branch 'remotes/ehabkost/tags/python-next-pull-request' into staging (2017-10-12 10:02:09 +0100)

are available in the git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20171012

for you to fetch changes up to cf5f7937b05c84d5565134f058c00cd48304a117:

  nvic: Fix miscalculation of offsets into ITNS array (2017-10-12 16:33:16 +0100)

----------------------------------------------------------------
target-arm queue:
 * v8M: SG, BLXNS, secure-return
 * v8M: fixes for coverity issues in previous patches
 * arm: fix armv7m_init() declaration to match definition
 * watchdog/aspeed: fix variable type to store reload value

----------------------------------------------------------------
Cédric Le Goater (1):
      watchdog/aspeed: fix variable type to store reload value

Igor Mammedov (1):
      arm: fix armv7m_init() declaration to match definition

Peter Maydell (11):
      target/arm: Add M profile secure MMU index values to get_a32_user_mem_index()
      target/arm: Implement SG instruction
      target/arm: Implement BLXNS
      target/arm: Implement secure function return
      target-arm: Don't check for "Thumb2 or M profile" for not-Thumb1
      target/arm: Pull Thumb insn word loads up to top level
      target-arm: Simplify insn_crosses_page()
      target/arm: Support some Thumb insns being always unconditional
      target/arm: Implement SG instruction corner cases
      nvic: Add missing 'break'
      nvic: Fix miscalculation of offsets into ITNS array

 include/hw/arm/arm.h     |   2 +-
 target/arm/helper.h      |   1 +
 target/arm/internals.h   |   8 ++
 hw/intc/armv7m_nvic.c    |   5 +-
 hw/watchdog/wdt_aspeed.c |   4 +-
 target/arm/helper.c      | 306 ++++++++++++++++++++++++++++++++++++++++++++--
 target/arm/translate.c   | 310 ++++++++++++++++++++++++++++++++---------------
 7 files changed, 521 insertions(+), 115 deletions(-)

[New series cover letter]

target-arm queue:

thanks
-- PMM

The following changes since commit b9404bf592e7ba74180e1a54ed7a266ec6ee67f2:

  Merge remote-tracking branch 'remotes/dgilbert/tags/pull-hmp-20190715' into staging (2019-07-15 12:22:07 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20190715

for you to fetch changes up to 51c9122e92b776a3f16af0b9282f1dc5012e2a19:

  target/arm: NS BusFault on vector table fetch escalates to NS HardFault (2019-07-15 14:17:04 +0100)

----------------------------------------------------------------
target-arm queue:
 * report ARMv8-A FP support for AArch32 -cpu max
 * hw/ssi/xilinx_spips: Avoid AXI writes to the LQSPI linear memory
 * hw/ssi/xilinx_spips: Avoid out-of-bound access to lqspi_buf[]
 * hw/ssi/mss-spi: Avoid crash when reading empty RX FIFO
 * hw/display/xlnx_dp: Avoid crash when reading empty RX FIFO
 * hw/arm/virt: Fix non-secure flash mode
 * pl031: Correctly migrate state when using -rtc clock=host
 * fix regression that meant arm926 and arm1026 lost VFP
   double-precision support
 * v8M: NS BusFault on vector table fetch escalates to NS HardFault

----------------------------------------------------------------
Alex Bennée (1):
      target/arm: report ARMv8-A FP support for AArch32 -cpu max

David Engraf (1):
      hw/arm/virt: Fix non-secure flash mode

Peter Maydell (3):
      pl031: Correctly migrate state when using -rtc clock=host
      target/arm: Set VFP-related MVFR0 fields for arm926 and arm1026
      target/arm: NS BusFault on vector table fetch escalates to NS HardFault

Philippe Mathieu-Daudé (5):
      hw/ssi/xilinx_spips: Convert lqspi_read() to read_with_attrs
      hw/ssi/xilinx_spips: Avoid AXI writes to the LQSPI linear memory
      hw/ssi/xilinx_spips: Avoid out-of-bound access to lqspi_buf[]
      hw/ssi/mss-spi: Avoid crash when reading empty RX FIFO
      hw/display/xlnx_dp: Avoid crash when reading empty RX FIFO

 include/hw/timer/pl031.h |  2 ++
 hw/arm/virt.c            |  2 +-
 hw/core/machine.c        |  1 +
 hw/display/xlnx_dp.c     | 15 +++++---
 hw/ssi/mss-spi.c         |  8 ++++-
 hw/ssi/xilinx_spips.c    | 43 +++++++++++++++-------
 hw/timer/pl031.c         | 92 +++++++++++++++++++++++++++++++++++++++++++++---
 target/arm/cpu.c         | 16 +++++++++
 target/arm/m_helper.c    | 21 ++++++++---
 9 files changed, 174 insertions(+), 26 deletions(-)
[Old series patch: nvic: Fix miscalculation of offsets into ITNS array]

This calculation of the first exception vector in
the ITNS<n> register being accessed:

  int startvec = 32 * (offset - 0x380) + NVIC_FIRST_IRQ;

is incorrect, because offset is in bytes, so we only want
to multiply by 8.

Spotted by Coverity (CID 1381484, CID 1381488), though it is
not correct that it actually overflows the buffer, because
we have a 'startvec + i < s->num_irq' guard.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507650856-11718-1-git-send-email-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
         return ((s->num_irq - NVIC_FIRST_IRQ) / 32) - 1;
     case 0x380 ... 0x3bf: /* NVIC_ITNS<n> */
     {
-        int startvec = 32 * (offset - 0x380) + NVIC_FIRST_IRQ;
+        int startvec = 8 * (offset - 0x380) + NVIC_FIRST_IRQ;
         int i;

         if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
     switch (offset) {
     case 0x380 ... 0x3bf: /* NVIC_ITNS<n> */
     {
-        int startvec = 32 * (offset - 0x380) + NVIC_FIRST_IRQ;
+        int startvec = 8 * (offset - 0x380) + NVIC_FIRST_IRQ;
         int i;

         if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
--
2.7.4

[New series patch: target/arm: report ARMv8-A FP support for AArch32 -cpu max]

From: Alex Bennée <alex.bennee@linaro.org>

When we converted to using feature bits in 602f6e42cfbf we missed
the fact that (dp && arm_dc_feature(s, ARM_FEATURE_V8)) was supported
for -cpu max configurations. This caused a regression in the GCC test
suite. Fix this by setting the appropriate bits in mvfr1.FPHP to
report ARMv8-A with FP support (but not ARMv8.2-FP16).

Fixes: https://bugs.launchpad.net/qemu/+bug/1836078
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190711103737.10017-1-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
             t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
             cpu->isar.id_isar6 = t;

+            t = cpu->isar.mvfr1;
+            t = FIELD_DP32(t, MVFR1, FPHP, 2);     /* v8.0 FP support */
+            cpu->isar.mvfr1 = t;
+
             t = cpu->isar.mvfr2;
             t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
             t = FIELD_DP32(t, MVFR2, FPMISC, 4);   /* FP MaxNum */
--
2.20.1
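(Editorial note, not part of either patch above: a worked example of the
corrected ITNS arithmetic. Each ITNS<n> register is 32 bits, i.e. 4 bytes,
and controls 32 interrupt lines, so one byte of register offset corresponds
to 32 / 4 = 8 interrupts. With NVIC_FIRST_IRQ = 16:

    /* ITNS1 lives at byte offset 0x384 and covers external IRQs
     * 32..63, i.e. exception vectors 48..79:
     */
    int startvec = 8 * (0x384 - 0x380) + 16;    /* = 48, correct */
    int old_calc = 32 * (0x384 - 0x380) + 16;   /* = 144, far too big */

As the commit message notes, the 'startvec + i < s->num_irq' guard is what
kept the old factor of 32 from actually overrunning the array.)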
[Old series patch: target/arm: Pull Thumb insn word loads up to top level]

Refactor the Thumb decode to do the loads of the instruction words at
the top level rather than only loading the second half of a 32-bit
Thumb insn in the middle of the decode.

This is simple apart from the awkward case of Thumb1, where the
BL/BLX prefix and suffix instructions live in what in Thumb2 is the
32-bit insn space. To handle these we decode enough to identify
whether we're looking at a prefix/suffix that we handle as a 16 bit
insn, or a prefix that we're going to merge with the following suffix
to consider as a 32 bit insn. The translation of the 16 bit cases
then moves from disas_thumb2_insn() to disas_thumb_insn().

The refactoring has the benefit that we don't need to pass the
CPUARMState* down into the decoder code any more, but the major
reason for doing this is that some Thumb instructions must be always
unconditional regardless of the IT state bits, so we need to know the
whole insn before we emit the "skip this insn if the IT bits and cond
state tell us to" code. (The always unconditional insns are BKPT,
HLT and SG; the last of these is 32 bits.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-7-git-send-email-peter.maydell@linaro.org
---
 target/arm/translate.c | 178 ++++++++++++++++++++++++++++++-------------------
 1 file changed, 108 insertions(+), 70 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
     }
 }

+static bool thumb_insn_is_16bit(DisasContext *s, uint32_t insn)
+{
+    /* Return true if this is a 16 bit instruction. We must be precise
+     * about this (matching the decode). We assume that s->pc still
+     * points to the first 16 bits of the insn.
+     */
+    if ((insn >> 11) < 0x1d) {
+        /* Definitely a 16-bit instruction */
+        return true;
+    }
+
+    /* Top five bits 0b11101 / 0b11110 / 0b11111 : this is the
+     * first half of a 32-bit Thumb insn. Thumb-1 cores might
+     * end up actually treating this as two 16-bit insns, though,
+     * if it's half of a bl/blx pair that might span a page boundary.
+     */
+    if (arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
+        /* Thumb2 cores (including all M profile ones) always treat
+         * 32-bit insns as 32-bit.
+         */
+        return false;
+    }
+
+    if ((insn >> 11) == 0x1e && (s->pc < s->next_page_start - 3)) {
+        /* 0b1111_0xxx_xxxx_xxxx : BL/BLX prefix, and the suffix
+         * is not on the next page; we merge this into a 32-bit
+         * insn.
+         */
+        return false;
+    }
+    /* 0b1110_1xxx_xxxx_xxxx : BLX suffix (or UNDEF);
+     * 0b1111_1xxx_xxxx_xxxx : BL suffix;
+     * 0b1111_0xxx_xxxx_xxxx : BL/BLX prefix on the end of a page
+     * -- handle as single 16 bit insn
+     */
+    return true;
+}
+
 /* Return true if this is a Thumb-2 logical op. */
 static int
 thumb2_logic_op(int op)
@@ -XXX,XX +XXX,XX @@ gen_thumb2_data_op(DisasContext *s, int op, int conds, uint32_t shifter_out,

 /* Translate a 32-bit thumb instruction. Returns nonzero if the instruction
    is not legal. */
-static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw1)
+static int disas_thumb2_insn(DisasContext *s, uint32_t insn)
 {
-    uint32_t insn, imm, shift, offset;
+    uint32_t imm, shift, offset;
     uint32_t rd, rn, rm, rs;
     TCGv_i32 tmp;
     TCGv_i32 tmp2;
@@ -XXX,XX +XXX,XX @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw
     int conds;
     int logic_cc;

-    if (!arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
-        /* Thumb-1 cores may need to treat bl and blx as a pair of
-           16-bit instructions to get correct prefetch abort behavior. */
-        insn = insn_hw1;
-        if ((insn & (1 << 12)) == 0) {
-            ARCH(5);
-            /* Second half of blx. */
-            offset = ((insn & 0x7ff) << 1);
-            tmp = load_reg(s, 14);
-            tcg_gen_addi_i32(tmp, tmp, offset);
-            tcg_gen_andi_i32(tmp, tmp, 0xfffffffc);
-
-            tmp2 = tcg_temp_new_i32();
-            tcg_gen_movi_i32(tmp2, s->pc | 1);
-            store_reg(s, 14, tmp2);
-            gen_bx(s, tmp);
-            return 0;
-        }
-        if (insn & (1 << 11)) {
-            /* Second half of bl. */
-            offset = ((insn & 0x7ff) << 1) | 1;
-            tmp = load_reg(s, 14);
-            tcg_gen_addi_i32(tmp, tmp, offset);
-
-            tmp2 = tcg_temp_new_i32();
-            tcg_gen_movi_i32(tmp2, s->pc | 1);
-            store_reg(s, 14, tmp2);
-            gen_bx(s, tmp);
-            return 0;
-        }
-        if ((s->pc & ~TARGET_PAGE_MASK) == 0) {
-            /* Instruction spans a page boundary. Implement it as two
-               16-bit instructions in case the second half causes an
-               prefetch abort. */
-            offset = ((int32_t)insn << 21) >> 9;
-            tcg_gen_movi_i32(cpu_R[14], s->pc + 2 + offset);
-            return 0;
-        }
-        /* Fall through to 32-bit decode. */
-    }
-
-    insn = arm_lduw_code(env, s->pc, s->sctlr_b);
-    s->pc += 2;
-    insn |= (uint32_t)insn_hw1 << 16;
-
+    /* The only 32 bit insn that's allowed for Thumb1 is the combined
+     * BL/BLX prefix and suffix.
+     */
     if ((insn & 0xf800e800) != 0xf000e800) {
         ARCH(6T2);
     }
@@ -XXX,XX +XXX,XX @@ illegal_op:
     return 1;
 }

-static void disas_thumb_insn(CPUARMState *env, DisasContext *s)
+static void disas_thumb_insn(DisasContext *s, uint32_t insn)
 {
-    uint32_t val, insn, op, rm, rn, rd, shift, cond;
+    uint32_t val, op, rm, rn, rd, shift, cond;
     int32_t offset;
     int i;
     TCGv_i32 tmp;
     TCGv_i32 tmp2;
     TCGv_i32 addr;

-    if (s->condexec_mask) {
-        cond = s->condexec_cond;
-        if (cond != 0x0e) {     /* Skip conditional when condition is AL. */
-            s->condlabel = gen_new_label();
-            arm_gen_test_cc(cond ^ 1, s->condlabel);
-            s->condjmp = 1;
-        }
-    }
-
-    insn = arm_lduw_code(env, s->pc, s->sctlr_b);
-    s->pc += 2;
-
     switch (insn >> 12) {
     case 0: case 1:

@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s)

     case 14:
         if (insn & (1 << 11)) {
-            if (disas_thumb2_insn(env, s, insn))
-                goto undef32;
+            /* thumb_insn_is_16bit() ensures we can't get here for
+             * a Thumb2 CPU, so this must be a thumb1 split BL/BLX:
+             * 0b1110_1xxx_xxxx_xxxx : BLX suffix (or UNDEF)
+             */
+            assert(!arm_dc_feature(s, ARM_FEATURE_THUMB2));
+            ARCH(5);
+            offset = ((insn & 0x7ff) << 1);
+            tmp = load_reg(s, 14);
+            tcg_gen_addi_i32(tmp, tmp, offset);
+            tcg_gen_andi_i32(tmp, tmp, 0xfffffffc);
+
+            tmp2 = tcg_temp_new_i32();
+            tcg_gen_movi_i32(tmp2, s->pc | 1);
+            store_reg(s, 14, tmp2);
+            gen_bx(s, tmp);
             break;
         }
         /* unconditional branch */
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s)
         break;

     case 15:
-        if (disas_thumb2_insn(env, s, insn))
-            goto undef32;
+        /* thumb_insn_is_16bit() ensures we can't get here for
+         * a Thumb2 CPU, so this must be a thumb1 split BL/BLX.
+         */
+        assert(!arm_dc_feature(s, ARM_FEATURE_THUMB2));
+
+        if (insn & (1 << 11)) {
+            /* 0b1111_1xxx_xxxx_xxxx : BL suffix */
+            offset = ((insn & 0x7ff) << 1) | 1;
+            tmp = load_reg(s, 14);
+            tcg_gen_addi_i32(tmp, tmp, offset);
+
+            tmp2 = tcg_temp_new_i32();
+            tcg_gen_movi_i32(tmp2, s->pc | 1);
+            store_reg(s, 14, tmp2);
+            gen_bx(s, tmp);
+        } else {
+            /* 0b1111_0xxx_xxxx_xxxx : BL/BLX prefix */
+            uint32_t uoffset = ((int32_t)insn << 21) >> 9;
+
+            tcg_gen_movi_i32(cpu_R[14], s->pc + 2 + uoffset);
+        }
         break;
     }
     return;
-undef32:
-    gen_exception_insn(s, 4, EXCP_UDEF, syn_uncategorized(),
-                       default_exception_el(s));
-    return;
 illegal_op:
 undef:
     gen_exception_insn(s, 2, EXCP_UDEF, syn_uncategorized(),
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
 {
     DisasContext *dc = container_of(dcbase, DisasContext, base);
     CPUARMState *env = cpu->env_ptr;
+    uint32_t insn;
+    bool is_16bit;

     if (arm_pre_translate_insn(dc)) {
         return;
     }

-    disas_thumb_insn(env, dc);
+    insn = arm_lduw_code(env, dc->pc, dc->sctlr_b);
+    is_16bit = thumb_insn_is_16bit(dc, insn);
+    dc->pc += 2;
+    if (!is_16bit) {
+        uint32_t insn2 = arm_lduw_code(env, dc->pc, dc->sctlr_b);
+
+        insn = insn << 16 | insn2;
+        dc->pc += 2;
+    }
+
+    if (dc->condexec_mask) {
+        uint32_t cond = dc->condexec_cond;
+
+        if (cond != 0x0e) {     /* Skip conditional when condition is AL. */
+            dc->condlabel = gen_new_label();
+            arm_gen_test_cc(cond ^ 1, dc->condlabel);
+            dc->condjmp = 1;
+        }
+    }
+
+    if (is_16bit) {
+        disas_thumb_insn(dc, insn);
+    } else {
+        disas_thumb2_insn(dc, insn);
+    }

     /* Advance the Thumb condexec condition. */
     if (dc->condexec_mask) {
--
2.7.4

[New series patch: hw/ssi/xilinx_spips: Convert lqspi_read() to read_with_attrs]

From: Philippe Mathieu-Daudé <philmd@redhat.com>

In the next commit we will implement the write_with_attrs()
handler. To avoid using different APIs, convert the read()
handler first.

Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Tested-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/xilinx_spips.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/xilinx_spips.c
+++ b/hw/ssi/xilinx_spips.c
@@ -XXX,XX +XXX,XX @@ static void lqspi_load_cache(void *opaque, hwaddr addr)
     }
 }

-static uint64_t
-lqspi_read(void *opaque, hwaddr addr, unsigned int size)
+static MemTxResult lqspi_read(void *opaque, hwaddr addr, uint64_t *value,
+                              unsigned size, MemTxAttrs attrs)
 {
-    XilinxQSPIPS *q = opaque;
-    uint32_t ret;
+    XilinxQSPIPS *q = XILINX_QSPIPS(opaque);

     if (addr >= q->lqspi_cached_addr &&
             addr <= q->lqspi_cached_addr + LQSPI_CACHE_SIZE - 4) {
         uint8_t *retp = &q->lqspi_buf[addr - q->lqspi_cached_addr];
-        ret = cpu_to_le32(*(uint32_t *)retp);
-        DB_PRINT_L(1, "addr: %08x, data: %08x\n", (unsigned)addr,
-                   (unsigned)ret);
-        return ret;
-    } else {
-        lqspi_load_cache(opaque, addr);
-        return lqspi_read(opaque, addr, size);
+        *value = cpu_to_le32(*(uint32_t *)retp);
+        DB_PRINT_L(1, "addr: %08" HWADDR_PRIx ", data: %08" PRIx64 "\n",
+                   addr, *value);
+        return MEMTX_OK;
     }
+
+    lqspi_load_cache(opaque, addr);
+    return lqspi_read(opaque, addr, value, size, attrs);
 }

 static const MemoryRegionOps lqspi_ops = {
-    .read = lqspi_read,
+    .read_with_attrs = lqspi_read,
     .endianness = DEVICE_NATIVE_ENDIAN,
     .valid = {
         .min_access_size = 1,
--
2.20.1
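(Editorial note: the practical difference between the two MemoryRegionOps
read callbacks that this conversion switches between is only the signature
-- a sketch of the general shape, not code from the patch:

    /* .read: returns the value; no way to signal a bus error */
    uint64_t (*read)(void *opaque, hwaddr addr, unsigned size);

    /* .read_with_attrs: value via out-parameter, returns MemTxResult
     * (MEMTX_OK or MEMTX_ERROR), so the device can fault the access
     */
    MemTxResult (*read_with_attrs)(void *opaque, hwaddr addr,
                                   uint64_t *data, unsigned size,
                                   MemTxAttrs attrs);

Only the second form lets the follow-up patch report MEMTX_ERROR for the
write path.)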
[Old series patch: target/arm: Support some Thumb insns being always unconditional]

A few Thumb instructions are always unconditional even inside an
IT block (as opposed to being UNPREDICTABLE if used inside an
IT block): BKPT, the v8M SG instruction, and the A profile
HLT (debug halt) instruction.

This means we need to suppress the jump-over-instruction-on-condfail
code generation (though the IT state still advances as usual and
subsequent insns in the IT block may be conditional).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-9-git-send-email-peter.maydell@linaro.org
---
 target/arm/translate.c | 48 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 47 insertions(+), 1 deletion(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void arm_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
        in init_disas_context by adjusting max_insns. */
 }

+static bool thumb_insn_is_unconditional(DisasContext *s, uint32_t insn)
+{
+    /* Return true if this Thumb insn is always unconditional,
+     * even inside an IT block. This is true of only a very few
+     * instructions: BKPT, HLT, and SG.
+     *
+     * A larger class of instructions are UNPREDICTABLE if used
+     * inside an IT block; we do not need to detect those here, because
+     * what we do by default (perform the cc check and update the IT
+     * bits state machine) is a permitted CONSTRAINED UNPREDICTABLE
+     * choice for those situations.
+     *
+     * insn is either a 16-bit or a 32-bit instruction; the two are
+     * distinguishable because for the 16-bit case the top 16 bits
+     * are zeroes, and that isn't a valid 32-bit encoding.
+     */
+    if ((insn & 0xffffff00) == 0xbe00) {
+        /* BKPT */
+        return true;
+    }
+
+    if ((insn & 0xffffffc0) == 0xba80 && arm_dc_feature(s, ARM_FEATURE_V8) &&
+        !arm_dc_feature(s, ARM_FEATURE_M)) {
+        /* HLT: v8A only. This is unconditional even when it is going to
+         * UNDEF; see the v8A ARM ARM DDI0487B.a H3.3.
+         * For v7 cores this was a plain old undefined encoding and so
+         * honours its cc check. (We might be using the encoding as
+         * a semihosting trap, but we don't change the cc check behaviour
+         * on that account, because a debugger connected to a real v7A
+         * core and emulating semihosting traps by catching the UNDEF
+         * exception would also only see cases where the cc check passed.
+         * No guest code should be trying to do a HLT semihosting trap
+         * in an IT block anyway.
+         */
+        return true;
+    }
+
+    if (insn == 0xe97fe97f && arm_dc_feature(s, ARM_FEATURE_V8) &&
+        arm_dc_feature(s, ARM_FEATURE_M)) {
+        /* SG: v8M only */
+        return true;
+    }
+
+    return false;
+}
+
 static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
 {
     DisasContext *dc = container_of(dcbase, DisasContext, base);
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
         dc->pc += 2;
     }

-    if (dc->condexec_mask) {
+    if (dc->condexec_mask && !thumb_insn_is_unconditional(dc, insn)) {
        uint32_t cond = dc->condexec_cond;

        if (cond != 0x0e) { /* Skip conditional when condition is AL. */
--
2.7.4

[New series patch: hw/ssi/xilinx_spips: Avoid AXI writes to the LQSPI linear memory]

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Lei Sun found while auditing the code that a CPU write would
trigger a NULL pointer dereference.

From the UG1085 datasheet [*], AXI writes in this region are ignored
and generate an AXI Slave Error (SLVERR).

Fix by implementing the write_with_attrs() handler.
Return MEMTX_ERROR when the region is accessed (this error maps
to an AXI slave error).

[*] https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf

Reported-by: Lei Sun <slei.casper@gmail.com>
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Tested-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/xilinx_spips.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/xilinx_spips.c
+++ b/hw/ssi/xilinx_spips.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult lqspi_read(void *opaque, hwaddr addr, uint64_t *value,
     return lqspi_read(opaque, addr, value, size, attrs);
 }

+static MemTxResult lqspi_write(void *opaque, hwaddr offset, uint64_t value,
+                               unsigned size, MemTxAttrs attrs)
+{
+    /*
+     * From UG1085, Chapter 24 (Quad-SPI controllers):
+     * - Writes are ignored
+     * - AXI writes generate an external AXI slave error (SLVERR)
+     */
+    qemu_log_mask(LOG_GUEST_ERROR, "%s Unexpected %u-bit access to 0x%" PRIx64
+                  " (value: 0x%" PRIx64 "\n",
+                  __func__, size << 3, offset, value);
+
+    return MEMTX_ERROR;
+}
+
 static const MemoryRegionOps lqspi_ops = {
     .read_with_attrs = lqspi_read,
+    .write_with_attrs = lqspi_write,
     .endianness = DEVICE_NATIVE_ENDIAN,
     .valid = {
         .min_access_size = 1,
--
2.20.1
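(Editorial note: the encoding checks in thumb_insn_is_unconditional() are
compact; spelled out, with the 16-bit insn left-padded with zeroes as the
comment in the patch describes:

    BKPT imm8     0b1011_1110_iiii_iiii  ->  (insn & 0xffffff00) == 0x0000be00
    HLT imm6      0b1011_1010_10ii_iiii  ->  (insn & 0xffffffc0) == 0x0000ba80
    SG (32-bit)   0xe97f_e97f            ->  insn == 0xe97fe97f

The all-zero top halfword cannot collide with a valid 32-bit encoding, which
is why a single uint32_t can carry either insn size.)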
[Old series patch: target/arm: Add M profile secure MMU index values to get_a32_user_mem_index()]

Add the M profile secure MMU index values to the switch in
get_a32_user_mem_index() so that LDRT/STRT work correctly
rather than asserting at translate time.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-2-git-send-email-peter.maydell@linaro.org
---
 target/arm/translate.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
     case ARMMMUIdx_MPriv:
     case ARMMMUIdx_MNegPri:
         return arm_to_core_mmu_idx(ARMMMUIdx_MUser);
+    case ARMMMUIdx_MSUser:
+    case ARMMMUIdx_MSPriv:
+    case ARMMMUIdx_MSNegPri:
+        return arm_to_core_mmu_idx(ARMMMUIdx_MSUser);
     case ARMMMUIdx_S2NS:
     default:
         g_assert_not_reached();
--
2.7.4

[New series patch: hw/ssi/xilinx_spips: Avoid out-of-bound access to lqspi_buf[]]

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Both lqspi_read() and lqspi_load_cache() expect a 32-bit
aligned address.

From the UG1085 datasheet [*], chapter on 'Quad-SPI Controller':

  Transfer Size Limitations

    Because of the 32-bit wide TX, RX, and generic FIFO, all
    APB/AXI transfers must be an integer multiple of 4-bytes.
    Shorter transfers are not possible.

Set MemoryRegionOps.impl values to force 32-bit accesses;
this way we are sure we do not access the lqspi_buf[] array
out of bounds.

[*] https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf

Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Tested-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/xilinx_spips.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/xilinx_spips.c
+++ b/hw/ssi/xilinx_spips.c
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps lqspi_ops = {
     .read_with_attrs = lqspi_read,
     .write_with_attrs = lqspi_write,
     .endianness = DEVICE_NATIVE_ENDIAN,
+    .impl = {
+        .min_access_size = 4,
+        .max_access_size = 4,
+    },
     .valid = {
         .min_access_size = 1,
         .max_access_size = 4
--
2.20.1
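(Editorial note, my understanding of the memory core rather than anything
stated in the patch: .valid describes what the guest may issue, while .impl
describes what the device callbacks can handle; when they differ,
access_with_adjusted_size() in memory.c converts between them. With
.impl.min_access_size = 4, a 1- or 2-byte guest access is performed
internally as a 4-byte lqspi_read() and the requested bytes are extracted
from the result, so the handler only ever sees size == 4 and the
32-bit-aligned indexing into lqspi_buf[] holds.)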
[Old series patch: arm: fix armv7m_init() declaration to match definition]

From: Igor Mammedov <imammedo@redhat.com>

s/cpu_model/cpu_type/ that has been forgotten during
conversion (ba1ba5cc), while touching the line also
fixup alignment.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 1507710805-221721-1-git-send-email-imammedo@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/arm.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/hw/arm/arm.h b/include/hw/arm/arm.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/arm.h
+++ b/include/hw/arm/arm.h
@@ -XXX,XX +XXX,XX @@ typedef enum {

 /* armv7m.c */
 DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
-                      const char *kernel_filename, const char *cpu_model);
+                         const char *kernel_filename, const char *cpu_type);
 /**
  * armv7m_load_kernel:
  * @cpu: CPU
--
2.7.4

[New series patch: hw/ssi/mss-spi: Avoid crash when reading empty RX FIFO]

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Reading the RX_DATA register when the RX_FIFO is empty triggers
an abort. This can be easily reproduced:

  $ qemu-system-arm -M emcraft-sf2 -monitor stdio -S
  QEMU 4.0.50 monitor - type 'help' for more information
  (qemu) x 0x40001010
  Aborted (core dumped)

  (gdb) bt
  #1  0x00007f035874f895 in abort () at /lib64/libc.so.6
  #2  0x00005628686591ff in fifo8_pop (fifo=0x56286a9a4c68) at util/fifo8.c:66
  #3  0x00005628683e0b8e in fifo32_pop (fifo=0x56286a9a4c68) at include/qemu/fifo32.h:137
  #4  0x00005628683e0efb in spi_read (opaque=0x56286a9a4850, addr=4, size=4) at hw/ssi/mss-spi.c:168
  #5  0x0000562867f96801 in memory_region_read_accessor (mr=0x56286a9a4b60, addr=16, value=0x7ffeecb0c5c8, size=4, shift=0, mask=4294967295, attrs=...) at memory.c:439
  #6  0x0000562867f96cdb in access_with_adjusted_size (addr=16, value=0x7ffeecb0c5c8, size=4, access_size_min=1, access_size_max=4, access_fn=0x562867f967c3 <memory_region_read_accessor>, mr=0x56286a9a4b60, attrs=...) at memory.c:569
  #7  0x0000562867f99940 in memory_region_dispatch_read1 (mr=0x56286a9a4b60, addr=16, pval=0x7ffeecb0c5c8, size=4, attrs=...) at memory.c:1420
  #8  0x0000562867f99a08 in memory_region_dispatch_read (mr=0x56286a9a4b60, addr=16, pval=0x7ffeecb0c5c8, size=4, attrs=...) at memory.c:1447
  #9  0x0000562867f38721 in flatview_read_continue (fv=0x56286aec6360, addr=1073745936, attrs=..., buf=0x7ffeecb0c7c0 "\340ǰ\354\376\177", len=4, addr1=16, l=4, mr=0x56286a9a4b60) at exec.c:3385
  #10 0x0000562867f38874 in flatview_read (fv=0x56286aec6360, addr=1073745936, attrs=..., buf=0x7ffeecb0c7c0 "\340ǰ\354\376\177", len=4) at exec.c:3423
  #11 0x0000562867f388ea in address_space_read_full (as=0x56286aa3e890, addr=1073745936, attrs=..., buf=0x7ffeecb0c7c0 "\340ǰ\354\376\177", len=4) at exec.c:3436
  #12 0x0000562867f389c5 in address_space_rw (as=0x56286aa3e890, addr=1073745936, attrs=..., buf=0x7ffeecb0c7c0 "\340ǰ\354\376\177", len=4, is_write=false) at exec.c:3466
  #13 0x0000562867f3bdd7 in cpu_memory_rw_debug (cpu=0x56286aa19d00, addr=1073745936, buf=0x7ffeecb0c7c0 "\340ǰ\354\376\177", len=4, is_write=0) at exec.c:3976
  #14 0x000056286811ed51 in memory_dump (mon=0x56286a8c32d0, count=1, format=120, wsize=4, addr=1073745936, is_physical=0) at monitor/misc.c:730
  #15 0x000056286811eff1 in hmp_memory_dump (mon=0x56286a8c32d0, qdict=0x56286b15c400) at monitor/misc.c:785
  #16 0x00005628684740ee in handle_hmp_command (mon=0x56286a8c32d0, cmdline=0x56286a8caeb2 "0x40001010") at monitor/hmp.c:1082

From the datasheet "Actel SmartFusion Microcontroller Subsystem
User's Guide" Rev.1, Table 13-3 "SPI Register Summary", this
register has a reset value of 0.

Check the FIFO is not empty before accessing it, else log an
error message.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20190709113715.7761-3-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/mss-spi.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/hw/ssi/mss-spi.c b/hw/ssi/mss-spi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/mss-spi.c
+++ b/hw/ssi/mss-spi.c
@@ -XXX,XX +XXX,XX @@ spi_read(void *opaque, hwaddr addr, unsigned int size)
     case R_SPI_RX:
         s->regs[R_SPI_STATUS] &= ~S_RXFIFOFUL;
         s->regs[R_SPI_STATUS] &= ~S_RXCHOVRF;
-        ret = fifo32_pop(&s->rx_fifo);
+        if (fifo32_is_empty(&s->rx_fifo)) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: Reading empty RX_FIFO\n",
+                          __func__);
+        } else {
+            ret = fifo32_pop(&s->rx_fifo);
+        }
         if (fifo32_is_empty(&s->rx_fifo)) {
             s->regs[R_SPI_STATUS] |= S_RXFIFOEMP;
         }
--
2.20.1
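(Editorial note: the reason a guest-visible register read can abort QEMU at
all is visible in frame #2 of the backtrace above -- fifo8_pop() in
util/fifo8.c treats popping an empty FIFO as a programming error and calls
abort(), and fifo32_pop() is a thin wrapper around it. Device models
therefore have to treat "guest reads the data register while the FIFO is
empty" as a guest error and guard every pop with fifo32_is_empty() or
fifo8_is_empty() first, exactly as this patch and the following xlnx_dp
patch do.)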
[Old series patch: target/arm: Implement secure function return]

Secure function return happens when a non-secure function has been
called using BLXNS and so has a particular magic LR value (either
0xfefffffe or 0xfeffffff). The function return via BX behaves
specially when the new PC value is this magic value, in the same
way that exception returns are handled.

Adjust our BX excret guards so that they recognize the function
return magic number as well, and perform the function-return
unstacking in do_v7m_exception_exit().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-5-git-send-email-peter.maydell@linaro.org
---
 target/arm/internals.h |   7 +++
 target/arm/helper.c    | 115 +++++++++++++++++++++++++++++++++++++++++++++----
 target/arm/translate.c |  14 +++++-
 3 files changed, 126 insertions(+), 10 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_EXCRET, DCRS, 5, 1)
 FIELD(V7M_EXCRET, S, 6, 1)
 FIELD(V7M_EXCRET, RES1, 7, 25) /* including the must-be-1 prefix */

+/* Minimum value which is a magic number for exception return */
+#define EXC_RETURN_MIN_MAGIC 0xff000000
+/* Minimum number which is a magic number for function or exception return
+ * when using v8M security extension
+ */
+#define FNC_RETURN_MIN_MAGIC 0xfefffffe
+
 /* We use a few fake FSR values for internal purposes in M profile.
  * M profile cores don't have A/R format FSRs, but currently our
  * get_phys_addr() code assumes A/R profile and reports failures via
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
      * - if the return value is a magic value, do exception return (like BX)
      * - otherwise bit 0 of the return value is the target security state
      */
-    if (dest >= 0xff000000) {
+    uint32_t min_magic;
+
+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        /* Covers FNC_RETURN and EXC_RETURN magic */
+        min_magic = FNC_RETURN_MIN_MAGIC;
+    } else {
+        /* EXC_RETURN magic only */
+        min_magic = EXC_RETURN_MIN_MAGIC;
+    }
+
+    if (dest >= min_magic) {
         /* This is an exception return magic value; put it where
          * do_v7m_exception_exit() expects and raise EXCEPTION_EXIT.
          * Note that if we ever add gen_ss_advance() singlestep support to
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
     bool exc_secure = false;
     bool return_to_secure;

-    /* We can only get here from an EXCP_EXCEPTION_EXIT, and
-     * gen_bx_excret() enforces the architectural rule
-     * that jumps to magic addresses don't have magic behaviour unless
-     * we're in Handler mode (compare pseudocode BXWritePC()).
+    /* If we're not in Handler mode then jumps to magic exception-exit
+     * addresses don't have magic behaviour. However for the v8M
+     * security extensions the magic secure-function-return has to
+     * work in thread mode too, so to avoid doing an extra check in
+     * the generated code we allow exception-exit magic to also cause the
+     * internal exception and bring us here in thread mode. Correct code
+     * will never try to do this (the following insn fetch will always
+     * fault) so we the overhead of having taken an unnecessary exception
+     * doesn't matter.
      */
-    assert(arm_v7m_is_handler_mode(env));
+    if (!arm_v7m_is_handler_mode(env)) {
+        return;
+    }

     /* In the spec pseudocode ExceptionReturn() is called directly
      * from BXWritePC() and gets the full target PC value including
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         qemu_log_mask(CPU_LOG_INT, "...successful exception return\n");
     }
 }

+static bool do_v7m_function_return(ARMCPU *cpu)
+{
+    /* v8M security extensions magic function return.
+     * We may either:
+     *  (1) throw an exception (longjump)
+     *  (2) return true if we successfully handled the function return
+     *  (3) return false if we failed a consistency check and have
+     *      pended a UsageFault that needs to be taken now
+     *
+     * At this point the magic return value is split between env->regs[15]
+     * and env->thumb. We don't bother to reconstitute it because we don't
+     * need it (all values are handled the same way).
+     */
+    CPUARMState *env = &cpu->env;
+    uint32_t newpc, newpsr, newpsr_exc;
+
+    qemu_log_mask(CPU_LOG_INT, "...really v7M secure function return\n");
+
+    {
+        bool threadmode, spsel;
+        TCGMemOpIdx oi;
+        ARMMMUIdx mmu_idx;
+        uint32_t *frame_sp_p;
+        uint32_t frameptr;
+
+        /* Pull the return address and IPSR from the Secure stack */
+        threadmode = !arm_v7m_is_handler_mode(env);
+        spsel = env->v7m.control[M_REG_S] & R_V7M_CONTROL_SPSEL_MASK;
+
+        frame_sp_p = get_v7m_sp_ptr(env, true, threadmode, spsel);
+        frameptr = *frame_sp_p;
+
+        /* These loads may throw an exception (for MPU faults). We want to
+         * do them as secure, so work out what MMU index that is.
+         */
+        mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
+        oi = make_memop_idx(MO_LE, arm_to_core_mmu_idx(mmu_idx));
+        newpc = helper_le_ldul_mmu(env, frameptr, oi, 0);
+        newpsr = helper_le_ldul_mmu(env, frameptr + 4, oi, 0);
+
+        /* Consistency checks on new IPSR */
+        newpsr_exc = newpsr & XPSR_EXCP;
+        if (!((env->v7m.exception == 0 && newpsr_exc == 0) ||
+              (env->v7m.exception == 1 && newpsr_exc != 0))) {
+            /* Pend the fault and tell our caller to take it */
+            env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+                                    env->v7m.secure);
+            qemu_log_mask(CPU_LOG_INT,
+                          "...taking INVPC UsageFault: "
+                          "IPSR consistency check failed\n");
+            return false;
+        }
+
+        *frame_sp_p = frameptr + 8;
+    }
+
+    /* This invalidates frame_sp_p */
+    switch_v7m_security_state(env, true);
+    env->v7m.exception = newpsr_exc;
+    env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
+    if (newpsr & XPSR_SFPA) {
+        env->v7m.control[M_REG_S] |= R_V7M_CONTROL_SFPA_MASK;
+    }
+    xpsr_write(env, 0, XPSR_IT);
+    env->thumb = newpc & 1;
+    env->regs[15] = newpc & ~1;
+
+    qemu_log_mask(CPU_LOG_INT, "...function return successful\n");
+    return true;
+}
+
 static void arm_log_exception(int idx)
 {
     if (qemu_loglevel_mask(CPU_LOG_INT)) {
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
     case EXCP_IRQ:
         break;
     case EXCP_EXCEPTION_EXIT:
-        do_v7m_exception_exit(cpu);
-        return;
+        if (env->regs[15] < EXC_RETURN_MIN_MAGIC) {
+            /* Must be v8M security extension function return */
+            assert(env->regs[15] >= FNC_RETURN_MIN_MAGIC);
+            assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
+            if (do_v7m_function_return(cpu)) {
+                return;
+            }
+        } else {
+            do_v7m_exception_exit(cpu);
+            return;
+        }
+        break;
     default:
         cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
         return; /* Never happens. Keep compiler happy. */
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx_excret(DisasContext *s, TCGv_i32 var)
      * s->base.is_jmp that we need to do the rest of the work later.
      */
     gen_bx(s, var);
-    if (s->v7m_handler_mode && arm_dc_feature(s, ARM_FEATURE_M)) {
+    if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY) ||
+        (s->v7m_handler_mode && arm_dc_feature(s, ARM_FEATURE_M))) {
         s->base.is_jmp = DISAS_BX_EXCRET;
     }
 }
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx_excret_final_code(DisasContext *s)
 {
     /* Generate the code to finish possible exception return and end the TB */
     TCGLabel *excret_label = gen_new_label();
+    uint32_t min_magic;
+
+    if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY)) {
+        /* Covers FNC_RETURN and EXC_RETURN magic */
+        min_magic = FNC_RETURN_MIN_MAGIC;
+    } else {
+        /* EXC_RETURN magic only */
+        min_magic = EXC_RETURN_MIN_MAGIC;
+    }

     /* Is the new PC value in the magic range indicating exception return? */
-    tcg_gen_brcondi_i32(TCG_COND_GEU, cpu_R[15], 0xff000000, excret_label);
+    tcg_gen_brcondi_i32(TCG_COND_GEU, cpu_R[15], min_magic, excret_label);
     /* No: end the TB as we would for a DISAS_JMP */
     if (is_singlestepping(s)) {
         gen_singlestep_exception(s);
--
2.7.4

[New series patch: hw/display/xlnx_dp: Avoid crash when reading empty RX FIFO]

From: Philippe Mathieu-Daudé <philmd@redhat.com>

In the previous commit we fixed a crash when the guest reads a
register that pops from an empty FIFO.
By auditing the repository, we found another similar use with
an easy way to reproduce:

  $ qemu-system-aarch64 -M xlnx-zcu102 -monitor stdio -S
  QEMU 4.0.50 monitor - type 'help' for more information
  (qemu) xp/b 0xfd4a0134
  Aborted (core dumped)

  (gdb) bt
  #0  0x00007f6936dea57f in raise () at /lib64/libc.so.6
  #1  0x00007f6936dd4895 in abort () at /lib64/libc.so.6
  #2  0x0000561ad32975ec in xlnx_dp_aux_pop_rx_fifo (s=0x7f692babee70) at hw/display/xlnx_dp.c:431
  #3  0x0000561ad3297dc0 in xlnx_dp_read (opaque=0x7f692babee70, offset=77, size=4) at hw/display/xlnx_dp.c:667
  #4  0x0000561ad321b896 in memory_region_read_accessor (mr=0x7f692babf620, addr=308, value=0x7ffe05c1db88, size=4, shift=0, mask=4294967295, attrs=...) at memory.c:439
  #5  0x0000561ad321bd70 in access_with_adjusted_size (addr=308, value=0x7ffe05c1db88, size=1, access_size_min=4, access_size_max=4, access_fn=0x561ad321b858 <memory_region_read_accessor>, mr=0x7f692babf620, attrs=...) at memory.c:569
  #6  0x0000561ad321e9d5 in memory_region_dispatch_read1 (mr=0x7f692babf620, addr=308, pval=0x7ffe05c1db88, size=1, attrs=...) at memory.c:1420
  #7  0x0000561ad321ea9d in memory_region_dispatch_read (mr=0x7f692babf620, addr=308, pval=0x7ffe05c1db88, size=1, attrs=...) at memory.c:1447
  #8  0x0000561ad31bd742 in flatview_read_continue (fv=0x561ad69c04f0, addr=4249485620, attrs=..., buf=0x7ffe05c1dcf0 "\020\335\301\005\376\177", len=1, addr1=308, l=1, mr=0x7f692babf620) at exec.c:3385
  #9  0x0000561ad31bd895 in flatview_read (fv=0x561ad69c04f0, addr=4249485620, attrs=..., buf=0x7ffe05c1dcf0 "\020\335\301\005\376\177", len=1) at exec.c:3423
  #10 0x0000561ad31bd90b in address_space_read_full (as=0x561ad5bb3020, addr=4249485620, attrs=..., buf=0x7ffe05c1dcf0 "\020\335\301\005\376\177", len=1) at exec.c:3436
  #11 0x0000561ad33b1c42 in address_space_read (len=1, buf=0x7ffe05c1dcf0 "\020\335\301\005\376\177", attrs=..., addr=4249485620, as=0x561ad5bb3020) at include/exec/memory.h:2131
  #12 0x0000561ad33b1c42 in memory_dump (mon=0x561ad59c4530, count=1, format=120, wsize=1, addr=4249485620, is_physical=1) at monitor/misc.c:723
  #13 0x0000561ad33b1fc1 in hmp_physical_memory_dump (mon=0x561ad59c4530, qdict=0x561ad6c6fd00) at monitor/misc.c:795
  #14 0x0000561ad37b4a9f in handle_hmp_command (mon=0x561ad59c4530, cmdline=0x561ad59d0f22 "/b 0x00000000fd4a0134") at monitor/hmp.c:1082

Fix by checking the FIFO is not empty before popping from it.

The datasheet is not clear about the reset value of this register;
we choose to return '0'.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20190709113715.7761-4-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/display/xlnx_dp.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/hw/display/xlnx_dp.c b/hw/display/xlnx_dp.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/display/xlnx_dp.c
+++ b/hw/display/xlnx_dp.c
@@ -XXX,XX +XXX,XX @@ static uint8_t xlnx_dp_aux_pop_rx_fifo(XlnxDPState *s)
     uint8_t ret;

     if (fifo8_is_empty(&s->rx_fifo)) {
-        DPRINTF("rx_fifo underflow..\n");
-        abort();
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Reading empty RX_FIFO\n",
+                      __func__);
+        /*
+         * The datasheet is not clear about the reset value, it seems
+         * to be unspecified. We choose to return '0'.
+         */
+        ret = 0;
+    } else {
+        ret = fifo8_pop(&s->rx_fifo);
+        DPRINTF("pop 0x%" PRIX8 " from rx_fifo.\n", ret);
     }
-    ret = fifo8_pop(&s->rx_fifo);
-    DPRINTF("pop 0x%" PRIX8 " from rx_fifo.\n", ret);
     return ret;
--
2.20.1
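(Editorial note: a worked view of why one comparison suffices in
gen_bx_excret_final_code(). The magic ranges are contiguous:

    FNC_RETURN values:  0xfefffffe, 0xfeffffff
    EXC_RETURN values:  0xff000000 .. 0xffffffff

so with the security extension, 'PC >= FNC_RETURN_MIN_MAGIC (0xfefffffe)'
catches both kinds of magic return, and without it
'PC >= EXC_RETURN_MIN_MAGIC (0xff000000)' catches exception return only --
a single unsigned greater-or-equal branch either way.)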
[Old series patch: watchdog/aspeed: fix variable type to store reload value]

From: Cédric Le Goater <clg@kaod.org>

Initially from Anton D. Kachalov <mouse@yandex-team.ru>, but the SoB was
missing.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Andrew Jeffery <andrew@aj.id.au>
Message-id: 20170920064915.30027-1-clg@kaod.org
[clg: change commit log and subject
      replace UL suffix by ULL ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/watchdog/wdt_aspeed.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/watchdog/wdt_aspeed.c b/hw/watchdog/wdt_aspeed.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/watchdog/wdt_aspeed.c
+++ b/hw/watchdog/wdt_aspeed.c
@@ -XXX,XX +XXX,XX @@ static uint64_t aspeed_wdt_read(void *opaque, hwaddr offset, unsigned size)

 static void aspeed_wdt_reload(AspeedWDTState *s, bool pclk)
 {
-    uint32_t reload;
+    uint64_t reload;

     if (pclk) {
         reload = muldiv64(s->regs[WDT_RELOAD_VALUE], NANOSECONDS_PER_SECOND,
                           s->pclk_freq);
     } else {
-        reload = s->regs[WDT_RELOAD_VALUE] * 1000;
+        reload = s->regs[WDT_RELOAD_VALUE] * 1000ULL;
     }

     if (aspeed_wdt_is_enabled(s)) {
--
2.7.4

[New series patch: hw/arm/virt: Fix non-secure flash mode]

From: David Engraf <david.engraf@sysgo.com>

Using the whole 128 MiB flash in non-secure mode is not working because
virt_flash_fdt() expects the same address for secure_sysmem and sysmem.
This is not correctly handled by the caller because it forwards NULL for
secure_sysmem in non-secure flash mode.

Fixed by using sysmem when secure_sysmem is NULL.

Signed-off-by: David Engraf <david.engraf@sysgo.com>
Message-id: 20190712075002.14326-1-david.engraf@sysgo.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
                                      &machine->device_memory->mr);
     }

-    virt_flash_fdt(vms, sysmem, secure_sysmem);
+    virt_flash_fdt(vms, sysmem, secure_sysmem ?: sysmem);

     create_gic(vms, pic);

--
2.20.1
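(Two editorial notes, not from the commit messages. First, the watchdog
overflow is easy to make concrete: with a uint32_t reload, the expression
's->regs[WDT_RELOAD_VALUE] * 1000' is evaluated in 32-bit arithmetic and
wraps for register values above 0xFFFFFFFF / 1000 = 4294967, e.g.

    uint32_t reg  = 5000000;
    uint32_t bad  = reg * 1000;     /* wraps to 705032704 */
    uint64_t good = reg * 1000ULL;  /* 5000000000, as intended */

the ULL suffix promotes the multiplication to 64 bits, and the variable
type change keeps the result. Second, in the virt fix,
'secure_sysmem ?: sysmem' is the GNU C conditional with omitted middle
operand, equivalent to 'secure_sysmem ? secure_sysmem : sysmem'.)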
[Deleted patch -- present only in the old series: target/arm: Implement SG instruction]

Implement the SG instruction, which we emulate 'by hand' in the
exception handling code path.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-3-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 132 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 127 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ typedef struct V8M_SAttributes {
     bool irvalid;
 } V8M_SAttributes;

+static void v8m_security_lookup(CPUARMState *env, uint32_t address,
+                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                                V8M_SAttributes *sattrs);
+
 /* Definitions for the PMCCNTR and PMCR registers */
 #define PMCRD   0x8
 #define PMCRC   0x4
@@ -XXX,XX +XXX,XX @@ static void arm_log_exception(int idx)
     }
 }

+static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
+                               uint32_t addr, uint16_t *insn)
+{
+    /* Load a 16-bit portion of a v7M instruction, returning true on success,
+     * or false on failure (in which case we will have pended the appropriate
+     * exception).
+     * We need to do the instruction fetch's MPU and SAU checks
+     * like this because there is no MMU index that would allow
+     * doing the load with a single function call. Instead we must
+     * first check that the security attributes permit the load
+     * and that they don't mismatch on the two halves of the instruction,
+     * and then we do the load as a secure load (ie using the security
+     * attributes of the address, not the CPU, as architecturally required).
+     */
+    CPUState *cs = CPU(cpu);
+    CPUARMState *env = &cpu->env;
+    V8M_SAttributes sattrs = {};
+    MemTxAttrs attrs = {};
+    ARMMMUFaultInfo fi = {};
+    MemTxResult txres;
+    target_ulong page_size;
+    hwaddr physaddr;
+    int prot;
+    uint32_t fsr;
+
+    v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, &sattrs);
+    if (!sattrs.nsc || sattrs.ns) {
+        /* This must be the second half of the insn, and it straddles a
+         * region boundary with the second half not being S&NSC.
+         */
+        env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+        qemu_log_mask(CPU_LOG_INT,
+                      "...really SecureFault with SFSR.INVEP\n");
+        return false;
+    }
+    if (get_phys_addr(env, addr, MMU_INST_FETCH, mmu_idx,
+                      &physaddr, &attrs, &prot, &page_size, &fsr, &fi)) {
+        /* the MPU lookup failed */
+        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK;
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, env->v7m.secure);
+        qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n");
+        return false;
+    }
+    *insn = address_space_lduw_le(arm_addressspace(cs, attrs), physaddr,
+                                  attrs, &txres);
+    if (txres != MEMTX_OK) {
+        env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
+        qemu_log_mask(CPU_LOG_INT, "...really BusFault with CFSR.IBUSERR\n");
+        return false;
+    }
+    return true;
+}
+
+static bool v7m_handle_execute_nsc(ARMCPU *cpu)
+{
+    /* Check whether this attempt to execute code in a Secure & NS-Callable
+     * memory region is for an SG instruction; if so, then emulate the
+     * effect of the SG instruction and return true. Otherwise pend
+     * the correct kind of exception and return false.
+     */
+    CPUARMState *env = &cpu->env;
+    ARMMMUIdx mmu_idx;
+    uint16_t insn;
+
+    /* We should never get here unless get_phys_addr_pmsav8() caused
+     * an exception for NS executing in S&NSC memory.
+     */
+    assert(!env->v7m.secure);
+    assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
+
+    /* We want to do the MPU lookup as secure; work out what mmu_idx that is */
+    mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
+
+    if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) {
+        return false;
+    }
+
+    if (!env->thumb) {
+        goto gen_invep;
+    }
+
+    if (insn != 0xe97f) {
+        /* Not an SG instruction first half (we choose the IMPDEF
+         * early-SG-check option).
+         */
+        goto gen_invep;
+    }
+
+    if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) {
+        return false;
+    }
+
+    if (insn != 0xe97f) {
+        /* Not an SG instruction second half (yes, both halves of the SG
+         * insn have the same hex value)
+         */
+        goto gen_invep;
+    }
+
+    /* OK, we have confirmed that we really have an SG instruction.
+     * We know we're NS in S memory so don't need to repeat those checks.
+     */
+    qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
+                  ", executing it\n", env->regs[15]);
+    env->regs[14] &= ~1;
+    switch_v7m_security_state(env, true);
+    xpsr_write(env, 0, XPSR_IT);
+    env->regs[15] += 4;
+    return true;
+
+gen_invep:
+    env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
+    armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+    qemu_log_mask(CPU_LOG_INT,
+                  "...really SecureFault with SFSR.INVEP\n");
+    return false;
+}
+
 void arm_v7m_cpu_do_interrupt(CPUState *cs)
 {
     ARMCPU *cpu = ARM_CPU(cs);
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
              * the SG instruction have the same security attributes.)
              * Everything else must generate an INVEP SecureFault, so we
              * emulate the SG instruction here.
-             * TODO: actually emulate SG.
              */
-            env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
-            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
-            qemu_log_mask(CPU_LOG_INT,
-                          "...really SecureFault with SFSR.INVEP\n");
+            if (v7m_handle_execute_nsc(cpu)) {
+                return;
+            }
             break;
         case M_FAKE_FSR_SFAULT:
             /* Various flavours of SecureFault for attempts to execute or
--
2.7.4
[Old series patch: target/arm: Implement BLXNS]

Implement the BLXNS instruction, which allows secure code to
call non-secure code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-4-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.h    |  1 +
 target/arm/internals.h |  1 +
 target/arm/helper.c    | 59 ++++++++++++++++++++++++++++++++++++++++++++++++++
 target/arm/translate.c | 17 +++++++++++++--
 4 files changed, 76 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(v7m_msr, void, env, i32, i32)
 DEF_HELPER_2(v7m_mrs, i32, env, i32)

 DEF_HELPER_2(v7m_bxns, void, env, i32)
+DEF_HELPER_2(v7m_blxns, void, env, i32)

 DEF_HELPER_4(access_check_cp_reg, void, env, ptr, i32, i32)
 DEF_HELPER_3(set_cp_reg, void, env, ptr, i32)
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline bool excp_is_internal(int excp)
 FIELD(V7M_CONTROL, NPRIV, 0, 1)
 FIELD(V7M_CONTROL, SPSEL, 1, 1)
 FIELD(V7M_CONTROL, FPCA, 2, 1)
+FIELD(V7M_CONTROL, SFPA, 3, 1)

 /* Bit definitions for v7M exception return payload */
 FIELD(V7M_EXCRET, ES, 0, 1)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
     g_assert_not_reached();
 }

+void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
+{
+    /* translate.c should never generate calls here in user-only mode */
+    g_assert_not_reached();
+}
+
 void switch_mode(CPUARMState *env, int mode)
 {
     ARMCPU *cpu = arm_env_get_cpu(env);
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
     env->regs[15] = dest & ~1;
 }

+void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
+{
+    /* Handle v7M BLXNS:
+     *  - bit 0 of the destination address is the target security state

[New series patch: pl031: Correctly migrate state when using -rtc clock=host]

The PL031 RTC tracks the difference between the guest RTC
and the host RTC using a tick_offset field. For migration,
however, we currently always migrate the offset between
the guest and the vm_clock, even if the RTC clock is not
the same as the vm_clock; this was an attempt to retain
migration backwards compatibility.

Unfortunately this results in the RTC behaving oddly across
a VM state save and restore -- since the VM clock stands still
across save-then-restore, regardless of how much real world
time has elapsed, the guest RTC ends up out of sync with the
host RTC in the restored VM.

Fix this by migrating the raw tick_offset. To retain migration
compatibility as far as possible, we have a new property
migrate-tick-offset; by default this is 'true' and we will
migrate the true tick offset in a new subsection; if the
incoming data has no subsection we fall back to the old
vm_clock-based offset information, so old->new migration
compatibility is preserved. For complete new->old migration
compatibility, the property is set to 'false' for 4.0 and
earlier machine types (this will only affect 'virt-4.0'
and below, as none of the other pl031-using machines are
versioned).

Reported-by: Russell King <rmk@armlinux.org.uk>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: 20190709143912.28905-1-peter.maydell@linaro.org
---
 include/hw/timer/pl031.h |  2 ++
 hw/core/machine.c        |  1 +
 hw/timer/pl031.c         | 92 ++++++++++++++++++++++++++++++++++++++--
 3 files changed, 91 insertions(+), 4 deletions(-)

diff --git a/include/hw/timer/pl031.h b/include/hw/timer/pl031.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/timer/pl031.h
+++ b/include/hw/timer/pl031.h
@@ -XXX,XX +XXX,XX @@ typedef struct PL031State {
      */
     uint32_t tick_offset_vmstate;
     uint32_t tick_offset;
+    bool tick_offset_migrated;
+    bool migrate_tick_offset;

     uint32_t mr;
     uint32_t lr;
diff --git a/hw/core/machine.c b/hw/core/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -XXX,XX +XXX,XX @@ GlobalProperty hw_compat_4_0[] = {
     { "virtio-gpu-pci", "edid", "false" },
     { "virtio-device", "use-started", "false" },
     { "virtio-balloon-device", "qemu-4-0-config-size", "true" },
+    { "pl031", "migrate-tick-offset", "false" },
 };
 const size_t hw_compat_4_0_len = G_N_ELEMENTS(hw_compat_4_0);

diff --git a/hw/timer/pl031.c b/hw/timer/pl031.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/pl031.c
+++ b/hw/timer/pl031.c
@@ -XXX,XX +XXX,XX @@ static int pl031_pre_save(void *opaque)
 {
     PL031State *s = opaque;

-    /* tick_offset is base_time - rtc_clock base time. Instead, we want to
-     * store the base time relative to the QEMU_CLOCK_VIRTUAL for backwards-compatibility. */
+    /*
+     * The PL031 device model code uses the tick_offset field, which is
+     * the offset between what the guest RTC should read and what the
+     * QEMU rtc_clock reads:
+     *  guest_rtc = rtc_clock + tick_offset
+     * and so
+     *  tick_offset = guest_rtc - rtc_clock
+     *
+     * We want to migrate this offset, which sounds straightforward.
+     * Unfortunately older versions of QEMU migrated a conversion of this
+     * offset into an offset from the vm_clock. (This was in turn an
+     * attempt to be compatible with even older QEMU versions, but it
+     * has incorrect behaviour if the rtc_clock is not the same as the
+     * vm_clock.) So we put the actual tick_offset into a migration
+     * subsection, and the backwards-compatible time-relative-to-vm_clock
+     * in the main migration state.
+     *
+     * Calculate base time relative to QEMU_CLOCK_VIRTUAL:
+     */
     int64_t delta = qemu_clock_get_ns(rtc_clock) - qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
     s->tick_offset_vmstate = s->tick_offset + delta / NANOSECONDS_PER_SECOND;

     return 0;
 }

+static int pl031_pre_load(void *opaque)
+{
+    PL031State *s = opaque;
+
+    s->tick_offset_migrated = false;
+    return 0;
+}
+
 static int pl031_post_load(void *opaque, int version_id)
 {
     PL031State *s = opaque;

-    int64_t delta = qemu_clock_get_ns(rtc_clock) - qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
-    s->tick_offset = s->tick_offset_vmstate - delta / NANOSECONDS_PER_SECOND;
+    /*
+     * If we got the tick_offset subsection, then we can just use
+     * the value in that. Otherwise the source is an older QEMU and
+     * has given us the offset from the vm_clock; convert it back to
+     * an offset from the rtc_clock. This will cause time to incorrectly
+     * go backwards compared to the host RTC, but this is unavoidable.
+     */
+
+    if (!s->tick_offset_migrated) {
+        int64_t delta = qemu_clock_get_ns(rtc_clock) -
+            qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+        s->tick_offset = s->tick_offset_vmstate -
+            delta / NANOSECONDS_PER_SECOND;
+    }
     pl031_set_alarm(s);
     return 0;
 }

+static int pl031_tick_offset_post_load(void *opaque, int version_id)
+{
+    PL031State *s = opaque;
+
+    s->tick_offset_migrated = true;
133
+ return 0;
134
+}
135
+
136
+static bool pl031_tick_offset_needed(void *opaque)
137
+{
138
+ PL031State *s = opaque;
139
+
140
+ return s->migrate_tick_offset;
141
+}
142
+
143
+static const VMStateDescription vmstate_pl031_tick_offset = {
144
+ .name = "pl031/tick-offset",
145
+ .version_id = 1,
146
+ .minimum_version_id = 1,
147
+ .needed = pl031_tick_offset_needed,
148
+ .post_load = pl031_tick_offset_post_load,
149
+ .fields = (VMStateField[]) {
150
+ VMSTATE_UINT32(tick_offset, PL031State),
151
+ VMSTATE_END_OF_LIST()
152
+ }
153
+};
154
+
155
static const VMStateDescription vmstate_pl031 = {
156
.name = "pl031",
157
.version_id = 1,
158
.minimum_version_id = 1,
159
.pre_save = pl031_pre_save,
160
+ .pre_load = pl031_pre_load,
161
.post_load = pl031_post_load,
162
.fields = (VMStateField[]) {
163
VMSTATE_UINT32(tick_offset_vmstate, PL031State),
164
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_pl031 = {
165
VMSTATE_UINT32(im, PL031State),
166
VMSTATE_UINT32(is, PL031State),
167
VMSTATE_END_OF_LIST()
168
+ },
169
+ .subsections = (const VMStateDescription*[]) {
170
+ &vmstate_pl031_tick_offset,
171
+ NULL
172
}
173
};
174
175
+static Property pl031_properties[] = {
176
+ /*
177
+ * True to correctly migrate the tick offset of the RTC. False to
178
+ * obtain backward migration compatibility with older QEMU versions,
179
+ * at the expense of the guest RTC going backwards compared with the
180
+ * host RTC when the VM is saved/restored if using -rtc host.
181
+ * (Even if set to 'true' older QEMU can migrate forward to newer QEMU;
182
+ * 'false' also permits newer QEMU to migrate to older QEMU.)
63
+ */
183
+ */
64
+
184
+ DEFINE_PROP_BOOL("migrate-tick-offset",
65
+ /* At this point regs[15] is the address just after the BLXNS */
185
+ PL031State, migrate_tick_offset, true),
66
+ uint32_t nextinst = env->regs[15] | 1;
186
+ DEFINE_PROP_END_OF_LIST()
67
+ uint32_t sp = env->regs[13] - 8;
187
+};
68
+ uint32_t saved_psr;
188
+
69
+
189
static void pl031_class_init(ObjectClass *klass, void *data)
70
+ /* translate.c will have made BLXNS UNDEF unless we're secure */
71
+ assert(env->v7m.secure);
72
+
73
+ if (dest & 1) {
74
+ /* target is Secure, so this is just a normal BLX,
75
+ * except that the low bit doesn't indicate Thumb/not.
76
+ */
77
+ env->regs[14] = nextinst;
78
+ env->thumb = 1;
79
+ env->regs[15] = dest & ~1;
80
+ return;
81
+ }
82
+
83
+ /* Target is non-secure: first push a stack frame */
84
+ if (!QEMU_IS_ALIGNED(sp, 8)) {
85
+ qemu_log_mask(LOG_GUEST_ERROR,
86
+ "BLXNS with misaligned SP is UNPREDICTABLE\n");
87
+ }
88
+
89
+ saved_psr = env->v7m.exception;
90
+ if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK) {
91
+ saved_psr |= XPSR_SFPA;
92
+ }
93
+
94
+ /* Note that these stores can throw exceptions on MPU faults */
95
+ cpu_stl_data(env, sp, nextinst);
96
+ cpu_stl_data(env, sp + 4, saved_psr);
97
+
98
+ env->regs[13] = sp;
99
+ env->regs[14] = 0xfeffffff;
100
+ if (arm_v7m_is_handler_mode(env)) {
101
+ /* Write a dummy value to IPSR, to avoid leaking the current secure
102
+ * exception number to non-secure code. This is guaranteed not
103
+ * to cause write_v7m_exception() to actually change stacks.
104
+ */
105
+ write_v7m_exception(env, 1);
106
+ }
107
+ switch_v7m_security_state(env, 0);
108
+ env->thumb = 1;
109
+ env->regs[15] = dest;
110
+}
111
+
112
static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode,
113
bool spsel)
114
{
190
{
115
diff --git a/target/arm/translate.c b/target/arm/translate.c
191
DeviceClass *dc = DEVICE_CLASS(klass);
116
index XXXXXXX..XXXXXXX 100644
192
117
--- a/target/arm/translate.c
193
dc->vmsd = &vmstate_pl031;
118
+++ b/target/arm/translate.c
194
+ dc->props = pl031_properties;
119
@@ -XXX,XX +XXX,XX @@ static inline void gen_bxns(DisasContext *s, int rm)
120
s->base.is_jmp = DISAS_EXIT;
121
}
195
}
122
196
123
+static inline void gen_blxns(DisasContext *s, int rm)
197
static const TypeInfo pl031_info = {
124
+{
125
+ TCGv_i32 var = load_reg(s, rm);
126
+
127
+ /* We don't need to sync condexec state, for the same reason as bxns.
128
+ * We do however need to set the PC, because the blxns helper reads it.
129
+ * The blxns helper may throw an exception.
130
+ */
131
+ gen_set_pc_im(s, s->pc);
132
+ gen_helper_v7m_blxns(cpu_env, var);
133
+ tcg_temp_free_i32(var);
134
+ s->base.is_jmp = DISAS_EXIT;
135
+}
136
+
137
/* Variant of store_reg which uses branch&exchange logic when storing
138
to r15 in ARM architecture v7 and above. The source must be a temporary
139
and will be marked as dead. */
140
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s)
141
goto undef;
142
}
143
if (link) {
144
- /* BLXNS: not yet implemented */
145
- goto undef;
146
+ gen_blxns(s, rm);
147
} else {
148
gen_bxns(s, rm);
149
}
150
--
198
--
151
2.7.4
199
2.20.1
152
200
153
201
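
As a reading aid for the BLXNS patch just above, here is a minimal C sketch
of the helper's non-secure call path (editorial illustration only, not QEMU
code: the MiniCpu type, the stack array standing in for guest memory, and
the function name are all invented):

#include <stdint.h>
#include <stdbool.h>

/* Toy model of the non-secure branch of the helper: push the return
 * state, load LR with the special FNC_RETURN value, drop to NonSecure.
 */
typedef struct {
    uint32_t r13, r14, r15;
    uint32_t saved_psr;
    bool secure;
    uint32_t stack[2];          /* stands in for guest stack memory */
} MiniCpu;

static void blxns_to_nonsecure(MiniCpu *cpu, uint32_t dest)
{
    uint32_t nextinst = cpu->r15 | 1;   /* return address, Thumb bit set */

    cpu->r13 -= 8;                      /* push a two-word stack frame */
    cpu->stack[0] = nextinst;
    cpu->stack[1] = cpu->saved_psr;

    cpu->r14 = 0xfeffffff;              /* magic LR: return via FNC_RETURN */
    cpu->secure = false;                /* take the branch in NonSecure state */
    cpu->r15 = dest;                    /* dest bit 0 was 0: NS target */
}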
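Returning to the pl031 patch above: the following standalone sketch shows
the clock arithmetic that the legacy vm_clock-relative format forces on
both ends of a migration, and why the recovered offset goes stale when the
vm_clock stands still across a save/restore while host time advances. The
function names are invented and NANOSECONDS_PER_SECOND is written out as a
literal; this is an illustration, not the QEMU implementation:

#include <stdint.h>

/* Source side: rebase tick_offset from rtc_clock to the vm_clock,
 * as pl031_pre_save() does.
 */
static uint32_t offset_to_vmstate(uint32_t tick_offset,
                                  int64_t rtc_now_ns, int64_t vm_now_ns)
{
    return tick_offset + (rtc_now_ns - vm_now_ns) / 1000000000;
}

/* Destination side: rebase back, as the pre-subsection post_load did.
 * If the host RTC advanced while the vm_clock stood still, the delta
 * differs between the two hosts and the recovered offset is stale by
 * the elapsed host time.
 */
static uint32_t offset_from_vmstate(uint32_t tick_offset_vmstate,
                                    int64_t rtc_now_ns, int64_t vm_now_ns)
{
    return tick_offset_vmstate - (rtc_now_ns - vm_now_ns) / 1000000000;
}
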
Deleted patch
The code which implements the Thumb1 split BL/BLX instructions
is guarded by a check on "not M or THUMB2". All we really need
to check here is "not THUMB2" (and we assume that elsewhere too,
eg in the ARCH(6T2) test that UNDEFs the Thumb2 insns).

This doesn't change behaviour because all M profile cores
have Thumb2 and so ARM_FEATURE_M implies ARM_FEATURE_THUMB2.
(v6M implements a very restricted subset of Thumb2, but we
can cross that bridge when we get to it with appropriate
feature bits.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-6-git-send-email-peter.maydell@linaro.org
---
 target/arm/translate.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw
     int conds;
     int logic_cc;
 
-    if (!(arm_dc_feature(s, ARM_FEATURE_THUMB2)
-          || arm_dc_feature(s, ARM_FEATURE_M))) {
+    if (!arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
         /* Thumb-1 cores may need to treat bl and blx as a pair of
            16-bit instructions to get correct prefetch abort behavior. */
         insn = insn_hw1;
--
2.7.4
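
The justification above rests on one invariant: every M-profile core sets
the Thumb2 feature. A hypothetical self-check of that invariant might look
like this (an editor's sketch, not part of the patch; the function name is
invented):

#include <assert.h>
#include <stdbool.h>

/* Hypothetical sanity check: the invariant the simplified condition
 * relies on, as a checkable predicate.
 */
static void check_m_implies_thumb2(bool has_feature_m, bool has_feature_thumb2)
{
    /* Every M profile core has Thumb2, so dropping the M check is safe */
    assert(!has_feature_m || has_feature_thumb2);
}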
Recent changes have left insn_crosses_page() more complicated
than it needed to be:
 * it's only called from thumb_tr_translate_insn() so we know
   for certain that we're looking at a Thumb insn
 * the caller's check for dc->pc >= dc->next_page_start - 3
   means that dc->pc can't possibly be 4 aligned, so there's
   no need to check that (the check was partly there to ensure
   that we didn't treat an ARM insn as Thumb, I think)
 * we now have thumb_insn_is_16bit() which lets us do a precise
   check of the length of the next insn, rather than opencoding
   an inaccurate check

Simplify it down to just loading the first half of the insn
and calling thumb_insn_is_16bit() on it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-8-git-send-email-peter.maydell@linaro.org
---
 target/arm/translate.c | 27 ++++++---------------------
 1 file changed, 6 insertions(+), 21 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool insn_crosses_page(CPUARMState *env, DisasContext *s)
 {
     /* Return true if the insn at dc->pc might cross a page boundary.
      * (False positives are OK, false negatives are not.)
+     * We know this is a Thumb insn, and our caller ensures we are
+     * only called if dc->pc is less than 4 bytes from the page
+     * boundary, so we cross the page if the first 16 bits indicate
+     * that this is a 32 bit insn.
      */
-    uint16_t insn;
+    uint16_t insn = arm_lduw_code(env, s->pc, s->sctlr_b);
 
-    if ((s->pc & 3) == 0) {
-        /* At a 4-aligned address we can't be crossing a page */
-        return false;
-    }
-
-    /* This must be a Thumb insn */
-    insn = arm_lduw_code(env, s->pc, s->sctlr_b);
-
-    if ((insn >> 11) >= 0x1d) {
-        /* Top five bits 0b11101 / 0b11110 / 0b11111 : this is the
-         * First half of a 32-bit Thumb insn. Thumb-1 cores might
-         * end up actually treating this as two 16-bit insns (see the
-         * code at the start of disas_thumb2_insn()) but we don't bother
-         * to check for that as it is unlikely, and false positives here
-         * are harmless.
-         */
-        return true;
-    }
-    /* Definitely a 16-bit insn, can't be crossing a page. */
-    return false;
+    return !thumb_insn_is_16bit(s, insn);
 }
 
 static int arm_tr_init_disas_context(DisasContextBase *dcbase,
--
2.7.4

The ARMv5 architecture didn't specify detailed per-feature ID
registers. Now that we're using the MVFR0 register fields to
gate the existence of VFP instructions, we need to set up
the correct values in the cpu->isar structure so that we still
provide an FPU to the guest.

This fixes a regression in the arm926 and arm1026 CPUs, which
are the only ones that both have VFP and are ARMv5 or earlier.
This regression was introduced by the VFP refactoring, and more
specifically by commits 1120827fa182f0e76 and 266bd25c485597c,
which accidentally disabled VFP short-vector support and
double-precision support on these CPUs.

Fixes: 1120827fa182f0e
Fixes: 266bd25c485597c
Fixes: https://bugs.launchpad.net/qemu/+bug/1836192
Reported-by: Christophe Lyon <christophe.lyon@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Christophe Lyon <christophe.lyon@linaro.org>
Message-id: 20190711131241.22231-1-peter.maydell@linaro.org
---
 target/arm/cpu.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm926_initfn(Object *obj)
      * set the field to indicate Jazelle support within QEMU.
      */
     cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
+    /*
+     * Similarly, we need to set MVFR0 fields to enable double precision
+     * and short vector support even though ARMv5 doesn't have this register.
+     */
+    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
+    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPDP, 1);
 }
 
 static void arm946_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void arm1026_initfn(Object *obj)
      * set the field to indicate Jazelle support within QEMU.
      */
     cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
+    /*
+     * Similarly, we need to set MVFR0 fields to enable double precision
+     * and short vector support even though ARMv5 doesn't have this register.
+     */
+    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
+    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPDP, 1);
 
     {
         /* The 1026 had an IFAR at c6,c0,0,1 rather than the ARMv6 c6,c0,0,2 */
--
2.20.1
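
For readers unfamiliar with QEMU's registerfields machinery: FIELD_DP32()
deposits a value into a named bit field of a 32-bit register image. A
hand-expanded sketch of one of the two MVFR0 updates above, assuming
MVFR0.FPDP is the 4-bit field at bits [11:8] (illustration only; the real
macro derives the shift and length from the FIELD() definition):

#include <stdint.h>

/* Hand-written equivalent of FIELD_DP32(mvfr0, MVFR0, FPDP, 1),
 * assuming FPDP occupies bits [11:8]; the helper name is invented.
 */
static uint32_t mvfr0_set_fpdp(uint32_t mvfr0, uint32_t val)
{
    const uint32_t shift = 8, length = 4;
    uint32_t mask = ((1u << length) - 1) << shift;

    return (mvfr0 & ~mask) | ((val << shift) & mask);
}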
Deleted patch
The common situation of the SG instruction is that it is
executed from S&NSC memory by a CPU in NS state. That case
is handled by v7m_handle_execute_nsc(). However the instruction
also has defined behaviour in a couple of other cases:
 * SG instruction in NS memory (behaves as a NOP)
 * SG in S memory but CPU already secure (clears IT bits and
   does nothing else)
 * SG instruction in v8M without Security Extension (NOP)

These can be implemented in translate.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-10-git-send-email-peter.maydell@linaro.org
---
 target/arm/translate.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_thumb2_insn(DisasContext *s, uint32_t insn)
      *  - load/store doubleword, load/store exclusive, ldacq/strel,
      *    table branch.
      */
-    if (insn & 0x01200000) {
+    if (insn == 0xe97fe97f && arm_dc_feature(s, ARM_FEATURE_M) &&
+        arm_dc_feature(s, ARM_FEATURE_V8)) {
+        /* 0b1110_1001_0111_1111_1110_1001_0111_1111
+         *  - SG (v8M only)
+         * The bulk of the behaviour for this instruction is implemented
+         * in v7m_handle_execute_nsc(), which deals with the insn when
+         * it is executed by a CPU in non-secure state from memory
+         * which is Secure & NonSecure-Callable.
+         * Here we only need to handle the remaining cases:
+         *  * in NS memory (including the "security extension not
+         *    implemented" case) : NOP
+         *  * in S memory but CPU already secure (clear IT bits)
+         * We know that the attribute for the memory this insn is
+         * in must match the current CPU state, because otherwise
+         * get_phys_addr_pmsav8 would have generated an exception.
+         */
+        if (s->v8m_secure) {
+            /* Like the IT insn, we don't need to generate any code */
+            s->condexec_cond = 0;
+            s->condexec_mask = 0;
+        }
+    } else if (insn & 0x01200000) {
         /* 0b1110_1000_x11x_xxxx_xxxx_xxxx_xxxx_xxxx
          *  - load/store dual (post-indexed)
          * 0b1111_1001_x10x_xxxx_xxxx_xxxx_xxxx_xxxx
--
2.7.4
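
The case analysis in the SG commit message can be summarised as one small
decision function. This is an editorial sketch with invented names, reducing
the CPU state and memory attributes to two booleans; it is not the
translate.c implementation:

#include <stdbool.h>

/* Invented types and names, for illustration only */
typedef struct {
    bool cpu_secure;   /* CPU is currently in Secure state */
    bool mem_secure;   /* the SG insn was fetched from Secure memory */
} SgContext;

/* Returns true if this SG should clear the IT bits; false means it
 * behaves as a plain NOP. The S&NSC-memory/NS-CPU case is handled
 * elsewhere (v7m_handle_execute_nsc()), and a mismatch between CPU
 * state and memory attribute would already have faulted.
 */
static bool sg_clears_it_bits(const SgContext *ctx)
{
    return ctx->mem_secure && ctx->cpu_secure;
}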
Coverity points out that we forgot the 'break' for
the SAU_CTRL write case (CID1381683). This has
no actual visible consequences because it happens
that the following case is effectively a no-op.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1507742676-9908-1-git-send-email-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 hw/intc/armv7m_nvic.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
             return;
         }
         cpu->env.sau.ctrl = value & 3;
+        break;
     case 0xdd4: /* SAU_TYPE */
         if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
             goto bad_offset;
--
2.7.4

In the M-profile architecture, when we do a vector table fetch and it
fails, we need to report a HardFault. Whether this is a Secure HF or
a NonSecure HF depends on several things. If AIRCR.BFHFNMINS is 0
then HF is always Secure, because there is no NonSecure HardFault.
Otherwise, the answer depends on whether the 'underlying exception'
(MemManage, BusFault, SecureFault) targets Secure or NonSecure. (In
the pseudocode, this is handled in the Vector() function: the final
exc.isSecure is calculated by looking at the exc.isSecure from the
exception returned from the memory access, not the isSecure input
argument.)

We weren't doing this correctly, because we were looking at
the target security domain of the exception we were trying to
load the vector table entry for. This produces errors of two kinds:
 * a load from the NS vector table which hits the "NS access
   to S memory" SecureFault should end up as a Secure HardFault,
   but we were raising an NS HardFault
 * a load from the S vector table which causes a BusFault
   should raise an NS HardFault if BFHFNMINS == 1 (because
   in that case all BusFaults are NonSecure), but we were raising
   a Secure HardFault

Correct the logic.

We also fix a comment error where we claimed that we might
be escalating MemManage to HardFault, and forgot about SecureFault.
(Vector loads can never hit MPU access faults, because they're
always aligned and always use the default address map.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190705094823.28905-1-peter.maydell@linaro.org
---
 target/arm/m_helper.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static bool arm_v7m_load_vector(ARMCPU *cpu, int exc, bool targets_secure,
         if (sattrs.ns) {
             attrs.secure = false;
         } else if (!targets_secure) {
-            /* NS access to S memory */
+            /*
+             * NS access to S memory: the underlying exception which we escalate
+             * to HardFault is SecureFault, which always targets Secure.
+             */
+            exc_secure = true;
             goto load_fail;
         }
     }
@@ -XXX,XX +XXX,XX @@ static bool arm_v7m_load_vector(ARMCPU *cpu, int exc, bool targets_secure,
     vector_entry = address_space_ldl(arm_addressspace(cs, attrs), addr,
                                      attrs, &result);
     if (result != MEMTX_OK) {
+        /*
+         * Underlying exception is BusFault: its target security state
+         * depends on BFHFNMINS.
+         */
+        exc_secure = !(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK);
         goto load_fail;
     }
     *pvec = vector_entry;
@@ -XXX,XX +XXX,XX @@ load_fail:
     /*
      * All vector table fetch fails are reported as HardFault, with
      * HFSR.VECTTBL and .FORCED set. (FORCED is set because
-     * technically the underlying exception is a MemManage or BusFault
+     * technically the underlying exception is a SecureFault or BusFault
      * that is escalated to HardFault.) This is a terminal exception,
      * so we will either take the HardFault immediately or else enter
      * lockup (the latter case is handled in armv7m_nvic_set_pending_derived()).
+     * The HardFault is Secure if BFHFNMINS is 0 (meaning that all HFs are
+     * secure); otherwise it targets the same security state as the
+     * underlying exception.
      */
-    exc_secure = targets_secure ||
-        !(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK);
+    if (!(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
+        exc_secure = true;
+    }
     env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK | R_V7M_HFSR_FORCED_MASK;
     armv7m_nvic_set_pending_derived(env->nvic, ARMV7M_EXCP_HARD, exc_secure);
     return false;
--
2.20.1
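
The corrected rule in the patch above fits in one predicate. A minimal
sketch (the function name is invented; booleans stand in for
AIRCR.BFHFNMINS and for the target state of the underlying
SecureFault/BusFault):

#include <stdbool.h>

static bool hardfault_is_secure(bool bfhfnmins, bool underlying_exc_secure)
{
    /* BFHFNMINS == 0: there is no NS HardFault, so the HF is Secure.
     * BFHFNMINS == 1: the HF follows the underlying exception
     * (SecureFault -> Secure, BusFault -> NonSecure).
     */
    return !bfhfnmins || underlying_exc_secure;
}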