Mostly my stuff with a few easy patches from others. I know I have
a few big series in my to-review queue, but I've been too jetlagged
to try to tackle those :-(

thanks
-- PMM

The following changes since commit a26a98dfb9d448d7234d931ae3720feddf6f0651:

  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20210412' into staging (2021-04-12 12:12:09 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210413

for you to fetch changes up to 2d18b4ca023ca1a3aee18064251d6e6e1084f3eb:

  sphinx: qapidoc: Wrap "If" section body in a paragraph node (2021-04-13 10:14:58 +0100)

----------------------------------------------------------------
target-arm queue:
 * Fix MPC setting for AN524 SRAM block
 * sphinx: qapidoc: Wrap "If" section body in a paragraph node

----------------------------------------------------------------
John Snow (1):
      sphinx: qapidoc: Wrap "If" section body in a paragraph node

Peter Maydell (2):
      hw/arm/mps2-tz: Fix MPC setting for AN524 SRAM block
      hw/arm/mps2-tz: Assert if more than one RAM is attached to an MPC

 docs/sphinx/qapidoc.py |  4 +++-
 hw/arm/mps2-tz.c       | 10 +++++++---
 2 files changed, 10 insertions(+), 4 deletions(-)

A few last patches to go in for rc3...

The following changes since commit c1e90def01bdb8fcbdbebd9d1eaa8e4827ece620:

  Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20171006' into staging (2017-10-06 13:19:03 +0100)

are available in the git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20171006

for you to fetch changes up to 04829ce334bece78d4fa1d0fdbc8bc27dae9b242:

  nvic: Add missing code for writing SHCSR.HARDFAULTPENDED bit (2017-10-06 16:46:49 +0100)

----------------------------------------------------------------
target-arm:
 * v8M: more preparatory work
 * nvic: reset properly rather than leaving the nvic in a weird state
 * xlnx-zynqmp: Mark the "xlnx,zynqmp" device with user_creatable = false
 * sd: fix out-of-bounds check for multi block reads
 * arm: Fix SMC reporting to EL2 when QEMU provides PSCI

----------------------------------------------------------------
Jan Kiszka (1):
      arm: Fix SMC reporting to EL2 when QEMU provides PSCI

Michael Olbrich (1):
      hw/sd: fix out-of-bounds check for multi block reads

Peter Maydell (17):
      nvic: Clear the vector arrays and prigroup on reset
      target/arm: Don't switch to target stack early in v7M exception return
      target/arm: Prepare for CONTROL.SPSEL being nonzero in Handler mode
      target/arm: Restore security state on exception return
      target/arm: Restore SPSEL to correct CONTROL register on exception return
      target/arm: Check for xPSR mismatch usage faults earlier for v8M
      target/arm: Warn about restoring to unaligned stack
      target/arm: Don't warn about exception return with PC low bit set for v8M
      target/arm: Add new-in-v8M SFSR and SFAR
      target/arm: Update excret sanity checks for v8M
      target/arm: Add support for restoring v8M additional state context
      target/arm: Add v8M support to exception entry code
      nvic: Implement Security Attribution Unit registers
      target/arm: Implement security attribute lookups for memory accesses
      target/arm: Fix calculation of secure mm_idx values
      target/arm: Factor out "get mmuidx for specified security state"
      nvic: Add missing code for writing SHCSR.HARDFAULTPENDED bit

Thomas Huth (1):
      hw/arm/xlnx-zynqmp: Mark the "xlnx,zynqmp" device with user_creatable = false

 target/arm/cpu.h       |  60 ++++-
 target/arm/internals.h |  15 ++
 hw/arm/xlnx-zynqmp.c   |   2 +
 hw/intc/armv7m_nvic.c  | 158 ++++++++++-
 hw/sd/sd.c             |  12 +-
 target/arm/cpu.c       |  27 ++
 target/arm/helper.c    | 691 +++++++++++++++++++++++++++++++++++++++++++------
 target/arm/machine.c   |  16 ++
 target/arm/op_helper.c |  27 +-
 9 files changed, 898 insertions(+), 110 deletions(-)
From: Jan Kiszka <jan.kiszka@siemens.com>

This properly forwards SMC events to EL2 when PSCI is provided by QEMU
itself and, thus, ARM_FEATURE_EL3 is off.

Found and tested with the Jailhouse hypervisor. Solution based on
suggestions by Peter Maydell.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-id: 4f243068-aaea-776f-d18f-f9e05e7be9cd@siemens.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c    |  9 ++++++++-
 target/arm/op_helper.c | 27 +++++++++++++++++----------
 2 files changed, 25 insertions(+), 11 deletions(-)
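The routing rules this patch implements are easier to see in isolation. A
condensed sketch of the decision, illustrative only (the names are simplified
stand-ins for the real QEMU code in the diff below):

    #include <stdbool.h>

    typedef enum { SMC_UNDEF, SMC_TRAP_TO_EL2, SMC_NORMAL_OR_PSCI } SmcRoute;

    /* Sketch: PSCI-via-SMC is treated as if an EL3 were present, so an
     * EL2 guest can still use HCR.TSC to trap its EL1's SMC calls.
     */
    SmcRoute route_smc(bool have_el3, bool psci_via_smc,
                       bool ns_el1, bool hcr_tsc)
    {
        if (!have_el3 && !psci_via_smc) {
            return SMC_UNDEF;           /* no real or ersatz EL3 */
        }
        if (ns_el1 && hcr_tsc) {
            return SMC_TRAP_TO_EL2;     /* HCR.TSC routing wins */
        }
        return SMC_NORMAL_OR_PSCI;      /* possibly handled as PSCI by QEMU */
    }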
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 
     if (arm_feature(env, ARM_FEATURE_EL3)) {
         valid_mask &= ~HCR_HCD;
-    } else {
+    } else if (cpu->psci_conduit != QEMU_PSCI_CONDUIT_SMC) {
+        /* Architecturally HCR.TSC is RES0 if EL3 is not implemented.
+         * However, if we're using the SMC PSCI conduit then QEMU is
+         * effectively acting like EL3 firmware and so the guest at
+         * EL2 should retain the ability to prevent EL1 from being
+         * able to make SMC calls into the ersatz firmware, so in
+         * that case HCR.TSC should be read/write.
+         */
         valid_mask &= ~HCR_TSC;
     }
 
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(pre_smc)(CPUARMState *env, uint32_t syndrome)
      */
     bool undef = arm_feature(env, ARM_FEATURE_AARCH64) ? smd : smd && !secure;
 
-    if (arm_is_psci_call(cpu, EXCP_SMC)) {
-        /* If PSCI is enabled and this looks like a valid PSCI call then
-         * that overrides the architecturally mandated SMC behaviour.
+    if (!arm_feature(env, ARM_FEATURE_EL3) &&
+        cpu->psci_conduit != QEMU_PSCI_CONDUIT_SMC) {
+        /* If we have no EL3 then SMC always UNDEFs and can't be
+         * trapped to EL2. PSCI-via-SMC is a sort of ersatz EL3
+         * firmware within QEMU, and we want an EL2 guest to be able
+         * to forbid its EL1 from making PSCI calls into QEMU's
+         * "firmware" via HCR.TSC, so for these purposes treat
+         * PSCI-via-SMC as implying an EL3.
          */
-        return;
-    }
-
-    if (!arm_feature(env, ARM_FEATURE_EL3)) {
-        /* If we have no EL3 then SMC always UNDEFs */
         undef = true;
     } else if (!secure && cur_el == 1 && (env->cp15.hcr_el2 & HCR_TSC)) {
-        /* In NS EL1, HCR controlled routing to EL2 has priority over SMD. */
+        /* In NS EL1, HCR controlled routing to EL2 has priority over SMD.
+         * We also want an EL2 guest to be able to forbid its EL1 from
+         * making PSCI calls into QEMU's "firmware" via HCR.TSC.
+         */
         raise_exception(env, EXCP_HYP_TRAP, syndrome, 2);
     }
 
-    if (undef) {
+    /* If PSCI is enabled and this looks like a valid PSCI call then
+     * suppress the UNDEF -- we'll catch the SMC exception and
+     * implement the PSCI call behaviour there.
+     */
+    if (undef && !arm_is_psci_call(cpu, EXCP_SMC)) {
         raise_exception(env, EXCP_UDEF, syn_uncategorized(),
                         exception_target_el(env));
     }
--
2.7.4
Reset for devices does not include an automatic clear of the
device state (unlike CPU state, where most of the state
structure is cleared to zero). Add some missing initialization
of NVIC state that meant that the device was left in the wrong
state if the guest did a warm reset.

(In particular, since we were resetting the computed state like
s->exception_prio but not all the state it was computed
from like s->vectors[x].active, the NVIC wound up in an
inconsistent state that could later trigger assertion failures.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1506092407-26985-2-git-send-email-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)
     int resetprio;
     NVICState *s = NVIC(dev);
 
+    memset(s->vectors, 0, sizeof(s->vectors));
+    memset(s->sec_vectors, 0, sizeof(s->sec_vectors));
+    s->prigroup[M_REG_NS] = 0;
+    s->prigroup[M_REG_S] = 0;
+
     s->vectors[ARMV7M_EXCP_NMI].enabled = 1;
     /* MEM, BUS, and USAGE are enabled through
      * the System Handler Control register
--
2.7.4

The AN524 has three MPCs: one for the BRAM, one for the QSPI flash,
and one for the DDR. We incorrectly set the .mpc field in the
RAMInfo struct for the SRAM block to 1, giving it the same MPC we are
using for the QSPI. The effect of this was that the QSPI didn't get
mapped into the system address space at all, via an MPC or otherwise,
and guest programs which tried to read from the QSPI would get a bus
error. Correct the SRAM RAMInfo to indicate that it does not have an
associated MPC.

Fixes: 25ff112a8cc ("hw/arm/mps2-tz: Add new mps3-an524 board")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210409150527.15053-2-peter.maydell@linaro.org
---
 hw/arm/mps2-tz.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/mps2-tz.c
+++ b/hw/arm/mps2-tz.c
@@ -XXX,XX +XXX,XX @@ static const RAMInfo an524_raminfo[] = { {
         .name = "sram",
         .base = 0x20000000,
         .size = 32 * 4 * KiB,
-        .mpc = 1,
+        .mpc = -1,
         .mrindex = 1,
     }, {
         /* We don't model QSPI flash yet; for now expose it as simple ROM */
--
2.20.1
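The armv7m_nvic_reset() change above is an instance of a general rule: a qdev
reset handler gets an unzeroed device struct, so every runtime-derived field
must be cleared by hand or it survives a warm reset. A minimal sketch of the
pattern, using a hypothetical device rather than QEMU's real types:

    #include <string.h>

    typedef struct {
        int raw[16];   /* guest-visible state */
        int cached;    /* derived from raw[]; must stay consistent with it */
    } DemoDevState;

    static void demo_dev_reset(DemoDevState *s)
    {
        /* A warm reset does not memset the whole struct for us... */
        memset(s->raw, 0, sizeof(s->raw));
        /* ...so derived state must be reset by hand too, or the two
         * fall out of sync -- exactly the bug the patch fixes.
         */
        s->cached = 0;
    }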
On exception return for v8M, the SPSEL bit in the EXC_RETURN magic
value should be restored to the SPSEL bit in the CONTROL register
bank specified by the EXC_RETURN.ES bit.

Add write_v7m_control_spsel_for_secstate() which behaves like
write_v7m_control_spsel() but allows the caller to specify which
CONTROL bank to use, reimplement write_v7m_control_spsel() in
terms of it, and use it in exception return.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-6-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 40 +++++++++++++++++++++++++++-------------
 1 file changed, 27 insertions(+), 13 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool v7m_using_psp(CPUARMState *env)
         env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK;
 }
 
-/* Write to v7M CONTROL.SPSEL bit. This may change the current
- * stack pointer between Main and Process stack pointers.
+/* Write to v7M CONTROL.SPSEL bit for the specified security bank.
+ * This may change the current stack pointer between Main and Process
+ * stack pointers if it is done for the CONTROL register for the current
+ * security state.
  */
-static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel)
+static void write_v7m_control_spsel_for_secstate(CPUARMState *env,
+                                                 bool new_spsel,
+                                                 bool secstate)
 {
-    uint32_t tmp;
-    bool new_is_psp, old_is_psp = v7m_using_psp(env);
+    bool old_is_psp = v7m_using_psp(env);
 
-    env->v7m.control[env->v7m.secure] =
-        deposit32(env->v7m.control[env->v7m.secure],
+    env->v7m.control[secstate] =
+        deposit32(env->v7m.control[secstate],
                   R_V7M_CONTROL_SPSEL_SHIFT,
                   R_V7M_CONTROL_SPSEL_LENGTH, new_spsel);
 
-    new_is_psp = v7m_using_psp(env);
+    if (secstate == env->v7m.secure) {
+        bool new_is_psp = v7m_using_psp(env);
+        uint32_t tmp;
 
-    if (old_is_psp != new_is_psp) {
-        tmp = env->v7m.other_sp;
-        env->v7m.other_sp = env->regs[13];
-        env->regs[13] = tmp;
+        if (old_is_psp != new_is_psp) {
+            tmp = env->v7m.other_sp;
+            env->v7m.other_sp = env->regs[13];
+            env->regs[13] = tmp;
+        }
     }
 }
 
+/* Write to v7M CONTROL.SPSEL bit. This may change the current
+ * stack pointer between Main and Process stack pointers.
+ */
+static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel)
+{
+    write_v7m_control_spsel_for_secstate(env, new_spsel, env->v7m.secure);
+}
+
 void write_v7m_exception(CPUARMState *env, uint32_t new_exc)
 {
     /* Write a new value to v7m.exception, thus transitioning into or out
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
      * Handler mode (and will be until we write the new XPSR.Interrupt
      * field) this does not switch around the current stack pointer.
      */
-    write_v7m_control_spsel(env, return_to_sp_process);
+    write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
 
     switch_v7m_security_state(env, return_to_secure);
 
--
2.7.4

Each board in mps2-tz.c specifies a RAMInfo[] array providing
information about each RAM in the board. The .mpc field of the
RAMInfo struct specifies which MPC, if any, the RAM is attached to.
We already assert if the array doesn't have any entry for an MPC, but
we don't diagnose the error of using the same MPC number twice (which
is quite easy to do by accident if copy-and-pasting structure
entries).

Enhance find_raminfo_for_mpc() so that it detects multiple entries
for the MPC as well as missing entries.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210409150527.15053-3-peter.maydell@linaro.org
---
 hw/arm/mps2-tz.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/mps2-tz.c
+++ b/hw/arm/mps2-tz.c
@@ -XXX,XX +XXX,XX @@ static const RAMInfo *find_raminfo_for_mpc(MPS2TZMachineState *mms, int mpc)
 {
     MPS2TZMachineClass *mmc = MPS2TZ_MACHINE_GET_CLASS(mms);
     const RAMInfo *p;
+    const RAMInfo *found = NULL;
 
     for (p = mmc->raminfo; p->name; p++) {
         if (p->mpc == mpc && !(p->flags & IS_ALIAS)) {
-            return p;
+            /* There should only be one entry in the array for this MPC */
+            g_assert(!found);
+            found = p;
         }
     }
     /* if raminfo array doesn't have an entry for each MPC this is a bug */
-    g_assert_not_reached();
+    assert(found);
+    return found;
 }
 
 static MemoryRegion *mr_for_raminfo(MPS2TZMachineState *mms,
--
2.20.1
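write_v7m_control_spsel_for_secstate() above updates the SPSEL field with
QEMU's deposit32(). For reference, a self-contained equivalent of what
deposit32() computes (deposit32() itself is the real QEMU bitops helper; this
reimplementation is only for illustration, and M-profile CONTROL.SPSEL is
bit 1):

    #include <stdint.h>

    /* Replace 'length' bits of 'value' starting at bit 'start' with the
     * low bits of 'fieldval' -- the same result as QEMU's deposit32().
     */
    static inline uint32_t deposit32_sketch(uint32_t value, int start,
                                            int length, uint32_t fieldval)
    {
        uint32_t mask = (~0U >> (32 - length)) << start;
        return (value & ~mask) | ((fieldval << start) & mask);
    }

    /* e.g. the SPSEL update: deposit32_sketch(control, 1, 1, new_spsel) */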
From: Michael Olbrich <m.olbrich@pengutronix.de>

The current code checks if the next block exceeds the size of the card.
This generates an error while reading the last block of the card.
Do the out-of-bounds check when starting to read a new block to fix this.

This issue became visible with increased error checking in Linux 4.13.

Cc: qemu-stable@nongnu.org
Signed-off-by: Michael Olbrich <m.olbrich@pengutronix.de>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 20170916091611.10241-1-m.olbrich@pengutronix.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/sd/sd.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/sd/sd.c b/hw/sd/sd.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sd/sd.c
+++ b/hw/sd/sd.c
@@ -XXX,XX +XXX,XX @@ uint8_t sd_read_data(SDState *sd)
         break;
 
     case 18:    /* CMD18: READ_MULTIPLE_BLOCK */
-        if (sd->data_offset == 0)
+        if (sd->data_offset == 0) {
+            if (sd->data_start + io_len > sd->size) {
+                sd->card_status |= ADDRESS_ERROR;
+                return 0x00;
+            }
             BLK_READ_BLOCK(sd->data_start, io_len);
+        }
         ret = sd->data[sd->data_offset ++];
 
         if (sd->data_offset >= io_len) {
@@ -XXX,XX +XXX,XX @@ uint8_t sd_read_data(SDState *sd)
                     break;
                 }
             }
-
-            if (sd->data_start + io_len > sd->size) {
-                sd->card_status |= ADDRESS_ERROR;
-                break;
-            }
         }
         break;
--
2.7.4

From: John Snow <jsnow@redhat.com>

These sections need to be wrapped in a block-level element, such as
Paragraph, in order for them to be rendered into Texinfo correctly.

Before (e.g.):

    <section ids="qapidoc-713">
        <title>If</title>
        <literal>defined(CONFIG_REPLICATION)</literal>
    </section>

became:

    .SS If
    \fBdefined(CONFIG_REPLICATION)\fP.SS \fBBlockdevOptionsReplication\fP (Object)
    ...

After:

    <section ids="qapidoc-713">
        <title>If</title>
        <paragraph>
            <literal>defined(CONFIG_REPLICATION)</literal>
        </paragraph>
    </section>

becomes:

    .SS If
    .sp
    \fBdefined(CONFIG_REPLICATION)\fP
    .SS \fBBlockdevOptionsReplication\fP (Object)
    ...

Reported-by: Markus Armbruster <armbru@redhat.com>
Tested-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210406141909.1992225-2-jsnow@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/sphinx/qapidoc.py | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/sphinx/qapidoc.py b/docs/sphinx/qapidoc.py
index XXXXXXX..XXXXXXX 100644
--- a/docs/sphinx/qapidoc.py
+++ b/docs/sphinx/qapidoc.py
@@ -XXX,XX +XXX,XX @@ def _nodes_for_if_section(self, ifcond):
        nodelist = []
        if ifcond:
            snode = self._make_section('If')
-           snode += self._nodes_for_ifcond(ifcond, with_if=False)
+           snode += nodes.paragraph(
+               '', '', *self._nodes_for_ifcond(ifcond, with_if=False)
+           )
            nodelist.append(snode)
        return nodelist
--
2.20.1
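The hw/sd/sd.c fix above is clearest with concrete numbers. A small
standalone check of the arithmetic (hypothetical 8-block card, 512-byte
blocks, not QEMU code):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        const uint32_t io_len = 512;       /* block size */
        const uint32_t size = 8 * 512;     /* card holds blocks 0..7 */
        uint32_t data_start = 7 * 512;     /* reading the final block */

        /* New placement: test before the read -- last block is in bounds */
        assert(data_start + io_len <= size);

        /* Old placement: tested after data_start had already advanced past
         * the block just consumed, so a legitimate read of block 7 tripped
         * the ADDRESS_ERROR check.
         */
        data_start += io_len;
        assert(data_start + io_len > size);
        return 0;
    }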
From: Thomas Huth <thuth@redhat.com>

The device uses serial_hds in its realize function and thus can't be
used twice. Apart from that, the comma in its name makes it quite hard
for the user to use anyway, since a comma is normally used to separate
the device name from its properties when using the "-device" parameter
or the "device_add" HMP command.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 1506441116-16627-1-git-send-email-thuth@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/xlnx-zynqmp.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-zynqmp.c
+++ b/hw/arm/xlnx-zynqmp.c
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_class_init(ObjectClass *oc, void *data)
 
     dc->props = xlnx_zynqmp_props;
     dc->realize = xlnx_zynqmp_realize;
+    /* Reason: Uses serial_hds in realize function, thus can't be used twice */
+    dc->user_creatable = false;
 }
 
 static const TypeInfo xlnx_zynqmp_type_info = {
--
2.7.4
Currently our M profile exception return code switches to the
target stack pointer relatively early in the process, before
it tries to pop the exception frame off the stack. This is
awkward for v8M for two reasons:
 * in v8M the process vs main stack pointer is not selected
   purely by the value of CONTROL.SPSEL, so updating SPSEL
   and relying on that to switch to the right stack pointer
   won't work
 * the stack we should be reading the stack frame from and
   the stack we will eventually switch to might not be the
   same if the guest is doing strange things

Change our exception return code to use a 'frame pointer'
to read the exception frame rather than assuming that we
can switch the live stack pointer this early.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-3-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 130 +++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 98 insertions(+), 32 deletions(-)
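For reference while reading the new pop code below, the basic M-profile
exception frame that frameptr walks over is eight words; this layout is
architectural and matches the ldl_phys() offsets in the diff:

    /* Basic v7M/v8M exception stack frame, lowest address first */
    enum {
        FRAME_R0   = 0x00,
        FRAME_R1   = 0x04,
        FRAME_R2   = 0x08,
        FRAME_R3   = 0x0c,
        FRAME_R12  = 0x10,
        FRAME_LR   = 0x14,   /* r14 */
        FRAME_PC   = 0x18,   /* r15 */
        FRAME_XPSR = 0x1c,
        FRAME_SIZE = 0x20,   /* hence "frameptr += 0x20" to consume it */
    };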
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void v7m_push(CPUARMState *env, uint32_t val)
     stl_phys(cs->as, env->regs[13], val);
 }
 
-static uint32_t v7m_pop(CPUARMState *env)
-{
-    CPUState *cs = CPU(arm_env_get_cpu(env));
-    uint32_t val;
-
-    val = ldl_phys(cs->as, env->regs[13]);
-    env->regs[13] += 4;
-    return val;
-}
-
 /* Return true if we're using the process stack pointer (not the MSP) */
 static bool v7m_using_psp(CPUARMState *env)
 {
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
     env->regs[15] = dest & ~1;
 }
 
+static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode,
+                                bool spsel)
+{
+    /* Return a pointer to the location where we currently store the
+     * stack pointer for the requested security state and thread mode.
+     * This pointer will become invalid if the CPU state is updated
+     * such that the stack pointers are switched around (eg changing
+     * the SPSEL control bit).
+     * Compare the v8M ARM ARM pseudocode LookUpSP_with_security_mode().
+     * Unlike that pseudocode, we require the caller to pass us in the
+     * SPSEL control bit value; this is because we also use this
+     * function in handling of pushing of the callee-saves registers
+     * part of the v8M stack frame (pseudocode PushCalleeStack()),
+     * and in the tailchain codepath the SPSEL bit comes from the exception
+     * return magic LR value from the previous exception. The pseudocode
+     * opencodes the stack-selection in PushCalleeStack(), but we prefer
+     * to make this utility function generic enough to do the job.
+     */
+    bool want_psp = threadmode && spsel;
+
+    if (secure == env->v7m.secure) {
+        /* Currently switch_v7m_sp switches SP as it updates SPSEL,
+         * so the SP we want is always in regs[13].
+         * When we decouple SPSEL from the actually selected SP
+         * we need to check want_psp against v7m_using_psp()
+         * to see whether we need regs[13] or v7m.other_sp.
+         */
+        return &env->regs[13];
+    } else {
+        if (want_psp) {
+            return &env->v7m.other_ss_psp;
+        } else {
+            return &env->v7m.other_ss_msp;
+        }
+    }
+}
+
 static uint32_t arm_v7m_load_vector(ARMCPU *cpu)
 {
     CPUState *cs = CPU(cpu);
@@ -XXX,XX +XXX,XX @@ static void v7m_push_stack(ARMCPU *cpu)
 static void do_v7m_exception_exit(ARMCPU *cpu)
 {
     CPUARMState *env = &cpu->env;
+    CPUState *cs = CPU(cpu);
     uint32_t excret;
     uint32_t xpsr;
     bool ufault = false;
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
     bool return_to_handler = false;
     bool rettobase = false;
     bool exc_secure = false;
+    bool return_to_secure;
 
     /* We can only get here from an EXCP_EXCEPTION_EXIT, and
      * gen_bx_excret() enforces the architectural rule
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         g_assert_not_reached();
     }
 
+    return_to_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
+                       (excret & R_V7M_EXCRET_S_MASK);
+
     switch (excret & 0xf) {
     case 1: /* Return to Handler */
         return_to_handler = true;
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         return;
     }
 
-    /* Switch to the target stack. */
+    /* Set CONTROL.SPSEL from excret.SPSEL. For QEMU this currently
+     * causes us to switch the active SP, but we will change this
+     * later to not do that so we can support v8M.
+     */
     switch_v7m_sp(env, return_to_sp_process);
-    /* Pop registers. */
-    env->regs[0] = v7m_pop(env);
-    env->regs[1] = v7m_pop(env);
-    env->regs[2] = v7m_pop(env);
-    env->regs[3] = v7m_pop(env);
-    env->regs[12] = v7m_pop(env);
-    env->regs[14] = v7m_pop(env);
-    env->regs[15] = v7m_pop(env);
-    if (env->regs[15] & 1) {
-        qemu_log_mask(LOG_GUEST_ERROR,
-                      "M profile return from interrupt with misaligned "
-                      "PC is UNPREDICTABLE\n");
-        /* Actual hardware seems to ignore the lsbit, and there are several
-         * RTOSes out there which incorrectly assume the r15 in the stack
-         * frame should be a Thumb-style "lsbit indicates ARM/Thumb" value.
+
+    {
+        /* The stack pointer we should be reading the exception frame from
+         * depends on bits in the magic exception return type value (and
+         * for v8M isn't necessarily the stack pointer we will eventually
+         * end up resuming execution with). Get a pointer to the location
+         * in the CPU state struct where the SP we need is currently being
+         * stored; we will use and modify it in place.
+         * We use this limited C variable scope so we don't accidentally
+         * use 'frame_sp_p' after we do something that makes it invalid.
+         */
+        uint32_t *frame_sp_p = get_v7m_sp_ptr(env,
+                                              return_to_secure,
+                                              !return_to_handler,
+                                              return_to_sp_process);
+        uint32_t frameptr = *frame_sp_p;
+
+        /* Pop registers. TODO: make these accesses use the correct
+         * attributes and address space (S/NS, priv/unpriv) and handle
+         * memory transaction failures.
          */
-        env->regs[15] &= ~1U;
+        env->regs[0] = ldl_phys(cs->as, frameptr);
+        env->regs[1] = ldl_phys(cs->as, frameptr + 0x4);
+        env->regs[2] = ldl_phys(cs->as, frameptr + 0x8);
+        env->regs[3] = ldl_phys(cs->as, frameptr + 0xc);
+        env->regs[12] = ldl_phys(cs->as, frameptr + 0x10);
+        env->regs[14] = ldl_phys(cs->as, frameptr + 0x14);
+        env->regs[15] = ldl_phys(cs->as, frameptr + 0x18);
+        if (env->regs[15] & 1) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "M profile return from interrupt with misaligned "
+                          "PC is UNPREDICTABLE\n");
+            /* Actual hardware seems to ignore the lsbit, and there are several
+             * RTOSes out there which incorrectly assume the r15 in the stack
+             * frame should be a Thumb-style "lsbit indicates ARM/Thumb" value.
+             */
+            env->regs[15] &= ~1U;
+        }
+        xpsr = ldl_phys(cs->as, frameptr + 0x1c);
+
+        /* Commit to consuming the stack frame */
+        frameptr += 0x20;
+        /* Undo stack alignment (the SPREALIGN bit indicates that the original
+         * pre-exception SP was not 8-aligned and we added a padding word to
+         * align it, so we undo this by ORing in the bit that increases it
+         * from the current 8-aligned value to the 8-unaligned value. (Adding 4
+         * would work too but a logical OR is how the pseudocode specifies it.)
+         */
+        if (xpsr & XPSR_SPREALIGN) {
+            frameptr |= 4;
+        }
+        *frame_sp_p = frameptr;
     }
-    xpsr = v7m_pop(env);
+    /* This xpsr_write() will invalidate frame_sp_p as it may switch stack */
     xpsr_write(env, xpsr, ~XPSR_SPREALIGN);
-    /* Undo stack alignment. */
-    if (xpsr & XPSR_SPREALIGN) {
-        env->regs[13] |= 4;
-    }
 
     /* The restored xPSR exception field will be zero if we're
      * resuming in Thread mode. If that doesn't match what the
--
2.7.4
In the v7M architecture, there is an invariant that if the CPU is
in Handler mode then the CONTROL.SPSEL bit cannot be nonzero.
This in turn means that the current stack pointer is always
indicated by CONTROL.SPSEL, even though Handler mode always uses
the Main stack pointer.

In v8M, this invariant is removed, and CONTROL.SPSEL may now
be nonzero in Handler mode (though Handler mode still always
uses the Main stack pointer). In preparation for this change,
change how we handle this bit: rename switch_v7m_sp() to
the now more accurate write_v7m_control_spsel(), and make it
check both the handler mode state and the SPSEL bit.

Note that this implicitly changes the point at which we switch
active SP on exception exit from before we pop the exception
frame to after it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-4-git-send-email-peter.maydell@linaro.org
---
 target/arm/cpu.h      |  8 ++++++-
 hw/intc/armv7m_nvic.c |  2 +-
 target/arm/helper.c   | 65 ++++++++++++++++++++++++++++++++++-----------------
 3 files changed, 51 insertions(+), 24 deletions(-)
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void pmccntr_sync(CPUARMState *env);
 #define PSTATE_MODE_EL1t 4
 #define PSTATE_MODE_EL0t 0
 
+/* Write a new value to v7m.exception, thus transitioning into or out
+ * of Handler mode; this may result in a change of active stack pointer.
+ */
+void write_v7m_exception(CPUARMState *env, uint32_t new_exc);
+
 /* Map EL and handler into a PSTATE_MODE. */
 static inline unsigned int aarch64_pstate_mode(unsigned int el, bool handler)
 {
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
         env->condexec_bits |= (val >> 8) & 0xfc;
     }
     if (mask & XPSR_EXCP) {
-        env->v7m.exception = val & XPSR_EXCP;
+        /* Note that this only happens on exception exit */
+        write_v7m_exception(env, val & XPSR_EXCP);
     }
 }
 
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_acknowledge_irq(void *opaque)
     vec->active = 1;
     vec->pending = 0;
 
-    env->v7m.exception = s->vectpending;
+    write_v7m_exception(env, s->vectpending);
 
     nvic_irq_update(s);
 
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool v7m_using_psp(CPUARMState *env)
         env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK;
 }
 
-/* Switch to V7M main or process stack pointer. */
-static void switch_v7m_sp(CPUARMState *env, bool new_spsel)
+/* Write to v7M CONTROL.SPSEL bit. This may change the current
+ * stack pointer between Main and Process stack pointers.
+ */
+static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel)
 {
     uint32_t tmp;
-    uint32_t old_control = env->v7m.control[env->v7m.secure];
-    bool old_spsel = old_control & R_V7M_CONTROL_SPSEL_MASK;
+    bool new_is_psp, old_is_psp = v7m_using_psp(env);
+
+    env->v7m.control[env->v7m.secure] =
+        deposit32(env->v7m.control[env->v7m.secure],
+                  R_V7M_CONTROL_SPSEL_SHIFT,
+                  R_V7M_CONTROL_SPSEL_LENGTH, new_spsel);
+
+    new_is_psp = v7m_using_psp(env);
 
-    if (old_spsel != new_spsel) {
+    if (old_is_psp != new_is_psp) {
         tmp = env->v7m.other_sp;
         env->v7m.other_sp = env->regs[13];
         env->regs[13] = tmp;
+    }
+}
+
+void write_v7m_exception(CPUARMState *env, uint32_t new_exc)
+{
+    /* Write a new value to v7m.exception, thus transitioning into or out
+     * of Handler mode; this may result in a change of active stack pointer.
+     */
+    bool new_is_psp, old_is_psp = v7m_using_psp(env);
+    uint32_t tmp;
 
-        env->v7m.control[env->v7m.secure] = deposit32(old_control,
-                                        R_V7M_CONTROL_SPSEL_SHIFT,
-                                        R_V7M_CONTROL_SPSEL_LENGTH, new_spsel);
+    env->v7m.exception = new_exc;
+
+    new_is_psp = v7m_using_psp(env);
+
+    if (old_is_psp != new_is_psp) {
+        tmp = env->v7m.other_sp;
+        env->v7m.other_sp = env->regs[13];
+        env->regs[13] = tmp;
     }
 }
 
@@ -XXX,XX +XXX,XX @@ static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode,
     bool want_psp = threadmode && spsel;
 
     if (secure == env->v7m.secure) {
-        /* Currently switch_v7m_sp switches SP as it updates SPSEL,
-         * so the SP we want is always in regs[13].
-         * When we decouple SPSEL from the actually selected SP
-         * we need to check want_psp against v7m_using_psp()
-         * to see whether we need regs[13] or v7m.other_sp.
-         */
-        return &env->regs[13];
+        if (want_psp == v7m_using_psp(env)) {
+            return &env->regs[13];
+        } else {
+            return &env->v7m.other_sp;
+        }
     } else {
         if (want_psp) {
             return &env->v7m.other_ss_psp;
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr)
     uint32_t addr;
 
     armv7m_nvic_acknowledge_irq(env->nvic);
-    switch_v7m_sp(env, 0);
+    write_v7m_control_spsel(env, 0);
     arm_clear_exclusive(env);
     /* Clear IT bits */
     env->condexec_bits = 0;
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         return;
     }
 
-    /* Set CONTROL.SPSEL from excret.SPSEL. For QEMU this currently
-     * causes us to switch the active SP, but we will change this
-     * later to not do that so we can support v8M.
+    /* Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
+     * Handler mode (and will be until we write the new XPSR.Interrupt
+     * field) this does not switch around the current stack pointer.
      */
-    switch_v7m_sp(env, return_to_sp_process);
+    write_v7m_control_spsel(env, return_to_sp_process);
 
     {
         /* The stack pointer we should be reading the exception frame from
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
     case 20: /* CONTROL */
         /* Writing to the SPSEL bit only has an effect if we are in
          * thread mode; other bits can be updated by any privileged code.
-         * switch_v7m_sp() deals with updating the SPSEL bit in
+         * write_v7m_control_spsel() deals with updating the SPSEL bit in
          * env->v7m.control, so we only need update the others.
          */
         if (!arm_v7m_is_handler_mode(env)) {
-            switch_v7m_sp(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
+            write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
         }
         env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
         env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
--
2.7.4
Now that we can handle the CONTROL.SPSEL bit not necessarily being
in sync with the current stack pointer, we can restore the correct
security state on exception return. This happens before we start
to read registers off the stack frame, but after we have taken
possible usage faults for bad exception return magic values and
updated CONTROL.SPSEL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-5-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
      */
     write_v7m_control_spsel(env, return_to_sp_process);
 
+    switch_v7m_security_state(env, return_to_secure);
+
     {
         /* The stack pointer we should be reading the exception frame from
          * depends on bits in the magic exception return type value (and
--
2.7.4
ARM v8M specifies that the INVPC usage fault for mismatched
xPSR exception field and handler mode bit should be checked
before updating the PSR and SP, so that the fault is taken
with the existing stack frame rather than by pushing a new one.
Perform this check in the right place for v8M.

Since v7M specifies in its pseudocode that this usage fault
check should happen later, we have to retain the original
code for that check rather than being able to merge the two.
(The distinction is architecturally visible but only in
very obscure corner cases like attempting an invalid exception
return with an exception frame in read only memory.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-7-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 30 +++++++++++++++++++++++++++---
 1 file changed, 27 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         }
         xpsr = ldl_phys(cs->as, frameptr + 0x1c);
 
+        if (arm_feature(env, ARM_FEATURE_V8)) {
+            /* For v8M we have to check whether the xPSR exception field
+             * matches the EXCRET value for return to handler/thread
+             * before we commit to changing the SP and xPSR.
+             */
+            bool will_be_handler = (xpsr & XPSR_EXCP) != 0;
+            if (return_to_handler != will_be_handler) {
+                /* Take an INVPC UsageFault on the current stack.
+                 * By this point we will have switched to the security state
+                 * for the background state, so this UsageFault will target
+                 * that state.
+                 */
+                armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+                                        env->v7m.secure);
+                env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
+                v7m_exception_taken(cpu, excret);
+                qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
+                              "stackframe: failed exception return integrity "
+                              "check\n");
+                return;
+            }
+        }
+
         /* Commit to consuming the stack frame */
         frameptr += 0x20;
         /* Undo stack alignment (the SPREALIGN bit indicates that the original
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
     /* The restored xPSR exception field will be zero if we're
      * resuming in Thread mode. If that doesn't match what the
      * exception return excret specified then this is a UsageFault.
+     * v7M requires we make this check here; v8M did it earlier.
      */
     if (return_to_handler != arm_v7m_is_handler_mode(env)) {
-        /* Take an INVPC UsageFault by pushing the stack again.
-         * TODO: the v8M version of this code should target the
-         * background state for this exception.
+        /* Take an INVPC UsageFault by pushing the stack again;
+         * we know we're v7M so this is never a Secure UsageFault.
          */
+        assert(!arm_feature(env, ARM_FEATURE_V8));
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
         env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
         v7m_push_stack(cpu);
--
2.7.4
Attempting to do an exception return with an exception frame that
is not 8-aligned is UNPREDICTABLE in v8M; warn about this.
(It is not UNPREDICTABLE in v7M, and our implementation can
handle the merely-4-aligned case fine, so we don't need to
do anything except warn.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-8-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 7 +++++++
 1 file changed, 7 insertions(+)
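QEMU_IS_ALIGNED() in the hunk below is a simple alignment test; an expression
equivalent to it for power-of-two alignments, for reference:

    #include <stdbool.h>
    #include <stdint.h>

    /* Equivalent of QEMU_IS_ALIGNED(frameptr, 8) when n is a power of two */
    static inline bool is_aligned(uint32_t value, uint32_t n)
    {
        return (value & (n - 1)) == 0;
    }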
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
                                               return_to_sp_process);
         uint32_t frameptr = *frame_sp_p;
 
+        if (!QEMU_IS_ALIGNED(frameptr, 8) &&
+            arm_feature(env, ARM_FEATURE_V8)) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "M profile exception return with non-8-aligned SP "
+                          "for destination state is UNPREDICTABLE\n");
+        }
+
         /* Pop registers. TODO: make these accesses use the correct
          * attributes and address space (S/NS, priv/unpriv) and handle
          * memory transaction failures.
--
2.7.4
In the v8M architecture, return from an exception to a PC which
has bit 0 set is not UNPREDICTABLE; it is defined that bit 0
is discarded [R_HRJH]. Restrict our complaint about this to v7M.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-9-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         env->regs[12] = ldl_phys(cs->as, frameptr + 0x10);
         env->regs[14] = ldl_phys(cs->as, frameptr + 0x14);
         env->regs[15] = ldl_phys(cs->as, frameptr + 0x18);
+
+        /* Returning from an exception with a PC with bit 0 set is defined
+         * behaviour on v8M (bit 0 is ignored), but for v7M it was specified
+         * to be UNPREDICTABLE. In practice actual v7M hardware seems to ignore
+         * the lsbit, and there are several RTOSes out there which incorrectly
+         * assume the r15 in the stack frame should be a Thumb-style "lsbit
+         * indicates ARM/Thumb" value, so ignore the bit on v7M as well, but
+         * complain about the badly behaved guest.
+         */
         if (env->regs[15] & 1) {
-            qemu_log_mask(LOG_GUEST_ERROR,
-                          "M profile return from interrupt with misaligned "
-                          "PC is UNPREDICTABLE\n");
-            /* Actual hardware seems to ignore the lsbit, and there are several
-             * RTOSes out there which incorrectly assume the r15 in the stack
-             * frame should be a Thumb-style "lsbit indicates ARM/Thumb" value.
-             */
             env->regs[15] &= ~1U;
+            if (!arm_feature(env, ARM_FEATURE_V8)) {
+                qemu_log_mask(LOG_GUEST_ERROR,
+                              "M profile return from interrupt with misaligned "
+                              "PC is UNPREDICTABLE on v7M\n");
+            }
         }
+
         xpsr = ldl_phys(cs->as, frameptr + 0x1c);
 
         if (arm_feature(env, ARM_FEATURE_V8)) {
--
2.7.4
Add the new M profile Secure Fault Status Register
and Secure Fault Address Register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-10-git-send-email-peter.maydell@linaro.org
---
 target/arm/cpu.h      | 12 ++++++++++++
 hw/intc/armv7m_nvic.c | 34 ++++++++++++++++++++++++++++++++++
 target/arm/machine.c  |  2 ++
 3 files changed, 48 insertions(+)
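The SFSR write below uses the common write-one-to-clear (W1C) register idiom:
the guest acknowledges a fault bit by writing a 1 to it. A two-line
demonstration of the update from the patch:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t sfsr = (1u << 0) | (1u << 2);  /* INVEP and INVER pending */
        uint32_t guest_write = 1u << 0;         /* acknowledge INVEP only */

        sfsr &= ~guest_write;                   /* the W1C update */
        assert(sfsr == (1u << 2));              /* INVER still pending */
        return 0;
    }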
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
         uint32_t cfsr[M_REG_NUM_BANKS]; /* Configurable Fault Status */
         uint32_t hfsr; /* HardFault Status */
         uint32_t dfsr; /* Debug Fault Status Register */
+        uint32_t sfsr; /* Secure Fault Status Register */
         uint32_t mmfar[M_REG_NUM_BANKS]; /* MemManage Fault Address */
         uint32_t bfar; /* BusFault Address */
+        uint32_t sfar; /* Secure Fault Address Register */
         unsigned mpu_ctrl[M_REG_NUM_BANKS]; /* MPU_CTRL */
         int exception;
         uint32_t primask[M_REG_NUM_BANKS];
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_DFSR, DWTTRAP, 2, 1)
 FIELD(V7M_DFSR, VCATCH, 3, 1)
 FIELD(V7M_DFSR, EXTERNAL, 4, 1)
 
+/* V7M SFSR bits */
+FIELD(V7M_SFSR, INVEP, 0, 1)
+FIELD(V7M_SFSR, INVIS, 1, 1)
+FIELD(V7M_SFSR, INVER, 2, 1)
+FIELD(V7M_SFSR, AUVIOL, 3, 1)
+FIELD(V7M_SFSR, INVTRAN, 4, 1)
+FIELD(V7M_SFSR, LSPERR, 5, 1)
+FIELD(V7M_SFSR, SFARVALID, 6, 1)
+FIELD(V7M_SFSR, LSERR, 7, 1)
+
 /* v7M MPU_CTRL bits */
 FIELD(V7M_MPU_CTRL, ENABLE, 0, 1)
 FIELD(V7M_MPU_CTRL, HFNMIENA, 1, 1)
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
             goto bad_offset;
         }
         return cpu->env.pmsav8.mair1[attrs.secure];
+    case 0xde4: /* SFSR */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return 0;
+        }
+        return cpu->env.v7m.sfsr;
+    case 0xde8: /* SFAR */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return 0;
+        }
+        return cpu->env.v7m.sfar;
     default:
     bad_offset:
         qemu_log_mask(LOG_GUEST_ERROR, "NVIC: Bad read offset 0x%x\n", offset);
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
          * only affect cacheability, and we don't implement caching.
          */
         break;
+    case 0xde4: /* SFSR */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return;
+        }
+        cpu->env.v7m.sfsr &= ~value; /* W1C */
+        break;
+    case 0xde8: /* SFAR */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return;
+        }
+        cpu->env.v7m.sfar = value;
+        break;
     case 0xf00: /* Software Triggered Interrupt Register */
     {
         int excnum = (value & 0x1ff) + NVIC_FIRST_IRQ;
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_security = {
         VMSTATE_UINT32(env.v7m.ccr[M_REG_S], ARMCPU),
         VMSTATE_UINT32(env.v7m.mmfar[M_REG_S], ARMCPU),
         VMSTATE_UINT32(env.v7m.cfsr[M_REG_S], ARMCPU),
+        VMSTATE_UINT32(env.v7m.sfsr, ARMCPU),
+        VMSTATE_UINT32(env.v7m.sfar, ARMCPU),
         VMSTATE_END_OF_LIST()
     }
 };
--
2.7.4
In v8M, more bits are defined in the exception-return magic
values; update the code that checks these so we accept
the v8M values when the CPU permits them.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-11-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 73 ++++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 58 insertions(+), 15 deletions(-)
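For reference while reading the new checks, the v8M EXC_RETURN bit
assignments being validated below (architectural layout; the code uses the
corresponding R_V7M_EXCRET_* masks):

    /* v8M EXC_RETURN ("excret") low bits; bits [31:24] must be 0xff */
    enum {
        EXCRET_ES    = 1 << 0,  /* exception is taken to Secure state */
        EXCRET_SPSEL = 1 << 2,  /* return stack: 0 = MSP, 1 = PSP */
        EXCRET_MODE  = 1 << 3,  /* 0 = return to Handler, 1 = to Thread */
        EXCRET_FTYPE = 1 << 4,  /* 0 = extended (FP) frame was stacked */
        EXCRET_DCRS  = 1 << 5,  /* 0 = callee-saves already on the stack */
        EXCRET_S     = 1 << 6,  /* background (returning-to) state is Secure */
    };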
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
     uint32_t excret;
     uint32_t xpsr;
     bool ufault = false;
-    bool return_to_sp_process = false;
-    bool return_to_handler = false;
+    bool sfault = false;
+    bool return_to_sp_process;
+    bool return_to_handler;
     bool rettobase = false;
     bool exc_secure = false;
     bool return_to_secure;
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
                       excret);
     }
 
+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        /* EXC_RETURN.ES validation check (R_SMFL). We must do this before
+         * we pick which FAULTMASK to clear.
+         */
+        if (!env->v7m.secure &&
+            ((excret & R_V7M_EXCRET_ES_MASK) ||
+             !(excret & R_V7M_EXCRET_DCRS_MASK))) {
+            sfault = 1;
+            /* For all other purposes, treat ES as 0 (R_HXSR) */
+            excret &= ~R_V7M_EXCRET_ES_MASK;
+        }
+    }
+
     if (env->v7m.exception != ARMV7M_EXCP_NMI) {
         /* Auto-clear FAULTMASK on return from other than NMI.
          * If the security extension is implemented then this only
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         g_assert_not_reached();
     }
 
+    return_to_handler = !(excret & R_V7M_EXCRET_MODE_MASK);
+    return_to_sp_process = excret & R_V7M_EXCRET_SPSEL_MASK;
     return_to_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
                        (excret & R_V7M_EXCRET_S_MASK);
 
-    switch (excret & 0xf) {
-    case 1: /* Return to Handler */
-        return_to_handler = true;
-        break;
-    case 13: /* Return to Thread using Process stack */
-        return_to_sp_process = true;
-        /* fall through */
-    case 9: /* Return to Thread using Main stack */
-        if (!rettobase &&
-            !(env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_NONBASETHRDENA_MASK)) {
+    if (arm_feature(env, ARM_FEATURE_V8)) {
+        if (!arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            /* UNPREDICTABLE if S == 1 or DCRS == 0 or ES == 1 (R_XLCP);
+             * we choose to take the UsageFault.
+             */
+            if ((excret & R_V7M_EXCRET_S_MASK) ||
+                (excret & R_V7M_EXCRET_ES_MASK) ||
+                !(excret & R_V7M_EXCRET_DCRS_MASK)) {
+                ufault = true;
+            }
+        }
+        if (excret & R_V7M_EXCRET_RES0_MASK) {
             ufault = true;
         }
-        break;
-    default:
-        ufault = true;
+    } else {
+        /* For v7M we only recognize certain combinations of the low bits */
+        switch (excret & 0xf) {
+        case 1: /* Return to Handler */
+            break;
+        case 13: /* Return to Thread using Process stack */
+        case 9: /* Return to Thread using Main stack */
+            /* We only need to check NONBASETHRDENA for v7M, because in
+             * v8M this bit does not exist (it is RES1).
+             */
+            if (!rettobase &&
+                !(env->v7m.ccr[env->v7m.secure] &
+                  R_V7M_CCR_NONBASETHRDENA_MASK)) {
+                ufault = true;
+            }
+            break;
+        default:
+            ufault = true;
+        }
+    }
+
+    if (sfault) {
+        env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+        v7m_exception_taken(cpu, excret);
+        qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
+                      "stackframe: failed EXC_RETURN.ES validity check\n");
+        return;
     }
 
     if (ufault) {
--
2.7.4
For v8M, exceptions from Secure to Non-Secure state will save
callee-saved registers to the exception frame as well as the
caller-saved registers. Add support for unstacking these
registers in exception exit when necessary.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-12-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
16
+++ b/target/arm/helper.c
17
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
18
"for destination state is UNPREDICTABLE\n");
19
}
20
21
+ /* Do we need to pop callee-saved registers? */
22
+ if (return_to_secure &&
23
+ ((excret & R_V7M_EXCRET_ES_MASK) == 0 ||
24
+ (excret & R_V7M_EXCRET_DCRS_MASK) == 0)) {
25
+ uint32_t expected_sig = 0xfefa125b;
26
+ uint32_t actual_sig = ldl_phys(cs->as, frameptr);
27
+
28
+ if (expected_sig != actual_sig) {
29
+ /* Take a SecureFault on the current stack */
30
+ env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
31
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
32
+ v7m_exception_taken(cpu, excret);
33
+ qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
34
+ "stackframe: failed exception return integrity "
35
+ "signature check\n");
36
+ return;
37
+ }
38
+
39
+ env->regs[4] = ldl_phys(cs->as, frameptr + 0x8);
40
+ env->regs[5] = ldl_phys(cs->as, frameptr + 0xc);
41
+ env->regs[6] = ldl_phys(cs->as, frameptr + 0x10);
42
+ env->regs[7] = ldl_phys(cs->as, frameptr + 0x14);
43
+ env->regs[8] = ldl_phys(cs->as, frameptr + 0x18);
44
+ env->regs[9] = ldl_phys(cs->as, frameptr + 0x1c);
45
+ env->regs[10] = ldl_phys(cs->as, frameptr + 0x20);
46
+ env->regs[11] = ldl_phys(cs->as, frameptr + 0x24);
47
+
48
+ frameptr += 0x28;
49
+ }
50
+
51
/* Pop registers. TODO: make these accesses use the correct
52
* attributes and address space (S/NS, priv/unpriv) and handle
53
* memory transaction failures.
54
--
55
2.7.4
56
57
diff view generated by jsdifflib
Deleted patch
1
Add support for v8M and in particular the security extension
2
to the exception entry code. This requires changes to:
3
* calculation of the exception-return magic LR value
4
* push the callee-saves registers in certain cases
5
* clear registers when taking non-secure exceptions to avoid
6
leaking information from the interrupted secure code
7
* switch to the correct security state on entry
8
* use the vector table for the security state we're targeting
9
1
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 1506092407-26985-13-git-send-email-peter.maydell@linaro.org
13
---
14
target/arm/helper.c | 165 +++++++++++++++++++++++++++++++++++++++++++++-------
15
1 file changed, 145 insertions(+), 20 deletions(-)
16
17
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode,
     }
 }

-static uint32_t arm_v7m_load_vector(ARMCPU *cpu)
+static uint32_t arm_v7m_load_vector(ARMCPU *cpu, bool targets_secure)
 {
     CPUState *cs = CPU(cpu);
     CPUARMState *env = &cpu->env;
     MemTxResult result;
-    hwaddr vec = env->v7m.vecbase[env->v7m.secure] + env->v7m.exception * 4;
+    hwaddr vec = env->v7m.vecbase[targets_secure] + env->v7m.exception * 4;
     uint32_t addr;

     addr = address_space_ldl(cs->as, vec,
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_v7m_load_vector(ARMCPU *cpu)
          * Since we don't model Lockup, we just report this guest error
          * via cpu_abort().
          */
-        cpu_abort(cs, "Failed to read from exception vector table "
-                  "entry %08x\n", (unsigned)vec);
+        cpu_abort(cs, "Failed to read from %s exception vector table "
+                  "entry %08x\n", targets_secure ? "secure" : "nonsecure",
+                  (unsigned)vec);
     }
     return addr;
 }

-static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr)
+static void v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain)
+{
+    /* For v8M, push the callee-saves register part of the stack frame.
+     * Compare the v8M pseudocode PushCalleeStack().
+     * In the tailchaining case this may not be the current stack.
+     */
+    CPUARMState *env = &cpu->env;
+    CPUState *cs = CPU(cpu);
+    uint32_t *frame_sp_p;
+    uint32_t frameptr;
+
+    if (dotailchain) {
+        frame_sp_p = get_v7m_sp_ptr(env, true,
+                                    lr & R_V7M_EXCRET_MODE_MASK,
+                                    lr & R_V7M_EXCRET_SPSEL_MASK);
+    } else {
+        frame_sp_p = &env->regs[13];
+    }
+
+    frameptr = *frame_sp_p - 0x28;
+
+    stl_phys(cs->as, frameptr, 0xfefa125b);
+    stl_phys(cs->as, frameptr + 0x8, env->regs[4]);
+    stl_phys(cs->as, frameptr + 0xc, env->regs[5]);
+    stl_phys(cs->as, frameptr + 0x10, env->regs[6]);
+    stl_phys(cs->as, frameptr + 0x14, env->regs[7]);
+    stl_phys(cs->as, frameptr + 0x18, env->regs[8]);
+    stl_phys(cs->as, frameptr + 0x1c, env->regs[9]);
+    stl_phys(cs->as, frameptr + 0x20, env->regs[10]);
+    stl_phys(cs->as, frameptr + 0x24, env->regs[11]);
+
+    *frame_sp_p = frameptr;
+}
+
+static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain)
 {
     /* Do the "take the exception" parts of exception entry,
      * but not the pushing of state to the stack. This is
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr)
      */
     CPUARMState *env = &cpu->env;
     uint32_t addr;
+    bool targets_secure;
+
+    targets_secure = armv7m_nvic_acknowledge_irq(env->nvic);

-    armv7m_nvic_acknowledge_irq(env->nvic);
+    if (arm_feature(env, ARM_FEATURE_V8)) {
+        if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
+            (lr & R_V7M_EXCRET_S_MASK)) {
+            /* The background code (the owner of the registers in the
+             * exception frame) is Secure. This means it may either already
+             * have or now needs to push callee-saves registers.
+             */
+            if (targets_secure) {
+                if (dotailchain && !(lr & R_V7M_EXCRET_ES_MASK)) {
+                    /* We took an exception from Secure to NonSecure
+                     * (which means the callee-saved registers got stacked)
+                     * and are now tailchaining to a Secure exception.
+                     * Clear DCRS so eventual return from this Secure
+                     * exception unstacks the callee-saved registers.
+                     */
+                    lr &= ~R_V7M_EXCRET_DCRS_MASK;
+                }
+            } else {
+                /* We're going to a non-secure exception; push the
+                 * callee-saves registers to the stack now, if they're
+                 * not already saved.
+                 */
+                if (lr & R_V7M_EXCRET_DCRS_MASK &&
+                    !(dotailchain && (lr & R_V7M_EXCRET_ES_MASK))) {
+                    v7m_push_callee_stack(cpu, lr, dotailchain);
+                }
+                lr |= R_V7M_EXCRET_DCRS_MASK;
+            }
+        }
+
+        lr &= ~R_V7M_EXCRET_ES_MASK;
+        if (targets_secure || !arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            lr |= R_V7M_EXCRET_ES_MASK;
+        }
+        lr &= ~R_V7M_EXCRET_SPSEL_MASK;
+        if (env->v7m.control[targets_secure] & R_V7M_CONTROL_SPSEL_MASK) {
+            lr |= R_V7M_EXCRET_SPSEL_MASK;
+        }
+
+        /* Clear registers if necessary to prevent non-secure exception
+         * code being able to see register values from secure code.
+         * Where register values become architecturally UNKNOWN we leave
+         * them with their previous values.
+         */
+        if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            if (!targets_secure) {
+                /* Always clear the caller-saved registers (they have been
+                 * pushed to the stack earlier in v7m_push_stack()).
+                 * Clear callee-saved registers if the background code is
+                 * Secure (in which case these regs were saved in
+                 * v7m_push_callee_stack()).
+                 */
+                int i;
+
+                for (i = 0; i < 13; i++) {
+                    /* r4..r11 are callee-saves, zero only if EXCRET.S == 1 */
+                    if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) {
+                        env->regs[i] = 0;
+                    }
+                }
+                /* Clear EAPSR */
+                xpsr_write(env, 0, XPSR_NZCV | XPSR_Q | XPSR_GE | XPSR_IT);
+            }
+        }
+    }
+
+    /* Switch to target security state -- must do this before writing SPSEL */
+    switch_v7m_security_state(env, targets_secure);
     write_v7m_control_spsel(env, 0);
     arm_clear_exclusive(env);
     /* Clear IT bits */
     env->condexec_bits = 0;
     env->regs[14] = lr;
-    addr = arm_v7m_load_vector(cpu);
+    addr = arm_v7m_load_vector(cpu, targets_secure);
     env->regs[15] = addr & 0xfffffffe;
     env->thumb = addr & 1;
 }
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
     if (sfault) {
         env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
-        v7m_exception_taken(cpu, excret);
+        v7m_exception_taken(cpu, excret, true);
         qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
                       "stackframe: failed EXC_RETURN.ES validity check\n");
         return;
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
          */
         env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
-        v7m_exception_taken(cpu, excret);
+        v7m_exception_taken(cpu, excret, true);
         qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
                       "stackframe: failed exception return integrity check\n");
         return;
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
             /* Take a SecureFault on the current stack */
             env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
             armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
-            v7m_exception_taken(cpu, excret);
+            v7m_exception_taken(cpu, excret, true);
             qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
                           "stackframe: failed exception return integrity "
                           "signature check\n");
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
                                 env->v7m.secure);
         env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
-        v7m_exception_taken(cpu, excret);
+        v7m_exception_taken(cpu, excret, true);
         qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
                       "stackframe: failed exception return integrity "
                       "check\n");
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
         env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
         v7m_push_stack(cpu);
-        v7m_exception_taken(cpu, excret);
+        v7m_exception_taken(cpu, excret, false);
         qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: "
                       "failed exception return integrity check\n");
         return;
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
         return; /* Never happens. Keep compiler happy. */
     }

-    lr = R_V7M_EXCRET_RES1_MASK |
-        R_V7M_EXCRET_S_MASK |
-        R_V7M_EXCRET_DCRS_MASK |
-        R_V7M_EXCRET_FTYPE_MASK |
-        R_V7M_EXCRET_ES_MASK;
-    if (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) {
-        lr |= R_V7M_EXCRET_SPSEL_MASK;
+    if (arm_feature(env, ARM_FEATURE_V8)) {
+        lr = R_V7M_EXCRET_RES1_MASK |
+            R_V7M_EXCRET_DCRS_MASK |
+            R_V7M_EXCRET_FTYPE_MASK;
+        /* The S bit indicates whether we should return to Secure
+         * or NonSecure (ie our current state).
+         * The ES bit indicates whether we're taking this exception
+         * to Secure or NonSecure (ie our target state). We set it
+         * later, in v7m_exception_taken().
+         * The SPSEL bit is also set in v7m_exception_taken() for v8M.
+         * This corresponds to the ARM ARM pseudocode for v8M setting
+         * some LR bits in PushStack() and some in ExceptionTaken();
+         * the distinction matters for the tailchain cases where we
+         * can take an exception without pushing the stack.
+         */
+        if (env->v7m.secure) {
+            lr |= R_V7M_EXCRET_S_MASK;
+        }
+    } else {
+        lr = R_V7M_EXCRET_RES1_MASK |
+            R_V7M_EXCRET_S_MASK |
+            R_V7M_EXCRET_DCRS_MASK |
+            R_V7M_EXCRET_FTYPE_MASK |
+            R_V7M_EXCRET_ES_MASK;
+        if (env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK) {
+            lr |= R_V7M_EXCRET_SPSEL_MASK;
+        }
     }
     if (!arm_v7m_is_handler_mode(env)) {
         lr |= R_V7M_EXCRET_MODE_MASK;
     }

     v7m_push_stack(cpu);
-    v7m_exception_taken(cpu, lr);
+    v7m_exception_taken(cpu, lr, false);
     qemu_log_mask(CPU_LOG_INT, "... as %d\n", env->v7m.exception);
 }

--
2.7.4
Implement the register interface for the SAU: SAU_CTRL,
SAU_TYPE, SAU_RNR, SAU_RBAR and SAU_RLAR. None of the
actual behaviour is implemented here; registers just
read back as written.

When the CPU definition for Cortex-M33 is eventually
added, its initfn will set cpu->sau_sregion, in the same
way that we currently set cpu->pmsav7_dregion for the
M3 and M4.

The number of SAU regions is typically a configurable
CPU parameter, but this patch doesn't provide a QEMU
CPU property for it. We can easily add one when we
have a board that requires it.
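As a quick illustration of the interface (a hypothetical bare-metal
snippet, not part of the patch; the 0xe000eddX addresses are the
architectural SCS locations corresponding to the 0xdd0-0xde0 offsets
handled in nvic_writel() below, and writes only take effect from the
Secure state):

    #include <stdint.h>

    /* Hypothetical Secure-world code: mark region 0 as NonSecure */
    #define SAU_CTRL (*(volatile uint32_t *)0xe000edd0)
    #define SAU_RNR  (*(volatile uint32_t *)0xe000edd8)
    #define SAU_RBAR (*(volatile uint32_t *)0xe000eddc)
    #define SAU_RLAR (*(volatile uint32_t *)0xe000ede0)

    SAU_RNR  = 0;               /* select region 0 */
    SAU_RBAR = 0x20000000;      /* base address, 32-byte aligned */
    SAU_RLAR = 0x2000ffe0 | 1;  /* limit | ENABLE (bit 1 would mark NSC) */
    SAU_CTRL = 1;               /* SAU.ENABLE */

With only this patch applied these values simply read back as written;
the lookup logic that gives them effect is added by the next patch in
the series.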
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-14-git-send-email-peter.maydell@linaro.org
---
 target/arm/cpu.h      |  10 +++++
 hw/intc/armv7m_nvic.c | 116 ++++++++++++++++++++++++++++++++++++++++++++++++++
 target/arm/cpu.c      |  27 ++++++++++++
 target/arm/machine.c  |  14 ++++++
 4 files changed, 167 insertions(+)
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
         uint32_t mair1[M_REG_NUM_BANKS];
     } pmsav8;

+    /* v8M SAU */
+    struct {
+        uint32_t *rbar;
+        uint32_t *rlar;
+        uint32_t rnr;
+        uint32_t ctrl;
+    } sau;
+
     void *nvic;
     const struct arm_boot_info *boot_info;
     /* Store GICv3CPUState to access from this struct */
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
     bool has_mpu;
     /* PMSAv7 MPU number of supported regions */
     uint32_t pmsav7_dregion;
+    /* v8M SAU number of supported regions */
+    uint32_t sau_sregion;

     /* PSCI conduit used to invoke PSCI methods
      * 0 - disabled, 1 - smc, 2 - hvc
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
             goto bad_offset;
         }
         return cpu->env.pmsav8.mair1[attrs.secure];
+    case 0xdd0: /* SAU_CTRL */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return 0;
+        }
+        return cpu->env.sau.ctrl;
+    case 0xdd4: /* SAU_TYPE */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return 0;
+        }
+        return cpu->sau_sregion;
+    case 0xdd8: /* SAU_RNR */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return 0;
+        }
+        return cpu->env.sau.rnr;
+    case 0xddc: /* SAU_RBAR */
+    {
+        int region = cpu->env.sau.rnr;
+
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return 0;
+        }
+        if (region >= cpu->sau_sregion) {
+            return 0;
+        }
+        return cpu->env.sau.rbar[region];
+    }
+    case 0xde0: /* SAU_RLAR */
+    {
+        int region = cpu->env.sau.rnr;
+
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return 0;
+        }
+        if (region >= cpu->sau_sregion) {
+            return 0;
+        }
+        return cpu->env.sau.rlar[region];
+    }
     case 0xde4: /* SFSR */
         if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
             goto bad_offset;
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
          * only affect cacheability, and we don't implement caching.
          */
        break;
+    case 0xdd0: /* SAU_CTRL */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return;
+        }
+        cpu->env.sau.ctrl = value & 3;
+        break;
+    case 0xdd4: /* SAU_TYPE */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        break;
+    case 0xdd8: /* SAU_RNR */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return;
+        }
+        if (value >= cpu->sau_sregion) {
+            qemu_log_mask(LOG_GUEST_ERROR, "SAU region out of range %"
+                          PRIu32 "/%" PRIu32 "\n",
+                          value, cpu->sau_sregion);
+        } else {
+            cpu->env.sau.rnr = value;
+        }
+        break;
+    case 0xddc: /* SAU_RBAR */
+    {
+        int region = cpu->env.sau.rnr;
+
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return;
+        }
+        if (region >= cpu->sau_sregion) {
+            return;
+        }
+        cpu->env.sau.rbar[region] = value & ~0x1f;
+        tlb_flush(CPU(cpu));
+        break;
+    }
+    case 0xde0: /* SAU_RLAR */
+    {
+        int region = cpu->env.sau.rnr;
+
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return;
+        }
+        if (region >= cpu->sau_sregion) {
+            return;
+        }
+        cpu->env.sau.rlar[region] = value & ~0x1c;
+        tlb_flush(CPU(cpu));
+        break;
+    }
     case 0xde4: /* SFSR */
         if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
             goto bad_offset;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
             env->pmsav8.mair1[M_REG_S] = 0;
         }

+        if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            if (cpu->sau_sregion > 0) {
+                memset(env->sau.rbar, 0, sizeof(*env->sau.rbar) * cpu->sau_sregion);
+                memset(env->sau.rlar, 0, sizeof(*env->sau.rlar) * cpu->sau_sregion);
+            }
+            env->sau.rnr = 0;
+            /* SAU_CTRL reset value is IMPDEF; we choose 0, which is what
+             * the Cortex-M33 does.
+             */
+            env->sau.ctrl = 0;
+        }
+
         set_flush_to_zero(1, &env->vfp.standard_fp_status);
         set_flush_inputs_to_zero(1, &env->vfp.standard_fp_status);
         set_default_nan_mode(1, &env->vfp.standard_fp_status);
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
         }
     }

+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        uint32_t nr = cpu->sau_sregion;
+
+        if (nr > 0xff) {
+            error_setg(errp, "v8M SAU #regions invalid %" PRIu32, nr);
+            return;
+        }
+
+        if (nr) {
+            env->sau.rbar = g_new0(uint32_t, nr);
+            env->sau.rlar = g_new0(uint32_t, nr);
+        }
+    }
+
     if (arm_feature(env, ARM_FEATURE_EL3)) {
         set_feature(env, ARM_FEATURE_VBAR);
     }
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
     cpu->midr = 0x410fc240; /* r0p0 */
     cpu->pmsav7_dregion = 8;
 }
+
 static void arm_v7m_class_init(ObjectClass *oc, void *data)
 {
     CPUClass *cc = CPU_CLASS(oc);
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static bool s_rnr_vmstate_validate(void *opaque, int version_id)
     return cpu->env.pmsav7.rnr[M_REG_S] < cpu->pmsav7_dregion;
 }

+static bool sau_rnr_vmstate_validate(void *opaque, int version_id)
+{
+    ARMCPU *cpu = opaque;
+
+    return cpu->env.sau.rnr < cpu->sau_sregion;
+}
+
 static bool m_security_needed(void *opaque)
 {
     ARMCPU *cpu = opaque;
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_security = {
         VMSTATE_UINT32(env.v7m.cfsr[M_REG_S], ARMCPU),
         VMSTATE_UINT32(env.v7m.sfsr, ARMCPU),
         VMSTATE_UINT32(env.v7m.sfar, ARMCPU),
+        VMSTATE_VARRAY_UINT32(env.sau.rbar, ARMCPU, sau_sregion, 0,
+                              vmstate_info_uint32, uint32_t),
+        VMSTATE_VARRAY_UINT32(env.sau.rlar, ARMCPU, sau_sregion, 0,
+                              vmstate_info_uint32, uint32_t),
+        VMSTATE_UINT32(env.sau.rnr, ARMCPU),
+        VMSTATE_VALIDATE("SAU_RNR is valid", sau_rnr_vmstate_validate),
+        VMSTATE_UINT32(env.sau.ctrl, ARMCPU),
         VMSTATE_END_OF_LIST()
     }
 };
--
2.7.4
Implement the security attribute lookups for memory accesses
in the get_phys_addr() functions, causing these to generate
various kinds of SecureFault for bad accesses.

The major subtlety in this code relates to handling of the
case when the security attributes the SAU assigns to the
address don't match the current security state of the CPU.

In the ARM ARM pseudocode for validating instruction
accesses, the security attributes of the address determine
whether the Secure or NonSecure MPU state is used. At face
value, handling this would require us to encode the relevant
bits of state into mmu_idx for both S and NS at once, which
would result in our needing 16 mmu indexes. Fortunately we
don't actually need to do this, because a mismatch between
address attributes and CPU state means either:
 * some kind of fault (usually a SecureFault, but in theory
   perhaps a UsageFault for unaligned access to Device memory)
 * execution of the SG instruction in NS state from a
   Secure & NonSecure-Callable code region

The purpose of SG is simply to flip the CPU into Secure
state, so we can handle it by emulating execution of that
instruction directly in arm_v7m_cpu_do_interrupt(). That
means we can treat all the mismatch cases as "throw an
exception" and we don't need to encode the state of the
other MPU bank into our mmu_idx values.

This commit doesn't include the actual emulation of SG;
it also doesn't include the implementation of the IDAU,
a per-board way to specify hard-coded memory attributes
for addresses, which override the CPU-internal SAU if they
specify a more secure setting than the SAU is programmed with.
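As a reading aid for the region-matching arithmetic in
v8m_security_lookup() below, this is the same base/limit decoding
as a stand-alone sketch (illustrative only, not the patch's code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Does 'address' hit an SAU region? RBAR bits [31:5] give the
     * base, RLAR bits [31:5] the limit, RLAR bit 0 the enable bit
     * and bit 1 the NSC flag; regions cover whole 32-byte granules.
     */
    static bool sau_region_hit(uint32_t rbar, uint32_t rlar, uint32_t address)
    {
        uint32_t base = rbar & ~0x1fu;    /* round base down to a granule */
        uint32_t limit = rlar | 0x1fu;    /* round limit up to granule end */

        return (rlar & 1) && base <= address && address <= limit;
    }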
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-15-git-send-email-peter.maydell@linaro.org
---
 target/arm/internals.h |  15 ++++
 target/arm/helper.c    | 182 ++++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 195 insertions(+), 2 deletions(-)
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_EXCRET, DCRS, 5, 1)
 FIELD(V7M_EXCRET, S, 6, 1)
 FIELD(V7M_EXCRET, RES1, 7, 25) /* including the must-be-1 prefix */

+/* We use a few fake FSR values for internal purposes in M profile.
+ * M profile cores don't have A/R format FSRs, but currently our
+ * get_phys_addr() code assumes A/R profile and reports failures via
+ * an A/R format FSR value. We then translate that into the proper
+ * M profile exception and FSR status bit in arm_v7m_cpu_do_interrupt().
+ * Mostly the FSR values we use for this are those defined for v7PMSA,
+ * since we share some of that codepath. A few kinds of fault are
+ * only for M profile and have no A/R equivalent, though, so we have
+ * to pick a value from the reserved range (which we never otherwise
+ * generate) to use for these.
+ * These values will never be visible to the guest.
+ */
+#define M_FAKE_FSR_NSC_EXEC 0xf /* NS executing in S&NSC memory */
+#define M_FAKE_FSR_SFAULT 0xe /* SecureFault INVTRAN, INVEP or AUVIOL */
+
 /*
  * For AArch64, map a given EL to an index in the banked_spsr array.
  * Note that this mapping and the AArch32 mapping defined in bank_number()
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
                                target_ulong *page_size_ptr, uint32_t *fsr,
                                ARMMMUFaultInfo *fi);

+/* Security attributes for an address, as returned by v8m_security_lookup. */
+typedef struct V8M_SAttributes {
+    bool ns;
+    bool nsc;
+    uint8_t sregion;
+    bool srvalid;
+    uint8_t iregion;
+    bool irvalid;
+} V8M_SAttributes;
+
 /* Definitions for the PMCCNTR and PMCR registers */
 #define PMCRD 0x8
 #define PMCRC 0x4
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
          * raises the fault, in the A profile short-descriptor format.
          */
         switch (env->exception.fsr & 0xf) {
+        case M_FAKE_FSR_NSC_EXEC:
+            /* Exception generated when we try to execute code at an address
+             * which is marked as Secure & Non-Secure Callable and the CPU
+             * is in the Non-Secure state. The only instruction which can
+             * be executed like this is SG (and that only if both halves of
+             * the SG instruction have the same security attributes.)
+             * Everything else must generate an INVEP SecureFault, so we
+             * emulate the SG instruction here.
+             * TODO: actually emulate SG.
+             */
+            env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+            qemu_log_mask(CPU_LOG_INT,
+                          "...really SecureFault with SFSR.INVEP\n");
+            break;
+        case M_FAKE_FSR_SFAULT:
+            /* Various flavours of SecureFault for attempts to execute or
+             * access data in the wrong security state.
+             */
+            switch (cs->exception_index) {
+            case EXCP_PREFETCH_ABORT:
+                if (env->v7m.secure) {
+                    env->v7m.sfsr |= R_V7M_SFSR_INVTRAN_MASK;
+                    qemu_log_mask(CPU_LOG_INT,
+                                  "...really SecureFault with SFSR.INVTRAN\n");
+                } else {
+                    env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
+                    qemu_log_mask(CPU_LOG_INT,
+                                  "...really SecureFault with SFSR.INVEP\n");
+                }
+                break;
+            case EXCP_DATA_ABORT:
+                /* This must be an NS access to S memory */
+                env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
+                qemu_log_mask(CPU_LOG_INT,
+                              "...really SecureFault with SFSR.AUVIOL\n");
+                break;
+            }
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+            break;
         case 0x8: /* External Abort */
             switch (cs->exception_index) {
             case EXCP_PREFETCH_ABORT:
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
     return !(*prot & (1 << access_type));
 }

+static bool v8m_is_sau_exempt(CPUARMState *env,
+                              uint32_t address, MMUAccessType access_type)
+{
+    /* The architecture specifies that certain address ranges are
+     * exempt from v8M SAU/IDAU checks.
+     */
+    return
+        (access_type == MMU_INST_FETCH && m_is_system_region(env, address)) ||
+        (address >= 0xe0000000 && address <= 0xe0002fff) ||
+        (address >= 0xe000e000 && address <= 0xe000efff) ||
+        (address >= 0xe002e000 && address <= 0xe002efff) ||
+        (address >= 0xe0040000 && address <= 0xe0041fff) ||
+        (address >= 0xe00ff000 && address <= 0xe00fffff);
+}
+
+static void v8m_security_lookup(CPUARMState *env, uint32_t address,
+                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                                V8M_SAttributes *sattrs)
+{
+    /* Look up the security attributes for this address. Compare the
+     * pseudocode SecurityCheck() function.
+     * We assume the caller has zero-initialized *sattrs.
+     */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+    int r;
+
+    /* TODO: implement IDAU */
+
+    if (access_type == MMU_INST_FETCH && extract32(address, 28, 4) == 0xf) {
+        /* 0xf0000000..0xffffffff is always S for insn fetches */
+        return;
+    }
+
+    if (v8m_is_sau_exempt(env, address, access_type)) {
+        sattrs->ns = !regime_is_secure(env, mmu_idx);
+        return;
+    }
+
+    switch (env->sau.ctrl & 3) {
+    case 0: /* SAU.ENABLE == 0, SAU.ALLNS == 0 */
+        break;
+    case 2: /* SAU.ENABLE == 0, SAU.ALLNS == 1 */
+        sattrs->ns = true;
+        break;
+    default: /* SAU.ENABLE == 1 */
+        for (r = 0; r < cpu->sau_sregion; r++) {
+            if (env->sau.rlar[r] & 1) {
+                uint32_t base = env->sau.rbar[r] & ~0x1f;
+                uint32_t limit = env->sau.rlar[r] | 0x1f;
+
+                if (base <= address && limit >= address) {
+                    if (sattrs->srvalid) {
+                        /* If we hit in more than one region then we must report
+                         * as Secure, not NS-Callable, with no valid region
+                         * number info.
+                         */
+                        sattrs->ns = false;
+                        sattrs->nsc = false;
+                        sattrs->sregion = 0;
+                        sattrs->srvalid = false;
+                        break;
+                    } else {
+                        if (env->sau.rlar[r] & 2) {
+                            sattrs->nsc = true;
+                        } else {
+                            sattrs->ns = true;
+                        }
+                        sattrs->srvalid = true;
+                        sattrs->sregion = r;
+                    }
+                }
+            }
+        }
+
+        /* TODO when we support the IDAU then it may override the result here */
+        break;
+    }
+}
+
 static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
                                  MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                                 hwaddr *phys_ptr, int *prot, uint32_t *fsr)
+                                 hwaddr *phys_ptr, MemTxAttrs *txattrs,
+                                 int *prot, uint32_t *fsr)
 {
     ARMCPU *cpu = arm_env_get_cpu(env);
     bool is_user = regime_is_user(env, mmu_idx);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
     int n;
     int matchregion = -1;
     bool hit = false;
+    V8M_SAttributes sattrs = {};

     *phys_ptr = address;
     *prot = 0;

+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs);
+        if (access_type == MMU_INST_FETCH) {
+            /* Instruction fetches always use the MMU bank and the
+             * transaction attribute determined by the fetch address,
+             * regardless of CPU state. This is painful for QEMU
+             * to handle, because it would mean we need to encode
+             * into the mmu_idx not just the (user, negpri) information
+             * for the current security state but also that for the
+             * other security state, which would balloon the number
+             * of mmu_idx values needed alarmingly.
+             * Fortunately we can avoid this because it's not actually
+             * possible to arbitrarily execute code from memory with
+             * the wrong security attribute: it will always generate
+             * an exception of some kind or another, apart from the
+             * special case of an NS CPU executing an SG instruction
+             * in S&NSC memory. So we always just fail the translation
+             * here and sort things out in the exception handler
+             * (including possibly emulating an SG instruction).
+             */
+            if (sattrs.ns != !secure) {
+                *fsr = sattrs.nsc ? M_FAKE_FSR_NSC_EXEC : M_FAKE_FSR_SFAULT;
+                return true;
+            }
+        } else {
+            /* For data accesses we always use the MMU bank indicated
+             * by the current CPU state, but the security attributes
+             * might downgrade a secure access to nonsecure.
+             */
+            if (sattrs.ns) {
+                txattrs->secure = false;
+            } else if (!secure) {
+                /* NS access to S memory must fault.
+                 * Architecturally we should first check whether the
+                 * MPU information for this address indicates that we
+                 * are doing an unaligned access to Device memory, which
+                 * should generate a UsageFault instead. QEMU does not
+                 * currently check for that kind of unaligned access though.
+                 * If we added it we would need to do so as a special case
+                 * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
+                 */
+                *fsr = M_FAKE_FSR_SFAULT;
+                return true;
+            }
+        }
+    }
+
     /* Unlike the ARM ARM pseudocode, we don't need to check whether this
      * was an exception vector read from the vector table (which is always
      * done using the default system address map), because those accesses
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
         if (arm_feature(env, ARM_FEATURE_V8)) {
             /* PMSAv8 */
             ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx,
-                                       phys_ptr, prot, fsr);
+                                       phys_ptr, attrs, prot, fsr);
         } else if (arm_feature(env, ARM_FEATURE_V7)) {
             /* PMSAv7 */
             ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
--
2.7.4
In cpu_mmu_index() we try to do this:

    if (env->v7m.secure) {
        mmu_idx += ARMMMUIdx_MSUser;
    }

but it gives the wrong answer: ARMMMUIdx_MSUser includes the
0x40 ARM_MMU_IDX_M field, and so does the mmu_idx we're adding
it to, so we end up with 0x8n rather than 0x4n. The error is
currently nullified by the call to arm_to_core_mmu_idx(), which
masks out the high part; but we're about to factor out the code
that calculates the ARMMMUIdx values so it can be used without
passing the result through arm_to_core_mmu_idx(), so fix this
bug first.
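To make the arithmetic concrete (assuming the index encodings this
series uses, where ARM_MMU_IDX_M is 0x40, ARMMMUIdx_MUser is 0 | 0x40
and ARMMMUIdx_MSUser is 3 | 0x40):

    mmu_idx = ARMMMUIdx_MUser;    /* 0x40 */
    mmu_idx += ARMMMUIdx_MSUser;  /* 0x40 + 0x43 = 0x83, not the intended 0x43 */

Since arm_to_core_mmu_idx() masks the result down to the low
core-index bits, 0x83 still yields core index 3, which is why the
bug has so far been harmless.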
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-16-git-send-email-peter.maydell@linaro.org
---
 target/arm/cpu.h | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline int cpu_mmu_index(CPUARMState *env, bool ifetch)
     int el = arm_current_el(env);

     if (arm_feature(env, ARM_FEATURE_M)) {
-        ARMMMUIdx mmu_idx = el == 0 ? ARMMMUIdx_MUser : ARMMMUIdx_MPriv;
+        ARMMMUIdx mmu_idx;

-        if (armv7m_nvic_neg_prio_requested(env->nvic, env->v7m.secure)) {
-            mmu_idx = ARMMMUIdx_MNegPri;
+        if (el == 0) {
+            mmu_idx = env->v7m.secure ? ARMMMUIdx_MSUser : ARMMMUIdx_MUser;
+        } else {
+            mmu_idx = env->v7m.secure ? ARMMMUIdx_MSPriv : ARMMMUIdx_MPriv;
         }

-        if (env->v7m.secure) {
-            mmu_idx += ARMMMUIdx_MSUser;
+        if (armv7m_nvic_neg_prio_requested(env->nvic, env->v7m.secure)) {
+            mmu_idx = env->v7m.secure ? ARMMMUIdx_MSNegPri : ARMMMUIdx_MNegPri;
         }

         return arm_to_core_mmu_idx(mmu_idx);
--
2.7.4
For the SG instruction and secure function return we are going
to want to do memory accesses using the MMU index of the CPU
in secure state, even though the CPU is currently in non-secure
state. Write arm_v7m_mmu_idx_for_secstate() to do this job,
and use it in cpu_mmu_index().
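For instance, a caller that needs the core index for a Secure-state
access while the CPU is in Non-secure state could (hypothetically) do:

    /* Illustrative only; not part of this patch */
    int core_idx = arm_to_core_mmu_idx(arm_v7m_mmu_idx_for_secstate(env, true));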
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-17-git-send-email-peter.maydell@linaro.org
---
 target/arm/cpu.h | 32 +++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
     }
 }

+/* Return the MMU index for a v7M CPU in the specified security state */
+static inline ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env,
+                                                     bool secstate)
+{
+    int el = arm_current_el(env);
+    ARMMMUIdx mmu_idx;
+
+    if (el == 0) {
+        mmu_idx = secstate ? ARMMMUIdx_MSUser : ARMMMUIdx_MUser;
+    } else {
+        mmu_idx = secstate ? ARMMMUIdx_MSPriv : ARMMMUIdx_MPriv;
+    }
+
+    if (armv7m_nvic_neg_prio_requested(env->nvic, secstate)) {
+        mmu_idx = secstate ? ARMMMUIdx_MSNegPri : ARMMMUIdx_MNegPri;
+    }
+
+    return mmu_idx;
+}
+
 /* Determine the current mmu_idx to use for normal loads/stores */
 static inline int cpu_mmu_index(CPUARMState *env, bool ifetch)
 {
     int el = arm_current_el(env);

     if (arm_feature(env, ARM_FEATURE_M)) {
-        ARMMMUIdx mmu_idx;
-
-        if (el == 0) {
-            mmu_idx = env->v7m.secure ? ARMMMUIdx_MSUser : ARMMMUIdx_MUser;
-        } else {
-            mmu_idx = env->v7m.secure ? ARMMMUIdx_MSPriv : ARMMMUIdx_MPriv;
-        }
-
-        if (armv7m_nvic_neg_prio_requested(env->nvic, env->v7m.secure)) {
-            mmu_idx = env->v7m.secure ? ARMMMUIdx_MSNegPri : ARMMMUIdx_MNegPri;
-        }
+        ARMMMUIdx mmu_idx = arm_v7m_mmu_idx_for_secstate(env, env->v7m.secure);

         return arm_to_core_mmu_idx(mmu_idx);
     }
--
2.7.4
When we added support for the new SHCSR bits in v8M in commit
437d59c17e9, the code to support writing to the new
HARDFAULTPENDED bit was accidentally only added for non-secure
writes; the secure banked version of the bit should also be
writable.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-21-git-send-email-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
             s->sec_vectors[ARMV7M_EXCP_BUS].enabled = (value & (1 << 17)) != 0;
             s->sec_vectors[ARMV7M_EXCP_USAGE].enabled =
                 (value & (1 << 18)) != 0;
+            s->sec_vectors[ARMV7M_EXCP_HARD].pending = (value & (1 << 21)) != 0;
             /* SecureFault not banked, but RAZ/WI to NS */
             s->vectors[ARMV7M_EXCP_SECURE].active = (value & (1 << 4)) != 0;
             s->vectors[ARMV7M_EXCP_SECURE].enabled = (value & (1 << 19)) != 0;
--
2.7.4