Mostly my stuff with a few easy patches from others. I know I have
a few big series in my to-review queue, but I've been too jetlagged
to try to tackle those :-(

thanks
-- PMM

The following changes since commit a26a98dfb9d448d7234d931ae3720feddf6f0651:

  Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20171006' into staging (2017-10-06 13:19:03 +0100)

are available in the git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20171006

for you to fetch changes up to 04829ce334bece78d4fa1d0fdbc8bc27dae9b242:

  nvic: Add missing code for writing SHCSR.HARDFAULTPENDED bit (2017-10-06 16:46:49 +0100)

----------------------------------------------------------------
target-arm:
 * v8M: more preparatory work
 * nvic: reset properly rather than leaving the nvic in a weird state
 * xlnx-zynqmp: Mark the "xlnx, zynqmp" device with user_creatable = false
 * sd: fix out-of-bounds check for multi block reads
 * arm: Fix SMC reporting to EL2 when QEMU provides PSCI

----------------------------------------------------------------
Jan Kiszka (1):
      arm: Fix SMC reporting to EL2 when QEMU provides PSCI

Michael Olbrich (1):
      hw/sd: fix out-of-bounds check for multi block reads

Peter Maydell (17):
      nvic: Clear the vector arrays and prigroup on reset
      target/arm: Don't switch to target stack early in v7M exception return
      target/arm: Prepare for CONTROL.SPSEL being nonzero in Handler mode
      target/arm: Restore security state on exception return
      target/arm: Restore SPSEL to correct CONTROL register on exception return
      target/arm: Check for xPSR mismatch usage faults earlier for v8M
      target/arm: Warn about restoring to unaligned stack
      target/arm: Don't warn about exception return with PC low bit set for v8M
      target/arm: Add new-in-v8M SFSR and SFAR
      target/arm: Update excret sanity checks for v8M
      target/arm: Add support for restoring v8M additional state context
      target/arm: Add v8M support to exception entry code
      nvic: Implement Security Attribution Unit registers
      target/arm: Implement security attribute lookups for memory accesses
      target/arm: Fix calculation of secure mm_idx values
      target/arm: Factor out "get mmuidx for specified security state"
      nvic: Add missing code for writing SHCSR.HARDFAULTPENDED bit

Thomas Huth (1):
      hw/arm/xlnx-zynqmp: Mark the "xlnx, zynqmp" device with user_creatable = false

 target/arm/cpu.h       |  60 ++++-
 target/arm/internals.h |  15 ++
 hw/arm/xlnx-zynqmp.c   |   2 +
 hw/intc/armv7m_nvic.c  | 158 ++++++++++-
 hw/sd/sd.c             |  12 +-
 target/arm/cpu.c       |  27 ++
 target/arm/helper.c    | 691 +++++++++++++++++++++++++++++++++++++++++++------
 target/arm/machine.c   |  16 ++
 target/arm/op_helper.c |  54 +++++++++++++++++++----
 9 files changed, 898 insertions(+), 110 deletions(-)


Some Arm bugfixes for rc2...

thanks
-- PMM

The following changes since commit e6ebbd46b6e539f3613136111977721d212c2812:

  Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging (2018-11-19 14:31:48 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20181119

for you to fetch changes up to a00d7f2048c2a1a6a4487ac195c804c78adcf60e:

  MAINTAINERS: list myself as maintainer for various Arm boards (2018-11-19 15:55:11 +0000)

----------------------------------------------------------------
target-arm queue:
 * various MAINTAINERS file updates
 * hw/block/onenand: use qemu_log_mask() for reporting
 * hw/block/onenand: Fix off-by-one error allowing out-of-bounds read
   on the n800 and n810 machine models
 * target/arm: fix smc incorrectly trapping to EL3 when secure is off
 * hw/arm/stm32f205: Fix the UART and Timer region size
 * target/arm: read ID registers for KVM guests so they can be
   used to gate "is feature X present" checks

----------------------------------------------------------------
Luc Michel (1):
      target/arm: fix smc incorrectly trapping to EL3 when secure is off

Peter Maydell (3):
      hw/block/onenand: Fix off-by-one error allowing out-of-bounds read
      hw/block/onenand: use qemu_log_mask() for reporting
      MAINTAINERS: list myself as maintainer for various Arm boards

Richard Henderson (4):
      target/arm: Install ARMISARegisters from kvm host
      target/arm: Fill in ARMISARegisters for kvm64
      target/arm: Introduce read_sys_reg32 for kvm32
      target/arm: Fill in ARMISARegisters for kvm32

Seth Kintigh (1):
      hw/arm/stm32f205: Fix the UART and Timer region size

Thomas Huth (1):
      MAINTAINERS: Add entries for missing ARM boards

 target/arm/kvm_arm.h       |   1 +
 hw/block/onenand.c         |  24 +++++-----
 hw/char/stm32f2xx_usart.c  |   2 +-
 hw/timer/stm32f2xx_timer.c |   2 +-
 target/arm/kvm.c           |   1 +
 target/arm/kvm32.c         |  77 ++++++++++++++++++++------------
 target/arm/kvm64.c         |  90 +++++++++++++++++++++++++++++++++++++-
 target/arm/op_helper.c     |  27 +-
 MAINTAINERS                | 106 +++++++++++++++++++++++++++++++++++++++------
 9 files changed, 293 insertions(+), 64 deletions(-)
diff view generated by jsdifflib
From: Michael Olbrich <m.olbrich@pengutronix.de>

The current code checks if the next block exceeds the size of the card.
This generates an error while reading the last block of the card.
Do the out-of-bounds check when starting to read a new block to fix this.

This issue became visible with increased error checking in Linux 4.13.

Cc: qemu-stable@nongnu.org
Signed-off-by: Michael Olbrich <m.olbrich@pengutronix.de>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 20170916091611.10241-1-m.olbrich@pengutronix.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/sd/sd.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/sd/sd.c b/hw/sd/sd.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sd/sd.c
+++ b/hw/sd/sd.c
@@ -XXX,XX +XXX,XX @@ uint8_t sd_read_data(SDState *sd)
         break;

     case 18:    /* CMD18: READ_MULTIPLE_BLOCK */
-        if (sd->data_offset == 0)
+        if (sd->data_offset == 0) {
+            if (sd->data_start + io_len > sd->size) {
+                sd->card_status |= ADDRESS_ERROR;
+                return 0x00;
+            }
             BLK_READ_BLOCK(sd->data_start, io_len);
+        }
         ret = sd->data[sd->data_offset ++];

         if (sd->data_offset >= io_len) {
@@ -XXX,XX +XXX,XX @@ uint8_t sd_read_data(SDState *sd)
                 break;
             }
         }
-
-        if (sd->data_start + io_len > sd->size) {
-            sd->card_status |= ADDRESS_ERROR;
-            break;
-        }
         }
         break;

--
2.7.4


From: Richard Henderson <richard.henderson@linaro.org>

The ID registers are replacing (some of) the feature bits.
We need (some of) these values to determine the set of data
to be handled during migration.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181113180154.17903-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/kvm_arm.h | 1 +
 target/arm/kvm.c     | 1 +
 2 files changed, 2 insertions(+)

diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm_arm.h
+++ b/target/arm/kvm_arm.h
@@ -XXX,XX +XXX,XX @@ void kvm_arm_destroy_scratch_host_vcpu(int *fdarray);
 * by asking the host kernel)
 */
typedef struct ARMHostCPUFeatures {
+    ARMISARegisters isar;
    uint64_t features;
    uint32_t target;
    const char *dtb_compatible;
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu)
     cpu->kvm_target = arm_host_cpu_features.target;
     cpu->dtb_compatible = arm_host_cpu_features.dtb_compatible;
+    cpu->isar = arm_host_cpu_features.isar;
     env->features = arm_host_cpu_features.features;
 }

--
2.19.1
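As a standalone illustration of the reordered bounds check in the sd.c fix above: the out-of-bounds test must run when a new block is about to be fetched (data_offset == 0), not after every byte, so that reading the final block of the card succeeds. This is a hypothetical simplified sketch, not QEMU's actual SDState code; the FakeSD type and multi_block_read_ok name are made up for illustration.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified model of a multi-block-read card state. */
typedef struct {
    uint64_t size;        /* card size in bytes */
    uint64_t data_start;  /* start address of the current block */
    uint32_t data_offset; /* byte offset within the current block */
    int      address_error;
} FakeSD;

/* Returns 1 if the read may proceed, 0 on an address error.
 * The check runs only when a fresh block is about to be fetched,
 * mirroring the fixed ordering in the patch. */
static int multi_block_read_ok(FakeSD *sd, uint32_t io_len)
{
    if (sd->data_offset == 0) {
        if (sd->data_start + io_len > sd->size) {
            sd->address_error = 1;
            return 0;
        }
    }
    return 1;
}
```

With this ordering, a 1024-byte card with 512-byte blocks can read the block starting at offset 512 (the last block) without flagging an error; the old post-read check rejected it.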
Implement the security attribute lookups for memory accesses
in the get_phys_addr() functions, causing these to generate
various kinds of SecureFault for bad accesses.

The major subtlety in this code relates to handling of the
case when the security attributes the SAU assigns to the
address don't match the current security state of the CPU.

In the ARM ARM pseudocode for validating instruction
accesses, the security attributes of the address determine
whether the Secure or NonSecure MPU state is used. At face
value, handling this would require us to encode the relevant
bits of state into mmu_idx for both S and NS at once, which
would result in our needing 16 mmu indexes. Fortunately we
don't actually need to do this because a mismatch between
address attributes and CPU state means either:
 * some kind of fault (usually a SecureFault, but in theory
   perhaps a UserFault for unaligned access to Device memory)
 * execution of the SG instruction in NS state from a
   Secure & NonSecure code region

The purpose of SG is simply to flip the CPU into Secure
state, so we can handle it by emulating execution of that
instruction directly in arm_v7m_cpu_do_interrupt(), which
means we can treat all the mismatch cases as "throw an
exception" and we don't need to encode the state of the
other MPU bank into our mmu_idx values.

This commit doesn't include the actual emulation of SG;
it also doesn't include implementation of the IDAU, which
is a per-board way to specify hard-coded memory attributes
for addresses, which override the CPU-internal SAU if they
specify a more secure setting than the SAU is programmed to.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-15-git-send-email-peter.maydell@linaro.org
---
 target/arm/internals.h |  15 ++
 target/arm/helper.c    | 182 ++++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 195 insertions(+), 2 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_EXCRET, DCRS, 5, 1)
 FIELD(V7M_EXCRET, S, 6, 1)
 FIELD(V7M_EXCRET, RES1, 7, 25) /* including the must-be-1 prefix */

+/* We use a few fake FSR values for internal purposes in M profile.
+ * M profile cores don't have A/R format FSRs, but currently our
+ * get_phys_addr() code assumes A/R profile and reports failures via
+ * an A/R format FSR value. We then translate that into the proper
+ * M profile exception and FSR status bit in arm_v7m_cpu_do_interrupt().
+ * Mostly the FSR values we use for this are those defined for v7PMSA,
+ * since we share some of that codepath. A few kinds of fault are
+ * only for M profile and have no A/R equivalent, though, so we have
+ * to pick a value from the reserved range (which we never otherwise
+ * generate) to use for these.
+ * These values will never be visible to the guest.
+ */
+#define M_FAKE_FSR_NSC_EXEC 0xf /* NS executing in S&NSC memory */
+#define M_FAKE_FSR_SFAULT 0xe /* SecureFault INVTRAN, INVEP or AUVIOL */
+
 /*
  * For AArch64, map a given EL to an index in the banked_spsr array.
  * Note that this mapping and the AArch32 mapping defined in bank_number()
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
                                target_ulong *page_size_ptr, uint32_t *fsr,
                                ARMMMUFaultInfo *fi);

+/* Security attributes for an address, as returned by v8m_security_lookup. */
+typedef struct V8M_SAttributes {
+    bool ns;
+    bool nsc;
+    uint8_t sregion;
+    bool srvalid;
+    uint8_t iregion;
+    bool irvalid;
+} V8M_SAttributes;
+
 /* Definitions for the PMCCNTR and PMCR registers */
 #define PMCRD   0x8
 #define PMCRC   0x4
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
      * raises the fault, in the A profile short-descriptor format.
      */
     switch (env->exception.fsr & 0xf) {
+    case M_FAKE_FSR_NSC_EXEC:
+        /* Exception generated when we try to execute code at an address
+         * which is marked as Secure & Non-Secure Callable and the CPU
+         * is in the Non-Secure state. The only instruction which can
+         * be executed like this is SG (and that only if both halves of
+         * the SG instruction have the same security attributes.)
+         * Everything else must generate an INVEP SecureFault, so we
+         * emulate the SG instruction here.
+         * TODO: actually emulate SG.
+         */
+        env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+        qemu_log_mask(CPU_LOG_INT,
+                      "...really SecureFault with SFSR.INVEP\n");
+        break;
+    case M_FAKE_FSR_SFAULT:
+        /* Various flavours of SecureFault for attempts to execute or
+         * access data in the wrong security state.
+         */
+        switch (cs->exception_index) {
+        case EXCP_PREFETCH_ABORT:
+            if (env->v7m.secure) {
+                env->v7m.sfsr |= R_V7M_SFSR_INVTRAN_MASK;
+                qemu_log_mask(CPU_LOG_INT,
+                              "...really SecureFault with SFSR.INVTRAN\n");
+            } else {
+                env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
+                qemu_log_mask(CPU_LOG_INT,
+                              "...really SecureFault with SFSR.INVEP\n");
+            }
+            break;
+        case EXCP_DATA_ABORT:
+            /* This must be an NS access to S memory */
+            env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
+            qemu_log_mask(CPU_LOG_INT,
+                          "...really SecureFault with SFSR.AUVIOL\n");
+            break;
+        }
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+        break;
     case 0x8: /* External Abort */
         switch (cs->exception_index) {
         case EXCP_PREFETCH_ABORT:
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
     return !(*prot & (1 << access_type));
 }

+static bool v8m_is_sau_exempt(CPUARMState *env,
+                              uint32_t address, MMUAccessType access_type)
+{
+    /* The architecture specifies that certain address ranges are
+     * exempt from v8M SAU/IDAU checks.
+     */
+    return
+        (access_type == MMU_INST_FETCH && m_is_system_region(env, address)) ||
+        (address >= 0xe0000000 && address <= 0xe0002fff) ||
+        (address >= 0xe000e000 && address <= 0xe000efff) ||
+        (address >= 0xe002e000 && address <= 0xe002efff) ||
+        (address >= 0xe0040000 && address <= 0xe0041fff) ||
+        (address >= 0xe00ff000 && address <= 0xe00fffff);
+}
+
+static void v8m_security_lookup(CPUARMState *env, uint32_t address,
+                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                                V8M_SAttributes *sattrs)
+{
+    /* Look up the security attributes for this address. Compare the
+     * pseudocode SecurityCheck() function.
+     * We assume the caller has zero-initialized *sattrs.
+     */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+    int r;
+
+    /* TODO: implement IDAU */
+
+    if (access_type == MMU_INST_FETCH && extract32(address, 28, 4) == 0xf) {
+        /* 0xf0000000..0xffffffff is always S for insn fetches */
+        return;
+    }
+
+    if (v8m_is_sau_exempt(env, address, access_type)) {
+        sattrs->ns = !regime_is_secure(env, mmu_idx);
+        return;
+    }
+
+    switch (env->sau.ctrl & 3) {
+    case 0: /* SAU.ENABLE == 0, SAU.ALLNS == 0 */
+        break;
+    case 2: /* SAU.ENABLE == 0, SAU.ALLNS == 1 */
+        sattrs->ns = true;
+        break;
+    default: /* SAU.ENABLE == 1 */
+        for (r = 0; r < cpu->sau_sregion; r++) {
+            if (env->sau.rlar[r] & 1) {
+                uint32_t base = env->sau.rbar[r] & ~0x1f;
+                uint32_t limit = env->sau.rlar[r] | 0x1f;
+
+                if (base <= address && limit >= address) {
+                    if (sattrs->srvalid) {
+                        /* If we hit in more than one region then we must report
+                         * as Secure, not NS-Callable, with no valid region
+                         * number info.
+                         */
+                        sattrs->ns = false;
+                        sattrs->nsc = false;
+                        sattrs->sregion = 0;
+                        sattrs->srvalid = false;
+                        break;
+                    } else {
+                        if (env->sau.rlar[r] & 2) {
+                            sattrs->nsc = true;
+                        } else {
+                            sattrs->ns = true;
+                        }
+                        sattrs->srvalid = true;
+                        sattrs->sregion = r;
+                    }
+                }
+            }
+        }
+
+        /* TODO when we support the IDAU then it may override the result here */
+        break;
+    }
+}
+
 static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
                                  MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                                 hwaddr *phys_ptr, int *prot, uint32_t *fsr)
+                                 hwaddr *phys_ptr, MemTxAttrs *txattrs,
+                                 int *prot, uint32_t *fsr)
 {
     ARMCPU *cpu = arm_env_get_cpu(env);
     bool is_user = regime_is_user(env, mmu_idx);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
     int n;
     int matchregion = -1;
     bool hit = false;
+    V8M_SAttributes sattrs = {};

     *phys_ptr = address;
     *prot = 0;

+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs);
+        if (access_type == MMU_INST_FETCH) {
+            /* Instruction fetches always use the MMU bank and the
+             * transaction attribute determined by the fetch address,
+             * regardless of CPU state. This is painful for QEMU
+             * to handle, because it would mean we need to encode
+             * into the mmu_idx not just the (user, negpri) information
+             * for the current security state but also that for the
+             * other security state, which would balloon the number
+             * of mmu_idx values needed alarmingly.
+             * Fortunately we can avoid this because it's not actually
+             * possible to arbitrarily execute code from memory with
+             * the wrong security attribute: it will always generate
+             * an exception of some kind or another, apart from the
+             * special case of an NS CPU executing an SG instruction
+             * in S&NSC memory. So we always just fail the translation
+             * here and sort things out in the exception handler
+             * (including possibly emulating an SG instruction).
+             */
+            if (sattrs.ns != !secure) {
+                *fsr = sattrs.nsc ? M_FAKE_FSR_NSC_EXEC : M_FAKE_FSR_SFAULT;
+                return true;
+            }
+        } else {
+            /* For data accesses we always use the MMU bank indicated
+             * by the current CPU state, but the security attributes
+             * might downgrade a secure access to nonsecure.
+             */
+            if (sattrs.ns) {
+                txattrs->secure = false;
+            } else if (!secure) {
+                /* NS access to S memory must fault.
+                 * Architecturally we should first check whether the
+                 * MPU information for this address indicates that we
+                 * are doing an unaligned access to Device memory, which
+                 * should generate a UsageFault instead. QEMU does not
+                 * currently check for that kind of unaligned access though.
+                 * If we added it we would need to do so as a special case
+                 * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
+                 */
+                *fsr = M_FAKE_FSR_SFAULT;
+                return true;
+            }
+        }
+    }
+
     /* Unlike the ARM ARM pseudocode, we don't need to check whether this
      * was an exception vector read from the vector table (which is always
      * done using the default system address map), because those accesses
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
         if (arm_feature(env, ARM_FEATURE_V8)) {
             /* PMSAv8 */
             ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx,
-                                       phys_ptr, prot, fsr);
+                                       phys_ptr, attrs, prot, fsr);
         } else if (arm_feature(env, ARM_FEATURE_V7)) {
             /* PMSAv7 */
             ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
--
2.7.4


From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181113180154.17903-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/kvm64.c | 90 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 88 insertions(+), 2 deletions(-)

diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ static inline void unset_feature(uint64_t *features, int feature)
     *features &= ~(1ULL << feature);
 }

+static int read_sys_reg32(int fd, uint32_t *pret, uint64_t id)
+{
+    uint64_t ret;
+    struct kvm_one_reg idreg = { .id = id, .addr = (uintptr_t)&ret };
+    int err;
+
+    assert((id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64);
+    err = ioctl(fd, KVM_GET_ONE_REG, &idreg);
+    if (err < 0) {
+        return -1;
+    }
+    *pret = ret;
+    return 0;
+}
+
+static int read_sys_reg64(int fd, uint64_t *pret, uint64_t id)
+{
+    struct kvm_one_reg idreg = { .id = id, .addr = (uintptr_t)pret };
+
+    assert((id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64);
+    return ioctl(fd, KVM_GET_ONE_REG, &idreg);
+}
+
 bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
 {
     /* Identify the feature bits corresponding to the host CPU, and
      * fill out the ARMHostCPUClass fields accordingly. To do this
      * we have to create a scratch VM, create a single CPU inside it,
      * and then query that CPU for the relevant ID registers.
-     * For AArch64 we currently don't care about ID registers at
-     * all; we just want to know the CPU type.
      */
     int fdarray[3];
     uint64_t features = 0;
+    int err;
+
     /* Old kernels may not know about the PREFERRED_TARGET ioctl: however
      * we know these will only support creating one kind of guest CPU,
      * which is its preferred CPU type. Fortunately these old kernels
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
     ahcf->target = init.target;
     ahcf->dtb_compatible = "arm,arm-v8";

+    err = read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64pfr0,
+                         ARM64_SYS_REG(3, 0, 0, 4, 0));
+    if (unlikely(err < 0)) {
+        /*
+         * Before v4.15, the kernel only exposed a limited number of system
+         * registers, not including any of the interesting AArch64 ID regs.
+         * For the most part we could leave these fields as zero with minimal
+         * effect, since this does not affect the values seen by the guest.
+         *
+         * However, it could cause problems down the line for QEMU,
+         * so provide a minimal v8.0 default.
+         *
+         * ??? Could read MIDR and use knowledge from cpu64.c.
+         * ??? Could map a page of memory into our temp guest and
+         *     run the tiniest of hand-crafted kernels to extract
+         *     the values seen by the guest.
+         * ??? Either of these sounds like too much effort just
+         *     to work around running a modern host kernel.
+         */
+        ahcf->isar.id_aa64pfr0 = 0x00000011; /* EL1&0, AArch64 only */
+        err = 0;
+    } else {
+        err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64pfr1,
+                              ARM64_SYS_REG(3, 0, 0, 4, 1));
+        err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64isar0,
+                              ARM64_SYS_REG(3, 0, 0, 6, 0));
+        err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64isar1,
+                              ARM64_SYS_REG(3, 0, 0, 6, 1));
+
+        /*
+         * Note that if AArch32 support is not present in the host,
+         * the AArch32 sysregs are present to be read, but will
+         * return UNKNOWN values. This is neither better nor worse
+         * than skipping the reads and leaving 0, as we must avoid
+         * considering the values in every case.
+         */
+        err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar0,
+                              ARM64_SYS_REG(3, 0, 0, 2, 0));
+        err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar1,
+                              ARM64_SYS_REG(3, 0, 0, 2, 1));
+        err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar2,
+                              ARM64_SYS_REG(3, 0, 0, 2, 2));
+        err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar3,
+                              ARM64_SYS_REG(3, 0, 0, 2, 3));
+        err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar4,
+                              ARM64_SYS_REG(3, 0, 0, 2, 4));
+        err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar5,
+                              ARM64_SYS_REG(3, 0, 0, 2, 5));
+        err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar6,
+                              ARM64_SYS_REG(3, 0, 0, 2, 7));
+
+        err |= read_sys_reg32(fdarray[2], &ahcf->isar.mvfr0,
+                              ARM64_SYS_REG(3, 0, 0, 3, 0));
+        err |= read_sys_reg32(fdarray[2], &ahcf->isar.mvfr1,
+                              ARM64_SYS_REG(3, 0, 0, 3, 1));
+        err |= read_sys_reg32(fdarray[2], &ahcf->isar.mvfr2,
+                              ARM64_SYS_REG(3, 0, 0, 3, 2));
+    }
+
     kvm_arm_destroy_scratch_host_vcpu(fdarray);

+    if (err < 0) {
+        return false;
+    }
+
     /* We can assume any KVM supporting CPU is at least a v8
      * with VFPv4+Neon; this in turn implies most of the other
      * feature bits.
--
2.19.1
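The SAU region matching loop in the v8m_security_lookup() patch above derives each region's span from its RBAR/RLAR registers: the base is 32-byte aligned (low five RBAR bits masked off) and the limit has its low five bits forced to one, so every region covers a whole number of 32-byte granules. A hypothetical standalone sketch of just that matching step (FakeSAU and sau_match are made-up illustration names, not QEMU code):

```c
#include <assert.h>
#include <stdint.h>

#define SAU_NREGIONS 4

/* Hypothetical cut-down SAU: RLAR bit 0 enables the region and
 * bit 1 would mark it Non-Secure Callable (ignored here). */
typedef struct {
    uint32_t rbar[SAU_NREGIONS];
    uint32_t rlar[SAU_NREGIONS];
} FakeSAU;

/* Returns the first enabled region containing the address, or -1. */
static int sau_match(const FakeSAU *sau, uint32_t address)
{
    for (int r = 0; r < SAU_NREGIONS; r++) {
        if (sau->rlar[r] & 1) {
            uint32_t base = sau->rbar[r] & ~0x1fu;  /* 32-byte aligned */
            uint32_t limit = sau->rlar[r] | 0x1fu;  /* low bits forced to 1 */
            if (base <= address && address <= limit) {
                return r;
            }
        }
    }
    return -1;
}
```

Note the real lookup additionally handles overlapping hits (reporting Secure with no valid region number when more than one region matches), which this sketch omits.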
Add support for v8M and in particular the security extension
to the exception entry code. This requires changes to:
 * calculation of the exception-return magic LR value
 * push the callee-saves registers in certain cases
 * clear registers when taking non-secure exceptions to avoid
   leaking information from the interrupted secure code
 * switch to the correct security state on entry
 * use the vector table for the security state we're targeting

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-13-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 165 +++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 145 insertions(+), 20 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode,
     }
 }

-static uint32_t arm_v7m_load_vector(ARMCPU *cpu)
+static uint32_t arm_v7m_load_vector(ARMCPU *cpu, bool targets_secure)
 {
     CPUState *cs = CPU(cpu);
     CPUARMState *env = &cpu->env;
     MemTxResult result;
-    hwaddr vec = env->v7m.vecbase[env->v7m.secure] + env->v7m.exception * 4;
+    hwaddr vec = env->v7m.vecbase[targets_secure] + env->v7m.exception * 4;
     uint32_t addr;

     addr = address_space_ldl(cs->as, vec,
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_v7m_load_vector(ARMCPU *cpu)
          * Since we don't model Lockup, we just report this guest error
          * via cpu_abort().
          */
-        cpu_abort(cs, "Failed to read from exception vector table "
-                  "entry %08x\n", (unsigned)vec);
+        cpu_abort(cs, "Failed to read from %s exception vector table "
+                  "entry %08x\n", targets_secure ? "secure" : "nonsecure",
+                  (unsigned)vec);
     }
     return addr;
 }

-static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr)
+static void v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain)
+{
+    /* For v8M, push the callee-saves register part of the stack frame.
+     * Compare the v8M pseudocode PushCalleeStack().
+     * In the tailchaining case this may not be the current stack.
+     */
+    CPUARMState *env = &cpu->env;
+    CPUState *cs = CPU(cpu);
+    uint32_t *frame_sp_p;
+    uint32_t frameptr;
+
+    if (dotailchain) {
+        frame_sp_p = get_v7m_sp_ptr(env, true,
+                                    lr & R_V7M_EXCRET_MODE_MASK,
+                                    lr & R_V7M_EXCRET_SPSEL_MASK);
+    } else {
+        frame_sp_p = &env->regs[13];
+    }
+
+    frameptr = *frame_sp_p - 0x28;
+
+    stl_phys(cs->as, frameptr, 0xfefa125b);
+    stl_phys(cs->as, frameptr + 0x8, env->regs[4]);
+    stl_phys(cs->as, frameptr + 0xc, env->regs[5]);
+    stl_phys(cs->as, frameptr + 0x10, env->regs[6]);
+    stl_phys(cs->as, frameptr + 0x14, env->regs[7]);
+    stl_phys(cs->as, frameptr + 0x18, env->regs[8]);
+    stl_phys(cs->as, frameptr + 0x1c, env->regs[9]);
+    stl_phys(cs->as, frameptr + 0x20, env->regs[10]);
+    stl_phys(cs->as, frameptr + 0x24, env->regs[11]);
+
+    *frame_sp_p = frameptr;
+}
+
+static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain)
 {
     /* Do the "take the exception" parts of exception entry,
      * but not the pushing of state to the stack. This is
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr)
      */
     CPUARMState *env = &cpu->env;
     uint32_t addr;
+    bool targets_secure;
+
+    targets_secure = armv7m_nvic_acknowledge_irq(env->nvic);

-    armv7m_nvic_acknowledge_irq(env->nvic);
+    if (arm_feature(env, ARM_FEATURE_V8)) {
+        if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
+            (lr & R_V7M_EXCRET_S_MASK)) {
+            /* The background code (the owner of the registers in the
+             * exception frame) is Secure. This means it may either already
+             * have or now needs to push callee-saves registers.
+             */
+            if (targets_secure) {
+                if (dotailchain && !(lr & R_V7M_EXCRET_ES_MASK)) {
+                    /* We took an exception from Secure to NonSecure
+                     * (which means the callee-saved registers got stacked)
+                     * and are now tailchaining to a Secure exception.
+                     * Clear DCRS so eventual return from this Secure
+                     * exception unstacks the callee-saved registers.
+                     */
+                    lr &= ~R_V7M_EXCRET_DCRS_MASK;
+                }
+            } else {
+                /* We're going to a non-secure exception; push the
+                 * callee-saves registers to the stack now, if they're
+                 * not already saved.
+                 */
+                if (lr & R_V7M_EXCRET_DCRS_MASK &&
+                    !(dotailchain && (lr & R_V7M_EXCRET_ES_MASK))) {
+                    v7m_push_callee_stack(cpu, lr, dotailchain);
+                }
+                lr |= R_V7M_EXCRET_DCRS_MASK;
+            }
+        }
+
+        lr &= ~R_V7M_EXCRET_ES_MASK;
+        if (targets_secure || !arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            lr |= R_V7M_EXCRET_ES_MASK;
+        }
+        lr &= ~R_V7M_EXCRET_SPSEL_MASK;
+        if (env->v7m.control[targets_secure] & R_V7M_CONTROL_SPSEL_MASK) {
+            lr |= R_V7M_EXCRET_SPSEL_MASK;
+        }
+
+        /* Clear registers if necessary to prevent non-secure exception
+         * code being able to see register values from secure code.
+         * Where register values become architecturally UNKNOWN we leave
+         * them with their previous values.
+         */
+        if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            if (!targets_secure) {
+                /* Always clear the caller-saved registers (they have been


From: Richard Henderson <richard.henderson@linaro.org>

Assert that the value to be written is the correct size.
No change in functionality here, just mirroring the same
function from kvm64.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181113180154.17903-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/kvm32.c | 41 ++++++++++++++++-------------------------
 1 file changed, 16 insertions(+), 25 deletions(-)

diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm32.c
+++ b/target/arm/kvm32.c
@@ -XXX,XX +XXX,XX @@ static inline void set_feature(uint64_t *features, int feature)
     *features |= 1ULL << feature;
 }

+static int read_sys_reg32(int fd, uint32_t *pret, uint64_t id)
+{
+    struct kvm_one_reg idreg = { .id = id, .addr = (uintptr_t)pret };
+
+    assert((id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U32);
+    return ioctl(fd, KVM_GET_ONE_REG, &idreg);
+}
+
 bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
 {
     /* Identify the feature bits corresponding to the host CPU, and
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
      * we have to create a scratch VM, create a single CPU inside it,
      * and then query that CPU for the relevant ID registers.
      */
-    int i, ret, fdarray[3];
+    int err = 0, fdarray[3];
     uint32_t midr, id_pfr0, mvfr1;
     uint64_t features = 0;
+
     /* Old kernels may not know about the PREFERRED_TARGET ioctl: however
      * we know these will only support creating one kind of guest CPU,
      * which is its preferred CPU type.
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
         QEMU_KVM_ARM_TARGET_NONE
     };
     struct kvm_vcpu_init init;
-    struct kvm_one_reg idregs[] = {
-        {
-            .id = KVM_REG_ARM | KVM_REG_SIZE_U32
-            | ENCODE_CP_REG(15, 0, 0, 0, 0, 0, 0),
-            .addr = (uintptr_t)&midr,
-        },
-        {
-            .id = KVM_REG_ARM | KVM_REG_SIZE_U32
-            | ENCODE_CP_REG(15, 0, 0, 0, 1, 0, 0),
-            .addr = (uintptr_t)&id_pfr0,
-        },
-        {
-            .id = KVM_REG_ARM | KVM_REG_SIZE_U32
-            | KVM_REG_ARM_VFP | KVM_REG_ARM_VFP_MVFR1,
-            .addr = (uintptr_t)&mvfr1,
-        },
-    };

     if (!kvm_arm_create_scratch_host_vcpu(cpus_to_try, fdarray, &init)) {
         return false;
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
      */
     ahcf->dtb_compatible = "arm,arm-v7";

-    for (i = 0; i < ARRAY_SIZE(idregs); i++) {
-        ret = ioctl(fdarray[2], KVM_GET_ONE_REG, &idregs[i]);
-        if (ret) {
-            break;
-        }
-    }
+    err |= read_sys_reg32(fdarray[2], &midr, ARM_CP15_REG32(0, 0, 0, 0));
+    err |= read_sys_reg32(fdarray[2], &id_pfr0, ARM_CP15_REG32(0, 0, 1, 0));
+    err |= read_sys_reg32(fdarray[2], &mvfr1,
+                          KVM_REG_ARM | KVM_REG_SIZE_U32 |
+                          KVM_REG_ARM_VFP | KVM_REG_ARM_VFP_MVFR1);

     kvm_arm_destroy_scratch_host_vcpu(fdarray);

-    if (ret) {
+    if (err < 0) {
         return false;
144
+ * pushed to the stack earlier in v7m_push_stack()).
145
+ * Clear callee-saved registers if the background code is
146
+ * Secure (in which case these regs were saved in
147
+ * v7m_push_callee_stack()).
148
+ */
149
+ int i;
150
+
151
+ for (i = 0; i < 13; i++) {
152
+ /* r4..r11 are callee-saves, zero only if EXCRET.S == 1 */
153
+ if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) {
154
+ env->regs[i] = 0;
155
+ }
156
+ }
157
+ /* Clear EAPSR */
158
+ xpsr_write(env, 0, XPSR_NZCV | XPSR_Q | XPSR_GE | XPSR_IT);
159
+ }
160
+ }
161
+ }
162
+
163
+ /* Switch to target security state -- must do this before writing SPSEL */
164
+ switch_v7m_security_state(env, targets_secure);
165
write_v7m_control_spsel(env, 0);
166
arm_clear_exclusive(env);
167
/* Clear IT bits */
168
env->condexec_bits = 0;
169
env->regs[14] = lr;
170
- addr = arm_v7m_load_vector(cpu);
171
+ addr = arm_v7m_load_vector(cpu, targets_secure);
172
env->regs[15] = addr & 0xfffffffe;
173
env->thumb = addr & 1;
174
}
175
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
176
if (sfault) {
177
env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
178
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
179
- v7m_exception_taken(cpu, excret);
180
+ v7m_exception_taken(cpu, excret, true);
181
qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
182
"stackframe: failed EXC_RETURN.ES validity check\n");
183
return;
184
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
185
*/
186
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
187
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
188
- v7m_exception_taken(cpu, excret);
189
+ v7m_exception_taken(cpu, excret, true);
190
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
191
"stackframe: failed exception return integrity check\n");
192
return;
193
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
194
/* Take a SecureFault on the current stack */
195
env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
196
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
197
- v7m_exception_taken(cpu, excret);
198
+ v7m_exception_taken(cpu, excret, true);
199
qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
200
"stackframe: failed exception return integrity "
201
"signature check\n");
202
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
203
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
204
env->v7m.secure);
205
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
206
- v7m_exception_taken(cpu, excret);
207
+ v7m_exception_taken(cpu, excret, true);
208
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
209
"stackframe: failed exception return integrity "
210
"check\n");
211
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
212
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
213
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
214
v7m_push_stack(cpu);
215
- v7m_exception_taken(cpu, excret);
216
+ v7m_exception_taken(cpu, excret, false);
217
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: "
218
"failed exception return integrity check\n");
219
return;
220
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
221
return; /* Never happens. Keep compiler happy. */
222
}
91
}
223
92
224
- lr = R_V7M_EXCRET_RES1_MASK |
225
- R_V7M_EXCRET_S_MASK |
226
- R_V7M_EXCRET_DCRS_MASK |
227
- R_V7M_EXCRET_FTYPE_MASK |
228
- R_V7M_EXCRET_ES_MASK;
229
- if (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) {
230
- lr |= R_V7M_EXCRET_SPSEL_MASK;
231
+ if (arm_feature(env, ARM_FEATURE_V8)) {
232
+ lr = R_V7M_EXCRET_RES1_MASK |
233
+ R_V7M_EXCRET_DCRS_MASK |
234
+ R_V7M_EXCRET_FTYPE_MASK;
235
+ /* The S bit indicates whether we should return to Secure
236
+ * or NonSecure (ie our current state).
237
+ * The ES bit indicates whether we're taking this exception
238
+ * to Secure or NonSecure (ie our target state). We set it
239
+ * later, in v7m_exception_taken().
240
+ * The SPSEL bit is also set in v7m_exception_taken() for v8M.
241
+ * This corresponds to the ARM ARM pseudocode for v8M setting
242
+ * some LR bits in PushStack() and some in ExceptionTaken();
243
+ * the distinction matters for the tailchain cases where we
244
+ * can take an exception without pushing the stack.
245
+ */
246
+ if (env->v7m.secure) {
247
+ lr |= R_V7M_EXCRET_S_MASK;
248
+ }
249
+ } else {
250
+ lr = R_V7M_EXCRET_RES1_MASK |
251
+ R_V7M_EXCRET_S_MASK |
252
+ R_V7M_EXCRET_DCRS_MASK |
253
+ R_V7M_EXCRET_FTYPE_MASK |
254
+ R_V7M_EXCRET_ES_MASK;
255
+ if (env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK) {
256
+ lr |= R_V7M_EXCRET_SPSEL_MASK;
257
+ }
258
}
259
if (!arm_v7m_is_handler_mode(env)) {
260
lr |= R_V7M_EXCRET_MODE_MASK;
261
}
262
263
v7m_push_stack(cpu);
264
- v7m_exception_taken(cpu, lr);
265
+ v7m_exception_taken(cpu, lr, false);
266
qemu_log_mask(CPU_LOG_INT, "... as %d\n", env->v7m.exception);
267
}
268
269
--
93
--
270
2.7.4
94
2.19.1
271
95
272
96
diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181113180154.17903-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/kvm32.c | 40 +++++++++++++++++++++++++++++++++++-----
 1 file changed, 35 insertions(+), 5 deletions(-)

diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm32.c
+++ b/target/arm/kvm32.c
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
 * and then query that CPU for the relevant ID registers.
 */
 int err = 0, fdarray[3];
- uint32_t midr, id_pfr0, mvfr1;
+ uint32_t midr, id_pfr0;
 uint64_t features = 0;

 /* Old kernels may not know about the PREFERRED_TARGET ioctl: however
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)

 err |= read_sys_reg32(fdarray[2], &midr, ARM_CP15_REG32(0, 0, 0, 0));
 err |= read_sys_reg32(fdarray[2], &id_pfr0, ARM_CP15_REG32(0, 0, 1, 0));
- err |= read_sys_reg32(fdarray[2], &mvfr1,
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar0,
+ ARM_CP15_REG32(0, 0, 2, 0));
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar1,
+ ARM_CP15_REG32(0, 0, 2, 1));
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar2,
+ ARM_CP15_REG32(0, 0, 2, 2));
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar3,
+ ARM_CP15_REG32(0, 0, 2, 3));
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar4,
+ ARM_CP15_REG32(0, 0, 2, 4));
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar5,
+ ARM_CP15_REG32(0, 0, 2, 5));
+ if (read_sys_reg32(fdarray[2], &ahcf->isar.id_isar6,
+ ARM_CP15_REG32(0, 0, 2, 7))) {
+ /*
+ * Older kernels don't support reading ID_ISAR6. This register was
+ * only introduced in ARMv8, so we can assume that it is zero on a
+ * CPU that a kernel this old is running on.
+ */
+ ahcf->isar.id_isar6 = 0;
+ }
+
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.mvfr0,
+ KVM_REG_ARM | KVM_REG_SIZE_U32 |
+ KVM_REG_ARM_VFP | KVM_REG_ARM_VFP_MVFR0);
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.mvfr1,
 KVM_REG_ARM | KVM_REG_SIZE_U32 |
 KVM_REG_ARM_VFP | KVM_REG_ARM_VFP_MVFR1);
+
+ /*
+ * FIXME: There is not yet a way to read MVFR2.
+ * Fortunately there is not yet anything in there that affects migration.
+ */

 kvm_arm_destroy_scratch_host_vcpu(fdarray);

@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
 if (extract32(id_pfr0, 12, 4) == 1) {
 set_feature(&features, ARM_FEATURE_THUMB2EE);
 }
- if (extract32(mvfr1, 20, 4) == 1) {
+ if (extract32(ahcf->isar.mvfr1, 20, 4) == 1) {
 set_feature(&features, ARM_FEATURE_VFP_FP16);
 }
- if (extract32(mvfr1, 12, 4) == 1) {
+ if (extract32(ahcf->isar.mvfr1, 12, 4) == 1) {
 set_feature(&features, ARM_FEATURE_NEON);
 }
- if (extract32(mvfr1, 28, 4) == 1) {
+ if (extract32(ahcf->isar.mvfr1, 28, 4) == 1) {
 /* FMAC support implies VFPv4 */
 set_feature(&features, ARM_FEATURE_VFP4);
 }
--
2.19.1

From: Thomas Huth <thuth@redhat.com>

Add entries for the boards "mcimx6ul-evk", "mcimx7d-sabre", "raspi2",
"raspi3", "sabrelite", "vexpress-a15", "vexpress-a9" and "virt".
While we're at it, also adjust the "i.MX31" section a little bit,
so that the wildcards there do not match anymore for unrelated files
(e.g. the new hw/misc/imx6ul_ccm.c file).

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-id: 1542184999-11145-1-git-send-email-thuth@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS | 70 +++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 65 insertions(+), 5 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ L: qemu-arm@nongnu.org
 S: Odd Fixes
 F: hw/arm/gumstix.c

-i.MX31
+i.MX31 (kzm)
 M: Peter Chubb <peter.chubb@nicta.com.au>
 L: qemu-arm@nongnu.org
-S: Odd fixes
-F: hw/*/imx*
-F: include/hw/*/imx*
+S: Odd Fixes
 F: hw/arm/kzm.c
-F: include/hw/arm/fsl-imx31.h
+F: hw/*/imx_*
+F: hw/*/*imx31*
+F: include/hw/*/imx_*
+F: include/hw/*/*imx31*

 Integrator CP
 M: Peter Maydell <peter.maydell@linaro.org>
@@ -XXX,XX +XXX,XX @@ S: Maintained
 F: hw/arm/integratorcp.c
 F: hw/misc/arm_integrator_debug.c

+MCIMX6UL EVK / i.MX6ul
+M: Peter Maydell <peter.maydell@linaro.org>
+R: Jean-Christophe Dubois <jcd@tribudubois.net>
+L: qemu-arm@nongnu.org
+S: Odd Fixes
+F: hw/arm/mcimx6ul-evk.c
+F: hw/arm/fsl-imx6ul.c
+F: hw/misc/imx6ul_ccm.c
+F: include/hw/arm/fsl-imx6ul.h
+F: include/hw/misc/imx6ul_ccm.h
+
+MCIMX7D SABRE / i.MX7
+M: Peter Maydell <peter.maydell@linaro.org>
+R: Andrey Smirnov <andrew.smirnov@gmail.com>
+L: qemu-arm@nongnu.org
+S: Odd Fixes
+F: hw/arm/mcimx7d-sabre.c
+F: hw/arm/fsl-imx7.c
+F: include/hw/arm/fsl-imx7.h
+F: hw/pci-host/designware.c
+F: include/hw/pci-host/designware.h
+
 MPS2
 M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
@@ -XXX,XX +XXX,XX @@ L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/arm/palm.c

+Raspberry Pi
+M: Peter Maydell <peter.maydell@linaro.org>
+R: Andrew Baumann <Andrew.Baumann@microsoft.com>
+R: Philippe Mathieu-Daudé <f4bug@amsat.org>
+L: qemu-arm@nongnu.org
+S: Odd Fixes
+F: hw/arm/raspi_platform.h
+F: hw/*/bcm283*
+F: include/hw/arm/raspi*
+F: include/hw/*/bcm283*
+
 Real View
 M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
@@ -XXX,XX +XXX,XX @@ F: hw/*/pxa2xx*
 F: hw/misc/mst_fpga.c
 F: include/hw/arm/pxa.h

+SABRELITE / i.MX6
+M: Peter Maydell <peter.maydell@linaro.org>
+R: Jean-Christophe Dubois <jcd@tribudubois.net>
+L: qemu-arm@nongnu.org
+S: Odd Fixes
+F: hw/arm/sabrelite.c
+F: hw/arm/fsl-imx6.c
+F: hw/misc/imx6_src.c
+F: hw/ssi/imx_spi.c
+F: include/hw/arm/fsl-imx6.h
+F: include/hw/misc/imx6_src.h
+F: include/hw/ssi/imx_spi.h
+
 Sharp SL-5500 (Collie) PDA
 M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
@@ -XXX,XX +XXX,XX @@ L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/*/stellaris*

+Versatile Express
+M: Peter Maydell <peter.maydell@linaro.org>
+L: qemu-arm@nongnu.org
+S: Maintained
+F: hw/arm/vexpress.c
+
 Versatile PB
 M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
@@ -XXX,XX +XXX,XX @@ S: Maintained
 F: hw/*/versatile*
 F: hw/misc/arm_sysctl.c

+Virt
+M: Peter Maydell <peter.maydell@linaro.org>
+L: qemu-arm@nongnu.org
+S: Maintained
+F: hw/arm/virt*
+F: include/hw/arm/virt.h
+
 Xilinx Zynq
 M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
 M: Alistair Francis <alistair@alistair23.me>
--
2.19.1

From: Seth Kintigh <skintigh@gmail.com>

The UART and timer devices for the stm32f205 were being created
with memory regions that were too large. Use the size specified
in the chip datasheet.

The old sizes were so large that the devices would overlap with
each other in the SoC memory map, so this fixes a bug that
caused odd behavior and/or crashes when trying to set up multiple
UARTs.

Signed-off-by: Seth Kintigh <skintigh@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: rephrased commit message to follow our usual standard]
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/char/stm32f2xx_usart.c | 2 +-
 hw/timer/stm32f2xx_timer.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/char/stm32f2xx_usart.c b/hw/char/stm32f2xx_usart.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/stm32f2xx_usart.c
+++ b/hw/char/stm32f2xx_usart.c
@@ -XXX,XX +XXX,XX @@ static void stm32f2xx_usart_init(Object *obj)
 sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);

 memory_region_init_io(&s->mmio, obj, &stm32f2xx_usart_ops, s,
- TYPE_STM32F2XX_USART, 0x2000);
+ TYPE_STM32F2XX_USART, 0x400);
 sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->mmio);
 }

diff --git a/hw/timer/stm32f2xx_timer.c b/hw/timer/stm32f2xx_timer.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/stm32f2xx_timer.c
+++ b/hw/timer/stm32f2xx_timer.c
@@ -XXX,XX +XXX,XX @@ static void stm32f2xx_timer_init(Object *obj)
 sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);

 memory_region_init_io(&s->iomem, obj, &stm32f2xx_timer_ops, s,
- "stm32f2xx_timer", 0x4000);
+ "stm32f2xx_timer", 0x400);
 sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->iomem);

 s->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, stm32f2xx_timer_interrupt, s);
--
2.19.1

From: Luc Michel <luc.michel@greensocs.com>

This commit fixes a case where the CPU would try to go to EL3 when
executing an smc instruction, even though ARM_FEATURE_EL3 is false. This
case is raised when the PSCI conduit is set to smc, but the smc
instruction does not lead to a valid PSCI call.

QEMU crashes with an assertion failure later on because of an
incoherent mmu_idx.

This commit refactors the pre_smc helper by enumerating all the possible
ways of handling an smc instruction, and covering the previously missing
case leading to the crash.

The following minimal test would crash before this commit:

.global _start
.text
_start:
ldr x0, =0xdeadbeef ; invalid PSCI call
smc #0

run with the following command line:

aarch64-linux-gnu-gcc -nostdinc -nostdlib -Wl,-Ttext=40000000 \
-o test test.s

qemu-system-aarch64 -M virt,virtualization=on,secure=off \
-cpu cortex-a57 -kernel test

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20181117160213.18995-1-luc.michel@greensocs.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/op_helper.c | 54 +++++++++++++++++++++++++++++++++++-------
 1 file changed, 46 insertions(+), 8 deletions(-)

diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(pre_smc)(CPUARMState *env, uint32_t syndrome)
 ARMCPU *cpu = arm_env_get_cpu(env);
 int cur_el = arm_current_el(env);
 bool secure = arm_is_secure(env);
- bool smd = env->cp15.scr_el3 & SCR_SMD;
+ bool smd_flag = env->cp15.scr_el3 & SCR_SMD;
+
+ /*
+ * SMC behaviour is summarized in the following table.
+ * This helper handles the "Trap to EL2" and "Undef insn" cases.
+ * The "Trap to EL3" and "PSCI call" cases are handled in the exception
+ * helper.
+ *
+ * -> ARM_FEATURE_EL3 and !SMD
+ * HCR_TSC && NS EL1 !HCR_TSC || !NS EL1
+ *
+ * Conduit SMC, valid call Trap to EL2 PSCI Call
+ * Conduit SMC, inval call Trap to EL2 Trap to EL3
+ * Conduit not SMC Trap to EL2 Trap to EL3
+ *
+ *
+ * -> ARM_FEATURE_EL3 and SMD
+ * HCR_TSC && NS EL1 !HCR_TSC || !NS EL1
+ *
+ * Conduit SMC, valid call Trap to EL2 PSCI Call
+ * Conduit SMC, inval call Trap to EL2 Undef insn
+ * Conduit not SMC Trap to EL2 Undef insn
+ *
+ *
+ * -> !ARM_FEATURE_EL3
+ * HCR_TSC && NS EL1 !HCR_TSC || !NS EL1
+ *
+ * Conduit SMC, valid call Trap to EL2 PSCI Call
+ * Conduit SMC, inval call Trap to EL2 Undef insn
+ * Conduit not SMC Undef insn Undef insn
+ */
+
 /* On ARMv8 with EL3 AArch64, SMD applies to both S and NS state.
 * On ARMv8 with EL3 AArch32, or ARMv7 with the Virtualization
 * extensions, SMD only applies to NS state.
@@ -XXX,XX +XXX,XX @@ void HELPER(pre_smc)(CPUARMState *env, uint32_t syndrome)
 * doesn't exist, but we forbid the guest to set it to 1 in scr_write(),
 * so we need not special case this here.
 */
- bool undef = arm_feature(env, ARM_FEATURE_AARCH64) ? smd : smd && !secure;
+ bool smd = arm_feature(env, ARM_FEATURE_AARCH64) ? smd_flag
+ : smd_flag && !secure;

 if (!arm_feature(env, ARM_FEATURE_EL3) &&
 cpu->psci_conduit != QEMU_PSCI_CONDUIT_SMC) {
@@ -XXX,XX +XXX,XX @@ void HELPER(pre_smc)(CPUARMState *env, uint32_t syndrome)
 * to forbid its EL1 from making PSCI calls into QEMU's
 * "firmware" via HCR.TSC, so for these purposes treat
 * PSCI-via-SMC as implying an EL3.
97
+ * This handles the very last line of the previous table.
55
+ * "firmware" via HCR.TSC, so for these purposes treat
56
+ * PSCI-via-SMC as implying an EL3.
57
*/
98
*/
58
- return;
99
- undef = true;
59
- }
100
- } else if (!secure && cur_el == 1 && (env->cp15.hcr_el2 & HCR_TSC)) {
60
-
101
+ raise_exception(env, EXCP_UDEF, syn_uncategorized(),
61
- if (!arm_feature(env, ARM_FEATURE_EL3)) {
102
+ exception_target_el(env));
62
- /* If we have no EL3 then SMC always UNDEFs */
103
+ }
63
undef = true;
104
+
64
} else if (!secure && cur_el == 1 && (env->cp15.hcr_el2 & HCR_TSC)) {
105
+ if (!secure && cur_el == 1 && (env->cp15.hcr_el2 & HCR_TSC)) {
65
- /* In NS EL1, HCR controlled routing to EL2 has priority over SMD. */
106
/* In NS EL1, HCR controlled routing to EL2 has priority over SMD.
66
+ /* In NS EL1, HCR controlled routing to EL2 has priority over SMD.
107
* We also want an EL2 guest to be able to forbid its EL1 from
67
+ * We also want an EL2 guest to be able to forbid its EL1 from
108
* making PSCI calls into QEMU's "firmware" via HCR.TSC.
68
+ * making PSCI calls into QEMU's "firmware" via HCR.TSC.
109
+ * This handles all the "Trap to EL2" cases of the previous table.
69
+ */
110
*/
70
raise_exception(env, EXCP_HYP_TRAP, syndrome, 2);
111
raise_exception(env, EXCP_HYP_TRAP, syndrome, 2);
71
}
112
}
72
113
73
- if (undef) {
114
- /* If PSCI is enabled and this looks like a valid PSCI call then
74
+ /* If PSCI is enabled and this looks like a valid PSCI call then
115
- * suppress the UNDEF -- we'll catch the SMC exception and
75
+ * suppress the UNDEF -- we'll catch the SMC exception and
116
- * implement the PSCI call behaviour there.
76
+ * implement the PSCI call behaviour there.
117
+ /* Catch the two remaining "Undef insn" cases of the previous table:
77
+ */
118
+ * - PSCI conduit is SMC but we don't have a valid PCSI call,
78
+ if (undef && !arm_is_psci_call(cpu, EXCP_SMC)) {
119
+ * - We don't have EL3 or SMD is set.
120
*/
121
- if (undef && !arm_is_psci_call(cpu, EXCP_SMC)) {
122
+ if (!arm_is_psci_call(cpu, EXCP_SMC) &&
123
+ (smd || !arm_feature(env, ARM_FEATURE_EL3))) {
79
raise_exception(env, EXCP_UDEF, syn_uncategorized(),
124
raise_exception(env, EXCP_UDEF, syn_uncategorized(),
80
exception_target_el(env));
125
exception_target_el(env));
81
}
126
}
82
--
127
--
83
2.7.4
128
2.19.1
84
129
85
130
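The routing table in the commit comment above can be encoded as a standalone decision function. This is a minimal sketch for illustration only, not QEMU's actual code: the enum and function names are invented, and the real helper raises exceptions rather than returning a value.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model of the SMC routing table (names are hypothetical). */
typedef enum { PSCI_CALL, TRAP_TO_EL2, TRAP_TO_EL3, UNDEF_INSN } SmcRoute;

static SmcRoute route_smc(bool have_el3, bool smd, bool conduit_smc,
                          bool valid_call, bool tsc_and_ns_el1)
{
    /* HCR.TSC routing to EL2 has priority; it applies whenever there is
     * a real EL3 or PSCI-via-SMC acts as an ersatz EL3.
     */
    if (tsc_and_ns_el1 && (have_el3 || conduit_smc)) {
        return TRAP_TO_EL2;
    }
    if (conduit_smc && valid_call) {
        return PSCI_CALL;
    }
    if (!have_el3) {
        return UNDEF_INSN;      /* no EL3: SMC always UNDEFs */
    }
    return smd ? UNDEF_INSN : TRAP_TO_EL3;
}
```

Evaluating the function for each row reproduces the three sub-tables in the comment.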
Deleted patch
Reset for devices does not include an automatic clear of the
device state (unlike CPU state, where most of the state
structure is cleared to zero). Add some missing initialization
of NVIC state that meant that the device was left in the wrong
state if the guest did a warm reset.

(In particular, since we were resetting the computed state like
s->exception_prio but not all the state it was computed
from like s->vectors[x].active, the NVIC wound up in an
inconsistent state that could later trigger assertion failures.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1506092407-26985-2-git-send-email-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)
     int resetprio;
     NVICState *s = NVIC(dev);

+    memset(s->vectors, 0, sizeof(s->vectors));
+    memset(s->sec_vectors, 0, sizeof(s->sec_vectors));
+    s->prigroup[M_REG_NS] = 0;
+    s->prigroup[M_REG_S] = 0;
+
     s->vectors[ARMV7M_EXCP_NMI].enabled = 1;
     /* MEM, BUS, and USAGE are enabled through
      * the System Handler Control register
--
2.7.4
Deleted patch
In the v7M architecture, there is an invariant that if the CPU is
in Handler mode then the CONTROL.SPSEL bit cannot be nonzero.
This in turn means that the current stack pointer is always
indicated by CONTROL.SPSEL, even though Handler mode always uses
the Main stack pointer.

In v8M, this invariant is removed, and CONTROL.SPSEL may now
be nonzero in Handler mode (though Handler mode still always
uses the Main stack pointer). In preparation for this change,
change how we handle this bit: rename switch_v7m_sp() to
the now more accurate write_v7m_control_spsel(), and make it
check both the handler mode state and the SPSEL bit.

Note that this implicitly changes the point at which we switch
active SP on exception exit from before we pop the exception
frame to after it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-4-git-send-email-peter.maydell@linaro.org
---
 target/arm/cpu.h      |  8 ++++++-
 hw/intc/armv7m_nvic.c |  2 +-
 target/arm/helper.c   | 65 ++++++++++++++++++++++++++++++++++-----------------
 3 files changed, 51 insertions(+), 24 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void pmccntr_sync(CPUARMState *env);
 #define PSTATE_MODE_EL1t 4
 #define PSTATE_MODE_EL0t 0

+/* Write a new value to v7m.exception, thus transitioning into or out
+ * of Handler mode; this may result in a change of active stack pointer.
+ */
+void write_v7m_exception(CPUARMState *env, uint32_t new_exc);
+
 /* Map EL and handler into a PSTATE_MODE. */
 static inline unsigned int aarch64_pstate_mode(unsigned int el, bool handler)
 {
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
         env->condexec_bits |= (val >> 8) & 0xfc;
     }
     if (mask & XPSR_EXCP) {
-        env->v7m.exception = val & XPSR_EXCP;
+        /* Note that this only happens on exception exit */
+        write_v7m_exception(env, val & XPSR_EXCP);
     }
 }

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_acknowledge_irq(void *opaque)
     vec->active = 1;
     vec->pending = 0;

-    env->v7m.exception = s->vectpending;
+    write_v7m_exception(env, s->vectpending);

     nvic_irq_update(s);

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool v7m_using_psp(CPUARMState *env)
         env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK;
 }

-/* Switch to V7M main or process stack pointer. */
-static void switch_v7m_sp(CPUARMState *env, bool new_spsel)
+/* Write to v7M CONTROL.SPSEL bit. This may change the current
+ * stack pointer between Main and Process stack pointers.
+ */
+static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel)
 {
     uint32_t tmp;
-    uint32_t old_control = env->v7m.control[env->v7m.secure];
-    bool old_spsel = old_control & R_V7M_CONTROL_SPSEL_MASK;
+    bool new_is_psp, old_is_psp = v7m_using_psp(env);
+
+    env->v7m.control[env->v7m.secure] =
+        deposit32(env->v7m.control[env->v7m.secure],
+                  R_V7M_CONTROL_SPSEL_SHIFT,
+                  R_V7M_CONTROL_SPSEL_LENGTH, new_spsel);
+
+    new_is_psp = v7m_using_psp(env);

-    if (old_spsel != new_spsel) {
+    if (old_is_psp != new_is_psp) {
         tmp = env->v7m.other_sp;
         env->v7m.other_sp = env->regs[13];
         env->regs[13] = tmp;
+    }
+}
+
+void write_v7m_exception(CPUARMState *env, uint32_t new_exc)
+{
+    /* Write a new value to v7m.exception, thus transitioning into or out
+     * of Handler mode; this may result in a change of active stack pointer.
+     */
+    bool new_is_psp, old_is_psp = v7m_using_psp(env);
+    uint32_t tmp;

-        env->v7m.control[env->v7m.secure] = deposit32(old_control,
-                                                      R_V7M_CONTROL_SPSEL_SHIFT,
-                                                      R_V7M_CONTROL_SPSEL_LENGTH, new_spsel);
+    env->v7m.exception = new_exc;
+
+    new_is_psp = v7m_using_psp(env);
+
+    if (old_is_psp != new_is_psp) {
+        tmp = env->v7m.other_sp;
+        env->v7m.other_sp = env->regs[13];
+        env->regs[13] = tmp;
     }
 }

@@ -XXX,XX +XXX,XX @@ static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode,
     bool want_psp = threadmode && spsel;

     if (secure == env->v7m.secure) {
-        /* Currently switch_v7m_sp switches SP as it updates SPSEL,
-         * so the SP we want is always in regs[13].
-         * When we decouple SPSEL from the actually selected SP
-         * we need to check want_psp against v7m_using_psp()
-         * to see whether we need regs[13] or v7m.other_sp.
-         */
-        return &env->regs[13];
+        if (want_psp == v7m_using_psp(env)) {
+            return &env->regs[13];
+        } else {
+            return &env->v7m.other_sp;
+        }
     } else {
         if (want_psp) {
             return &env->v7m.other_ss_psp;
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr)
     uint32_t addr;

     armv7m_nvic_acknowledge_irq(env->nvic);
-    switch_v7m_sp(env, 0);
+    write_v7m_control_spsel(env, 0);
     arm_clear_exclusive(env);
     /* Clear IT bits */
     env->condexec_bits = 0;
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         return;
     }

-    /* Set CONTROL.SPSEL from excret.SPSEL. For QEMU this currently
-     * causes us to switch the active SP, but we will change this
-     * later to not do that so we can support v8M.
+    /* Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
+     * Handler mode (and will be until we write the new XPSR.Interrupt
+     * field) this does not switch around the current stack pointer.
      */
-    switch_v7m_sp(env, return_to_sp_process);
+    write_v7m_control_spsel(env, return_to_sp_process);

     {
         /* The stack pointer we should be reading the exception frame from
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
     case 20: /* CONTROL */
         /* Writing to the SPSEL bit only has an effect if we are in
          * thread mode; other bits can be updated by any privileged code.
-         * switch_v7m_sp() deals with updating the SPSEL bit in
+         * write_v7m_control_spsel() deals with updating the SPSEL bit in
          * env->v7m.control, so we only need update the others.
          */
         if (!arm_v7m_is_handler_mode(env)) {
-            switch_v7m_sp(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
+            write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
         }
         env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
         env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
--
2.7.4
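The SPSEL update in the patch above relies on QEMU's deposit32() from qemu/bitops.h to replace one bitfield of the CONTROL word. A minimal stand-alone equivalent, for readers without the QEMU tree at hand (this is a sketch of the semantics, not the QEMU source):

```c
#include <stdint.h>

/* Minimal stand-in for QEMU's deposit32(): return 'value' with the
 * 'length'-bit field starting at bit 'start' replaced by the low
 * bits of 'fieldval'. Assumes 1 <= length <= 32.
 */
static uint32_t deposit32(uint32_t value, int start, int length,
                          uint32_t fieldval)
{
    uint32_t mask = (~0U >> (32 - length)) << start;
    return (value & ~mask) | ((fieldval << start) & mask);
}
```

With SPSEL at bit 1 with length 1, `deposit32(control, 1, 1, new_spsel)` updates only that bit, which is why the patch can then compare `v7m_using_psp()` before and after to decide whether to swap regs[13] with other_sp.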
Deleted patch
Now that we can handle the CONTROL.SPSEL bit not necessarily being
in sync with the current stack pointer, we can restore the correct
security state on exception return. This happens before we start
to read registers off the stack frame, but after we have taken
possible usage faults for bad exception return magic values and
updated CONTROL.SPSEL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-5-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
      */
     write_v7m_control_spsel(env, return_to_sp_process);

+    switch_v7m_security_state(env, return_to_secure);
+
     {
         /* The stack pointer we should be reading the exception frame from
          * depends on bits in the magic exception return type value (and
--
2.7.4
When we added support for the new SHCSR bits in v8M in commit
437d59c17e9 the code to support writing to the new HARDFAULTPENDED
bit was accidentally only added for non-secure writes; the
secure banked version of the bit should also be writable.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-21-git-send-email-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
         s->sec_vectors[ARMV7M_EXCP_BUS].enabled = (value & (1 << 17)) != 0;
         s->sec_vectors[ARMV7M_EXCP_USAGE].enabled =
             (value & (1 << 18)) != 0;
+        s->sec_vectors[ARMV7M_EXCP_HARD].pending = (value & (1 << 21)) != 0;
         /* SecureFault not banked, but RAZ/WI to NS */
         s->vectors[ARMV7M_EXCP_SECURE].active = (value & (1 << 4)) != 0;
         s->vectors[ARMV7M_EXCP_SECURE].enabled = (value & (1 << 19)) != 0;
--
2.7.4

An off-by-one error in a switch case in onenand_read() allowed
a misbehaving guest to read off the end of a block of memory.

NB: the onenand device is used only by the "n800" and "n810"
machines, which are usable only with TCG, not KVM, so this is
not a security issue.

Reported-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181115143535.5885-2-peter.maydell@linaro.org
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/block/onenand.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/block/onenand.c b/hw/block/onenand.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/onenand.c
+++ b/hw/block/onenand.c
@@ -XXX,XX +XXX,XX @@ static uint64_t onenand_read(void *opaque, hwaddr addr,
     int offset = addr >> s->shift;

     switch (offset) {
-    case 0x0000 ... 0xc000:
+    case 0x0000 ... 0xbffe:
        return lduw_le_p(s->boot[0] + addr);

    case 0xf000:    /* Manufacturer ID */
--
2.19.1
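The onenand off-by-one is easy to reproduce in isolation: with GCC's inclusive case ranges, a 16-bit read touches two bytes, so the last valid starting offset into a region is its size minus 2. A minimal sketch, not the QEMU code (the real switch is over `addr >> s->shift`; the constant here just mirrors the patch):

```c
#include <stdbool.h>
#include <stdint.h>

#define BOOT_SIZE 0xc000u  /* illustrative region size in bytes */

/* A 16-bit load at 'addr' reads bytes addr and addr + 1, so the last
 * in-bounds start is BOOT_SIZE - 2 == 0xbffe. An inclusive case range
 * "case 0x0000 ... 0xc000:" also accepts addr == 0xc000, which reads
 * past the end of the region -- exactly the bug the patch fixes.
 */
static bool read16_in_bounds(uint32_t addr)
{
    return addr <= BOOT_SIZE - 2;
}
```

The fixed upper bound `0xbffe` in the patch is this `BOOT_SIZE - 2` written out as a literal.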
On exception return for v8M, the SPSEL bit in the EXC_RETURN magic
value should be restored to the SPSEL bit in the CONTROL register
banked specified by the EXC_RETURN.ES bit.

Add write_v7m_control_spsel_for_secstate() which behaves like
write_v7m_control_spsel() but allows the caller to specify which
CONTROL bank to use, reimplement write_v7m_control_spsel() in
terms of it, and use it in exception return.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-6-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 40 +++++++++++++++++++++++++++-------------
 1 file changed, 27 insertions(+), 13 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool v7m_using_psp(CPUARMState *env)
         env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK;
 }

-/* Write to v7M CONTROL.SPSEL bit. This may change the current
- * stack pointer between Main and Process stack pointers.
+/* Write to v7M CONTROL.SPSEL bit for the specified security bank.
+ * This may change the current stack pointer between Main and Process
+ * stack pointers if it is done for the CONTROL register for the current
+ * security state.
  */
-static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel)
+static void write_v7m_control_spsel_for_secstate(CPUARMState *env,
+                                                 bool new_spsel,
+                                                 bool secstate)
 {
-    uint32_t tmp;
-    bool new_is_psp, old_is_psp = v7m_using_psp(env);
+    bool old_is_psp = v7m_using_psp(env);

-    env->v7m.control[env->v7m.secure] =
-        deposit32(env->v7m.control[env->v7m.secure],
+    env->v7m.control[secstate] =
+        deposit32(env->v7m.control[secstate],
                   R_V7M_CONTROL_SPSEL_SHIFT,
                   R_V7M_CONTROL_SPSEL_LENGTH, new_spsel);

-    new_is_psp = v7m_using_psp(env);
+    if (secstate == env->v7m.secure) {
+        bool new_is_psp = v7m_using_psp(env);
+        uint32_t tmp;

-    if (old_is_psp != new_is_psp) {
-        tmp = env->v7m.other_sp;
-        env->v7m.other_sp = env->regs[13];
-        env->regs[13] = tmp;
+        if (old_is_psp != new_is_psp) {
+            tmp = env->v7m.other_sp;
+            env->v7m.other_sp = env->regs[13];
+            env->regs[13] = tmp;
+        }
     }
 }

+/* Write to v7M CONTROL.SPSEL bit. This may change the current
+ * stack pointer between Main and Process stack pointers.
+ */
+static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel)
+{
+    write_v7m_control_spsel_for_secstate(env, new_spsel, env->v7m.secure);
+}
+
 void write_v7m_exception(CPUARMState *env, uint32_t new_exc)
 {
     /* Write a new value to v7m.exception, thus transitioning into or out
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
      * Handler mode (and will be until we write the new XPSR.Interrupt
      * field) this does not switch around the current stack pointer.
      */
-    write_v7m_control_spsel(env, return_to_sp_process);
+    write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);

     switch_v7m_security_state(env, return_to_secure);

--
2.7.4

Update the onenand device to use qemu_log_mask() for reporting
guest errors and unimplemented features, rather than plain
fprintf() and hw_error().

(We leave the hw_error() in onenand_reset(), as that is
triggered by a failure to read the underlying block device
for the bootRAM, not by guest action.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181115143535.5885-3-peter.maydell@linaro.org
---
 hw/block/onenand.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/hw/block/onenand.c b/hw/block/onenand.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/onenand.c
+++ b/hw/block/onenand.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/memory.h"
 #include "hw/sysbus.h"
 #include "qemu/error-report.h"
+#include "qemu/log.h"

 /* 11 for 2kB-page OneNAND ("2nd generation") and 10 for 1kB-page chips */
 #define PAGE_SHIFT    11
@@ -XXX,XX +XXX,XX @@ static void onenand_command(OneNANDState *s)
     default:
         s->status |= ONEN_ERR_CMD;
         s->intstatus |= ONEN_INT;
-        fprintf(stderr, "%s: unknown OneNAND command %x\n",
-                        __func__, s->command);
+        qemu_log_mask(LOG_GUEST_ERROR, "unknown OneNAND command %x\n",
+                      s->command);
     }

     onenand_intr_update(s);
@@ -XXX,XX +XXX,XX @@ static uint64_t onenand_read(void *opaque, hwaddr addr,
     case 0xff02:    /* ECC Result of spare area data */
     case 0xff03:    /* ECC Result of main area data */
     case 0xff04:    /* ECC Result of spare area data */
-        hw_error("%s: implement ECC\n", __func__);
+        qemu_log_mask(LOG_UNIMP,
+                      "onenand: ECC result registers unimplemented\n");
         return 0x0000;
     }

-    fprintf(stderr, "%s: unknown OneNAND register %x\n",
-                    __func__, offset);
+    qemu_log_mask(LOG_GUEST_ERROR, "read of unknown OneNAND register 0x%x\n",
+                  offset);
     return 0;
 }

@@ -XXX,XX +XXX,XX @@ static void onenand_write(void *opaque, hwaddr addr,
         break;

     default:
-        fprintf(stderr, "%s: unknown OneNAND boot command %"PRIx64"\n",
-                        __func__, value);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "unknown OneNAND boot command %" PRIx64 "\n",
+                      value);
     }
     break;

@@ -XXX,XX +XXX,XX @@ static void onenand_write(void *opaque, hwaddr addr,
         break;

     default:
-        fprintf(stderr, "%s: unknown OneNAND register %x\n",
-                        __func__, offset);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "write to unknown OneNAND register 0x%x\n",
+                      offset);
     }
 }
--
2.19.1
Deleted patch
ARM v8M specifies that the INVPC usage fault for mismatched
xPSR exception field and handler mode bit should be checked
before updating the PSR and SP, so that the fault is taken
with the existing stack frame rather than by pushing a new one.
Perform this check in the right place for v8M.

Since v7M specifies in its pseudocode that this usage fault
check should happen later, we have to retain the original
code for that check rather than being able to merge the two.
(The distinction is architecturally visible but only in
very obscure corner cases like attempting an invalid exception
return with an exception frame in read only memory.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-7-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 30 +++++++++++++++++++++++++++---
 1 file changed, 27 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
     }
     xpsr = ldl_phys(cs->as, frameptr + 0x1c);

+    if (arm_feature(env, ARM_FEATURE_V8)) {
+        /* For v8M we have to check whether the xPSR exception field
+         * matches the EXCRET value for return to handler/thread
+         * before we commit to changing the SP and xPSR.
+         */
+        bool will_be_handler = (xpsr & XPSR_EXCP) != 0;
+        if (return_to_handler != will_be_handler) {
+            /* Take an INVPC UsageFault on the current stack.
+             * By this point we will have switched to the security state
+             * for the background state, so this UsageFault will target
+             * that state.
+             */
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+                                    env->v7m.secure);
+            env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
+            v7m_exception_taken(cpu, excret);
+            qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
+                          "stackframe: failed exception return integrity "
+                          "check\n");
+            return;
+        }
+    }
+
     /* Commit to consuming the stack frame */
     frameptr += 0x20;
     /* Undo stack alignment (the SPREALIGN bit indicates that the original
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
     /* The restored xPSR exception field will be zero if we're
      * resuming in Thread mode. If that doesn't match what the
      * exception return excret specified then this is a UsageFault.
+     * v7M requires we make this check here; v8M did it earlier.
      */
     if (return_to_handler != arm_v7m_is_handler_mode(env)) {
-        /* Take an INVPC UsageFault by pushing the stack again.
-         * TODO: the v8M version of this code should target the
-         * background state for this exception.
+        /* Take an INVPC UsageFault by pushing the stack again;
+         * we know we're v7M so this is never a Secure UsageFault.
          */
+        assert(!arm_feature(env, ARM_FEATURE_V8));
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
         env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
         v7m_push_stack(cpu);
--
2.7.4
Deleted patch
Attempting to do an exception return with an exception frame that
is not 8-aligned is UNPREDICTABLE in v8M; warn about this.
(It is not UNPREDICTABLE in v7M, and our implementation can
handle the merely-4-aligned case fine, so we don't need to
do anything except warn.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-8-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
                                           return_to_sp_process);
         uint32_t frameptr = *frame_sp_p;

+        if (!QEMU_IS_ALIGNED(frameptr, 8) &&
+            arm_feature(env, ARM_FEATURE_V8)) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "M profile exception return with non-8-aligned SP "
+                          "for destination state is UNPREDICTABLE\n");
+        }
+
         /* Pop registers. TODO: make these accesses use the correct
          * attributes and address space (S/NS, priv/unpriv) and handle
          * memory transaction failures.
--
2.7.4
Deleted patch
In the v8M architecture, return from an exception to a PC which
has bit 0 set is not UNPREDICTABLE; it is defined that bit 0
is discarded [R_HRJH]. Restrict our complaint about this to v7M.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-9-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         env->regs[12] = ldl_phys(cs->as, frameptr + 0x10);
         env->regs[14] = ldl_phys(cs->as, frameptr + 0x14);
         env->regs[15] = ldl_phys(cs->as, frameptr + 0x18);
+
+        /* Returning from an exception with a PC with bit 0 set is defined
+         * behaviour on v8M (bit 0 is ignored), but for v7M it was specified
+         * to be UNPREDICTABLE. In practice actual v7M hardware seems to ignore
+         * the lsbit, and there are several RTOSes out there which incorrectly
+         * assume the r15 in the stack frame should be a Thumb-style "lsbit
+         * indicates ARM/Thumb" value, so ignore the bit on v7M as well, but
+         * complain about the badly behaved guest.
+         */
         if (env->regs[15] & 1) {
-            qemu_log_mask(LOG_GUEST_ERROR,
-                          "M profile return from interrupt with misaligned "
-                          "PC is UNPREDICTABLE\n");
-            /* Actual hardware seems to ignore the lsbit, and there are several
-             * RTOSes out there which incorrectly assume the r15 in the stack
-             * frame should be a Thumb-style "lsbit indicates ARM/Thumb" value.
-             */
             env->regs[15] &= ~1U;
+            if (!arm_feature(env, ARM_FEATURE_V8)) {
+                qemu_log_mask(LOG_GUEST_ERROR,
+                              "M profile return from interrupt with misaligned "
+                              "PC is UNPREDICTABLE on v7M\n");
+            }
         }
+
         xpsr = ldl_phys(cs->as, frameptr + 0x1c);

         if (arm_feature(env, ARM_FEATURE_V8)) {
--
2.7.4
Deleted patch
Add the new M profile Secure Fault Status Register
and Secure Fault Address Register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-10-git-send-email-peter.maydell@linaro.org
---
 target/arm/cpu.h      | 12 ++++++++++++
 hw/intc/armv7m_nvic.c | 34 ++++++++++++++++++++++++++++++++++
 target/arm/machine.c  |  2 ++
 3 files changed, 48 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
         uint32_t cfsr[M_REG_NUM_BANKS]; /* Configurable Fault Status */
         uint32_t hfsr; /* HardFault Status */
         uint32_t dfsr; /* Debug Fault Status Register */
+        uint32_t sfsr; /* Secure Fault Status Register */
         uint32_t mmfar[M_REG_NUM_BANKS]; /* MemManage Fault Address */
         uint32_t bfar; /* BusFault Address */
+        uint32_t sfar; /* Secure Fault Address Register */
         unsigned mpu_ctrl[M_REG_NUM_BANKS]; /* MPU_CTRL */
         int exception;
         uint32_t primask[M_REG_NUM_BANKS];
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_DFSR, DWTTRAP, 2, 1)
 FIELD(V7M_DFSR, VCATCH, 3, 1)
 FIELD(V7M_DFSR, EXTERNAL, 4, 1)

+/* V7M SFSR bits */
+FIELD(V7M_SFSR, INVEP, 0, 1)
+FIELD(V7M_SFSR, INVIS, 1, 1)
+FIELD(V7M_SFSR, INVER, 2, 1)
+FIELD(V7M_SFSR, AUVIOL, 3, 1)
+FIELD(V7M_SFSR, INVTRAN, 4, 1)
+FIELD(V7M_SFSR, LSPERR, 5, 1)
+FIELD(V7M_SFSR, SFARVALID, 6, 1)
+FIELD(V7M_SFSR, LSERR, 7, 1)
+
 /* v7M MPU_CTRL bits */
 FIELD(V7M_MPU_CTRL, ENABLE, 0, 1)
 FIELD(V7M_MPU_CTRL, HFNMIENA, 1, 1)
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
             goto bad_offset;
         }
         return cpu->env.pmsav8.mair1[attrs.secure];
+    case 0xde4: /* SFSR */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return 0;
+        }
+        return cpu->env.v7m.sfsr;
+    case 0xde8: /* SFAR */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return 0;
+        }
+        return cpu->env.v7m.sfar;
     default:
     bad_offset:
         qemu_log_mask(LOG_GUEST_ERROR, "NVIC: Bad read offset 0x%x\n", offset);
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
          * only affect cacheability, and we don't implement caching.
          */
         break;
+    case 0xde4: /* SFSR */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return;
+        }
+        cpu->env.v7m.sfsr &= ~value; /* W1C */
+        break;
+    case 0xde8: /* SFAR */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return;
+        }
+        cpu->env.v7m.sfar = value;
+        break;
     case 0xf00: /* Software Triggered Interrupt Register */
     {
         int excnum = (value & 0x1ff) + NVIC_FIRST_IRQ;
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_security = {
         VMSTATE_UINT32(env.v7m.ccr[M_REG_S], ARMCPU),
         VMSTATE_UINT32(env.v7m.mmfar[M_REG_S], ARMCPU),
         VMSTATE_UINT32(env.v7m.cfsr[M_REG_S], ARMCPU),
+        VMSTATE_UINT32(env.v7m.sfsr, ARMCPU),
+        VMSTATE_UINT32(env.v7m.sfar, ARMCPU),
        VMSTATE_END_OF_LIST()
    }
 };
--
2.7.4
diff view generated by jsdifflib
Deleted patch

In v8M, more bits are defined in the exception-return magic
values; update the code that checks these so we accept
the v8M values when the CPU permits them.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-11-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 73 ++++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 58 insertions(+), 15 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
     uint32_t excret;
     uint32_t xpsr;
     bool ufault = false;
-    bool return_to_sp_process = false;
-    bool return_to_handler = false;
+    bool sfault = false;
+    bool return_to_sp_process;
+    bool return_to_handler;
     bool rettobase = false;
     bool exc_secure = false;
     bool return_to_secure;
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
                       excret);
     }
 
+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        /* EXC_RETURN.ES validation check (R_SMFL). We must do this before
+         * we pick which FAULTMASK to clear.
+         */
+        if (!env->v7m.secure &&
+            ((excret & R_V7M_EXCRET_ES_MASK) ||
+             !(excret & R_V7M_EXCRET_DCRS_MASK))) {
+            sfault = 1;
+            /* For all other purposes, treat ES as 0 (R_HXSR) */
+            excret &= ~R_V7M_EXCRET_ES_MASK;
+        }
+    }
+
     if (env->v7m.exception != ARMV7M_EXCP_NMI) {
         /* Auto-clear FAULTMASK on return from other than NMI.
          * If the security extension is implemented then this only
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         g_assert_not_reached();
     }
 
+    return_to_handler = !(excret & R_V7M_EXCRET_MODE_MASK);
+    return_to_sp_process = excret & R_V7M_EXCRET_SPSEL_MASK;
     return_to_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
         (excret & R_V7M_EXCRET_S_MASK);
 
-    switch (excret & 0xf) {
-    case 1: /* Return to Handler */
-        return_to_handler = true;
-        break;
-    case 13: /* Return to Thread using Process stack */
-        return_to_sp_process = true;
-        /* fall through */
-    case 9: /* Return to Thread using Main stack */
-        if (!rettobase &&
-            !(env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_NONBASETHRDENA_MASK)) {
+    if (arm_feature(env, ARM_FEATURE_V8)) {
+        if (!arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            /* UNPREDICTABLE if S == 1 or DCRS == 0 or ES == 1 (R_XLCP);
+             * we choose to take the UsageFault.
+             */
+            if ((excret & R_V7M_EXCRET_S_MASK) ||
+                (excret & R_V7M_EXCRET_ES_MASK) ||
+                !(excret & R_V7M_EXCRET_DCRS_MASK)) {
+                ufault = true;
+            }
+        }
+        if (excret & R_V7M_EXCRET_RES0_MASK) {
             ufault = true;
         }
-        break;
-    default:
-        ufault = true;
+    } else {
+        /* For v7M we only recognize certain combinations of the low bits */
+        switch (excret & 0xf) {
+        case 1: /* Return to Handler */
+            break;
+        case 13: /* Return to Thread using Process stack */
+        case 9: /* Return to Thread using Main stack */
+            /* We only need to check NONBASETHRDENA for v7M, because in
+             * v8M this bit does not exist (it is RES1).
+             */
+            if (!rettobase &&
+                !(env->v7m.ccr[env->v7m.secure] &
+                  R_V7M_CCR_NONBASETHRDENA_MASK)) {
+                ufault = true;
+            }
+            break;
+        default:
+            ufault = true;
+        }
+    }
+
+    if (sfault) {
+        env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+        v7m_exception_taken(cpu, excret);
+        qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
+                      "stackframe: failed EXC_RETURN.ES validity check\n");
+        return;
     }
 
     if (ufault) {
-- 
2.7.4
Deleted patch

For v8M, exceptions from Secure to Non-Secure state will save
callee-saved registers to the exception frame as well as the
caller-saved registers. Add support for unstacking these
registers in exception exit when necessary.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-12-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
                       "for destination state is UNPREDICTABLE\n");
     }
 
+    /* Do we need to pop callee-saved registers? */
+    if (return_to_secure &&
+        ((excret & R_V7M_EXCRET_ES_MASK) == 0 ||
+         (excret & R_V7M_EXCRET_DCRS_MASK) == 0)) {
+        uint32_t expected_sig = 0xfefa125b;
+        uint32_t actual_sig = ldl_phys(cs->as, frameptr);
+
+        if (expected_sig != actual_sig) {
+            /* Take a SecureFault on the current stack */
+            env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+            v7m_exception_taken(cpu, excret);
+            qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
+                          "stackframe: failed exception return integrity "
+                          "signature check\n");
+            return;
+        }
+
+        env->regs[4] = ldl_phys(cs->as, frameptr + 0x8);
+        env->regs[5] = ldl_phys(cs->as, frameptr + 0xc);
+        env->regs[6] = ldl_phys(cs->as, frameptr + 0x10);
+        env->regs[7] = ldl_phys(cs->as, frameptr + 0x14);
+        env->regs[8] = ldl_phys(cs->as, frameptr + 0x18);
+        env->regs[9] = ldl_phys(cs->as, frameptr + 0x1c);
+        env->regs[10] = ldl_phys(cs->as, frameptr + 0x20);
+        env->regs[11] = ldl_phys(cs->as, frameptr + 0x24);
+
+        frameptr += 0x28;
+    }
+
     /* Pop registers. TODO: make these accesses use the correct
      * attributes and address space (S/NS, priv/unpriv) and handle
      * memory transaction failures.
-- 
2.7.4
Deleted patch

Implement the register interface for the SAU: SAU_CTRL,
SAU_TYPE, SAU_RNR, SAU_RBAR and SAU_RLAR. None of the
actual behaviour is implemented here; registers just
read back as written.

When the CPU definition for Cortex-M33 is eventually
added, its initfn will set cpu->sau_sregion, in the same
way that we currently set cpu->pmsav7_dregion for the
M3 and M4.

The number of SAU regions is typically a configurable
CPU parameter, but this patch doesn't provide a
QEMU CPU property for it. We can easily add one when
we have a board that requires it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-14-git-send-email-peter.maydell@linaro.org
---
 target/arm/cpu.h      |  10 +++++
 hw/intc/armv7m_nvic.c | 117 ++++++++++++++++++++++++++++++++++++++++++++++++++
 target/arm/cpu.c      |  27 ++++++++++++
 target/arm/machine.c  |  14 ++++++
 4 files changed, 168 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
         uint32_t mair1[M_REG_NUM_BANKS];
     } pmsav8;
 
+    /* v8M SAU */
+    struct {
+        uint32_t *rbar;
+        uint32_t *rlar;
+        uint32_t rnr;
+        uint32_t ctrl;
+    } sau;
+
     void *nvic;
     const struct arm_boot_info *boot_info;
     /* Store GICv3CPUState to access from this struct */
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
     bool has_mpu;
     /* PMSAv7 MPU number of supported regions */
    uint32_t pmsav7_dregion;
+    /* v8M SAU number of supported regions */
+    uint32_t sau_sregion;
 
     /* PSCI conduit used to invoke PSCI methods
      * 0 - disabled, 1 - smc, 2 - hvc
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
             goto bad_offset;
         }
         return cpu->env.pmsav8.mair1[attrs.secure];
+    case 0xdd0: /* SAU_CTRL */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return 0;
+        }
+        return cpu->env.sau.ctrl;
+    case 0xdd4: /* SAU_TYPE */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return 0;
+        }
+        return cpu->sau_sregion;
+    case 0xdd8: /* SAU_RNR */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return 0;
+        }
+        return cpu->env.sau.rnr;
+    case 0xddc: /* SAU_RBAR */
+    {
+        int region = cpu->env.sau.rnr;
+
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return 0;
+        }
+        if (region >= cpu->sau_sregion) {
+            return 0;
+        }
+        return cpu->env.sau.rbar[region];
+    }
+    case 0xde0: /* SAU_RLAR */
+    {
+        int region = cpu->env.sau.rnr;
+
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return 0;
+        }
+        if (region >= cpu->sau_sregion) {
+            return 0;
+        }
+        return cpu->env.sau.rlar[region];
+    }
     case 0xde4: /* SFSR */
         if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
             goto bad_offset;
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
          * only affect cacheability, and we don't implement caching.
          */
         break;
+    case 0xdd0: /* SAU_CTRL */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return;
+        }
+        cpu->env.sau.ctrl = value & 3;
+        break;
+    case 0xdd4: /* SAU_TYPE */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        break;
+    case 0xdd8: /* SAU_RNR */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return;
+        }
+        if (value >= cpu->sau_sregion) {
+            qemu_log_mask(LOG_GUEST_ERROR, "SAU region out of range %"
+                          PRIu32 "/%" PRIu32 "\n",
+                          value, cpu->sau_sregion);
+        } else {
+            cpu->env.sau.rnr = value;
+        }
+        break;
+    case 0xddc: /* SAU_RBAR */
+    {
+        int region = cpu->env.sau.rnr;
+
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return;
+        }
+        if (region >= cpu->sau_sregion) {
+            return;
+        }
+        cpu->env.sau.rbar[region] = value & ~0x1f;
+        tlb_flush(CPU(cpu));
+        break;
+    }
+    case 0xde0: /* SAU_RLAR */
+    {
+        int region = cpu->env.sau.rnr;
+
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+            goto bad_offset;
+        }
+        if (!attrs.secure) {
+            return;
+        }
+        if (region >= cpu->sau_sregion) {
+            return;
+        }
+        cpu->env.sau.rlar[region] = value & ~0x1c;
+        tlb_flush(CPU(cpu));
+        break;
+    }
     case 0xde4: /* SFSR */
         if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
             goto bad_offset;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
         env->pmsav8.mair1[M_REG_S] = 0;
     }
 
+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        if (cpu->sau_sregion > 0) {
+            memset(env->sau.rbar, 0, sizeof(*env->sau.rbar) * cpu->sau_sregion);
+            memset(env->sau.rlar, 0, sizeof(*env->sau.rlar) * cpu->sau_sregion);
+        }
+        env->sau.rnr = 0;
+        /* SAU_CTRL reset value is IMPDEF; we choose 0, which is what
+         * the Cortex-M33 does.
+         */
+        env->sau.ctrl = 0;
+    }
+
     set_flush_to_zero(1, &env->vfp.standard_fp_status);
     set_flush_inputs_to_zero(1, &env->vfp.standard_fp_status);
     set_default_nan_mode(1, &env->vfp.standard_fp_status);
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
         }
     }
 
+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        uint32_t nr = cpu->sau_sregion;
+
+        if (nr > 0xff) {
+            error_setg(errp, "v8M SAU #regions invalid %" PRIu32, nr);
+            return;
+        }
+
+        if (nr) {
+            env->sau.rbar = g_new0(uint32_t, nr);
+            env->sau.rlar = g_new0(uint32_t, nr);
+        }
+    }
+
     if (arm_feature(env, ARM_FEATURE_EL3)) {
         set_feature(env, ARM_FEATURE_VBAR);
     }
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
     cpu->midr = 0x410fc240; /* r0p0 */
     cpu->pmsav7_dregion = 8;
 }
+
 static void arm_v7m_class_init(ObjectClass *oc, void *data)
 {
     CPUClass *cc = CPU_CLASS(oc);
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static bool s_rnr_vmstate_validate(void *opaque, int version_id)
     return cpu->env.pmsav7.rnr[M_REG_S] < cpu->pmsav7_dregion;
 }
 
+static bool sau_rnr_vmstate_validate(void *opaque, int version_id)
+{
+    ARMCPU *cpu = opaque;
+
+    return cpu->env.sau.rnr < cpu->sau_sregion;
+}
+
 static bool m_security_needed(void *opaque)
 {
     ARMCPU *cpu = opaque;
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_security = {
         VMSTATE_UINT32(env.v7m.cfsr[M_REG_S], ARMCPU),
         VMSTATE_UINT32(env.v7m.sfsr, ARMCPU),
         VMSTATE_UINT32(env.v7m.sfar, ARMCPU),
+        VMSTATE_VARRAY_UINT32(env.sau.rbar, ARMCPU, sau_sregion, 0,
+                              vmstate_info_uint32, uint32_t),
+        VMSTATE_VARRAY_UINT32(env.sau.rlar, ARMCPU, sau_sregion, 0,
+                              vmstate_info_uint32, uint32_t),
+        VMSTATE_UINT32(env.sau.rnr, ARMCPU),
+        VMSTATE_VALIDATE("SAU_RNR is valid", sau_rnr_vmstate_validate),
+        VMSTATE_UINT32(env.sau.ctrl, ARMCPU),
         VMSTATE_END_OF_LIST()
     }
 };
-- 
2.7.4
Deleted patch

In cpu_mmu_index() we try to do this:
    if (env->v7m.secure) {
        mmu_idx += ARMMMUIdx_MSUser;
    }
but it will give the wrong answer, because ARMMMUIdx_MSUser
includes the 0x40 ARM_MMU_IDX_M field, and so does the
mmu_idx we're adding to, and we'll end up with 0x8n rather
than 0x4n. This error is then nullified by the call to
arm_to_core_mmu_idx() which masks out the high part, but
we're about to factor out the code that calculates the
ARMMMUIdx values so it can be used without passing it through
arm_to_core_mmu_idx(), so fix this bug first.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-16-git-send-email-peter.maydell@linaro.org
---
 target/arm/cpu.h | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline int cpu_mmu_index(CPUARMState *env, bool ifetch)
     int el = arm_current_el(env);
 
     if (arm_feature(env, ARM_FEATURE_M)) {
-        ARMMMUIdx mmu_idx = el == 0 ? ARMMMUIdx_MUser : ARMMMUIdx_MPriv;
+        ARMMMUIdx mmu_idx;
 
-        if (armv7m_nvic_neg_prio_requested(env->nvic, env->v7m.secure)) {
-            mmu_idx = ARMMMUIdx_MNegPri;
+        if (el == 0) {
+            mmu_idx = env->v7m.secure ? ARMMMUIdx_MSUser : ARMMMUIdx_MUser;
+        } else {
+            mmu_idx = env->v7m.secure ? ARMMMUIdx_MSPriv : ARMMMUIdx_MPriv;
         }
 
-        if (env->v7m.secure) {
-            mmu_idx += ARMMMUIdx_MSUser;
+        if (armv7m_nvic_neg_prio_requested(env->nvic, env->v7m.secure)) {
+            mmu_idx = env->v7m.secure ? ARMMMUIdx_MSNegPri : ARMMMUIdx_MNegPri;
         }
 
         return arm_to_core_mmu_idx(mmu_idx);
-- 
2.7.4

In practice for most of the more-or-less orphan Arm board models,
I will review patches and put them in via the target-arm tree.
So list myself as an "Odd Fixes" status maintainer for them.

This commit downgrades these boards to "Odd Fixes":
 * Allwinner-A10
 * Exynos
 * Calxeda Highbank
 * Canon DIGIC
 * Musicpal
 * nSeries
 * Palm
 * PXA2xx

These boards were already "Odd Fixes":
 * Gumstix
 * i.MX31 (kzm)

Philippe Mathieu-Daudé has requested to be moved to R:
status for Gumstix now that I am listed as the M: contact.

Some boards are maintained, but their patches still go
via the target-arm tree, so add myself as a secondary
maintainer contact for those:
 * Xilinx Zynq
 * Xilinx ZynqMP
 * STM32F205
 * Netduino 2
 * SmartFusion2
 * Emcraft M2S-FG484
 * ASPEED BMCs
 * NRF51

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20181108134139.31666-1-peter.maydell@linaro.org
---
 MAINTAINERS | 36 +++++++++++++++++++++++++++---------
 1 file changed, 27 insertions(+), 9 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ ARM Machines
 ------------
 Allwinner-a10
 M: Beniamino Galvani <b.galvani@gmail.com>
+M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
 F: hw/*/allwinner*
 F: include/hw/*/allwinner*
 F: hw/arm/cubieboard.c
@@ -XXX,XX +XXX,XX @@ F: tests/test-arm-mptimer.c
 
 Exynos
 M: Igor Mitsyanko <i.mitsyanko@gmail.com>
+M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
 F: hw/*/exynos*
 F: include/hw/arm/exynos4210.h
 
 Calxeda Highbank
 M: Rob Herring <robh@kernel.org>
+M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
 F: hw/arm/highbank.c
 F: hw/net/xgmac.c
 
 Canon DIGIC
 M: Antony Pavlov <antonynpavlov@gmail.com>
+M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
 F: include/hw/arm/digic.h
 F: hw/*/digic*
 
 Gumstix
-M: Philippe Mathieu-Daudé <f4bug@amsat.org>
+M: Peter Maydell <peter.maydell@linaro.org>
+R: Philippe Mathieu-Daudé <f4bug@amsat.org>
 L: qemu-devel@nongnu.org
 L: qemu-arm@nongnu.org
 S: Odd Fixes
@@ -XXX,XX +XXX,XX @@ F: hw/arm/gumstix.c
 
 i.MX31 (kzm)
 M: Peter Chubb <peter.chubb@nicta.com.au>
+M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
 S: Odd Fixes
 F: hw/arm/kzm.c
@@ -XXX,XX +XXX,XX @@ F: include/hw/misc/iotkit-sysinfo.h
 
 Musicpal
 M: Jan Kiszka <jan.kiszka@web.de>
+M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
 F: hw/arm/musicpal.c
 
 nSeries
 M: Andrzej Zaborowski <balrogg@gmail.com>
+M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
 F: hw/arm/nseries.c
 
 Palm
 M: Andrzej Zaborowski <balrogg@gmail.com>
+M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
 F: hw/arm/palm.c
 
 Raspberry Pi
@@ -XXX,XX +XXX,XX @@ F: include/hw/intc/realview_gic.h
 
 PXA2XX
 M: Andrzej Zaborowski <balrogg@gmail.com>
+M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
 F: hw/arm/mainstone.c
 F: hw/arm/spitz.c
 F: hw/arm/tosa.c
@@ -XXX,XX +XXX,XX @@ F: include/hw/arm/virt.h
 Xilinx Zynq
 M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
 M: Alistair Francis <alistair@alistair23.me>
+M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/*/xilinx_*
@@ -XXX,XX +XXX,XX @@ X: hw/ssi/xilinx_*
 Xilinx ZynqMP
 M: Alistair Francis <alistair@alistair23.me>
 M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
+M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/*/xlnx*.c
@@ -XXX,XX +XXX,XX @@ F: hw/arm/virt-acpi-build.c
 
 STM32F205
 M: Alistair Francis <alistair@alistair23.me>
+M: Peter Maydell <peter.maydell@linaro.org>
 S: Maintained
 F: hw/arm/stm32f205_soc.c
 F: hw/misc/stm32f2xx_syscfg.c
@@ -XXX,XX +XXX,XX @@ F: include/hw/*/stm32*.h
 
 Netduino 2
 M: Alistair Francis <alistair@alistair23.me>
+M: Peter Maydell <peter.maydell@linaro.org>
 S: Maintained
 F: hw/arm/netduino2.c
 
 SmartFusion2
 M: Subbaraya Sundeep <sundeep.lkml@gmail.com>
+M: Peter Maydell <peter.maydell@linaro.org>
 S: Maintained
 F: hw/arm/msf2-soc.c
 F: hw/misc/msf2-sysreg.c
@@ -XXX,XX +XXX,XX @@ F: include/hw/ssi/mss-spi.h
 
 Emcraft M2S-FG484
 M: Subbaraya Sundeep <sundeep.lkml@gmail.com>
+M: Peter Maydell <peter.maydell@linaro.org>
 S: Maintained
 F: hw/arm/msf2-som.c
 
 ASPEED BMCs
 M: Cédric Le Goater <clg@kaod.org>
+M: Peter Maydell <peter.maydell@linaro.org>
 R: Andrew Jeffery <andrew@aj.id.au>
 R: Joel Stanley <joel@jms.id.au>
 L: qemu-arm@nongnu.org
@@ -XXX,XX +XXX,XX @@ F: include/hw/net/ftgmac100.h
 
 NRF51
 M: Joel Stanley <joel@jms.id.au>
+M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/arm/nrf51_soc.c
-- 
2.19.1