Hi; this pull request has a collection of bug fixes for rc0.
The big one is the trusted firmware boot regression fix.

thanks
-- PMM

The following changes since commit 5704c36d25ee84e7129722cb0db53df9faefe943:

  Merge tag 'linux-user-for-7.2-pull-request' of https://gitlab.com/laurent_vivier/qemu into staging (2022-11-03 10:55:05 -0400)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20221104

for you to fetch changes up to cead7fa4c06087c86c67c5ce815cc1ff0bfeac3a:

  target/arm: Two fixes for secure ptw (2022-11-04 10:58:58 +0000)

----------------------------------------------------------------
target-arm queue:
 * Fix regression booting Trusted Firmware
 * Honor HCR_E2H and HCR_TGE in ats_write64()
 * Copy the entire vector in DO_ZIP
 * Fix Privileged Access Never (PAN) for aarch32
 * Make TLBIOS and TLBIRANGE ops trap on HCR_EL2.TTLB
 * Set SCR_EL3.HXEn when direct booting kernel
 * Set SME and SVE EL3 vector lengths when direct booting kernel

----------------------------------------------------------------
Ake Koomsin (1):
      target/arm: Honor HCR_E2H and HCR_TGE in ats_write64()

Peter Maydell (3):
      hw/arm/boot: Set SME and SVE EL3 vector lengths when booting kernel
      hw/arm/boot: Set SCR_EL3.HXEn when booting kernel
      target/arm: Make TLBIOS and TLBIRANGE ops trap on HCR_EL2.TTLB

Richard Henderson (2):
      target/arm: Copy the entire vector in DO_ZIP
      target/arm: Two fixes for secure ptw

Timofey Kutergin (1):
      target/arm: Fix Privileged Access Never (PAN) for aarch32

 hw/arm/boot.c           |  5 ++++
 target/arm/helper.c     | 64 +++++++++++++++++++++++++++++--------------------
 target/arm/ptw.c        | 50 ++++++++++++++++++++++++++++----------
 target/arm/sve_helper.c |  4 ++--
 4 files changed, 83 insertions(+), 40 deletions(-)

When we direct boot a kernel on a CPU which emulates EL3, we need
to set up the EL3 system registers as the Linux kernel documentation
specifies:
    https://www.kernel.org/doc/Documentation/arm64/booting.rst

For SVE and SME this includes:
- ZCR_EL3.LEN must be initialised to the same value for all CPUs the
  kernel is executed on.
- SMCR_EL3.LEN must be initialised to the same value for all CPUs the
  kernel will execute on.

Although we are technically compliant with this, the "same value" we
currently use by default is the reset value of 0. This will end up
forcing the guest kernel's SVE and SME vector length to be only the
smallest supported length.

Initialize the vector length fields to their maximum possible value,
which is 0xf. If the implementation doesn't actually support that
vector length then the effective vector length will be constrained
down to the maximum supported value at point of use.

This allows the guest to use all the vector lengths the emulated CPU
supports (by programming the _EL2 and _EL1 versions of these
registers.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221027140207.413084-2-peter.maydell@linaro.org
---
 hw/arm/boot.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
             }
             if (cpu_isar_feature(aa64_sve, cpu)) {
                 env->cp15.cptr_el[3] |= R_CPTR_EL3_EZ_MASK;
+                env->vfp.zcr_el[3] = 0xf;
             }
             if (cpu_isar_feature(aa64_sme, cpu)) {
                 env->cp15.cptr_el[3] |= R_CPTR_EL3_ESM_MASK;
                 env->cp15.scr_el3 |= SCR_ENTP2;
+                env->vfp.smcr_el[3] = 0xf;
             }
             /* AArch64 kernels never boot in secure mode */
             assert(!info->secure_boot);
--
2.25.1

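As an aside (not part of the patch above): the LEN fields written here are
requests, not guarantees. ZCR_EL3.LEN and SMCR_EL3.LEN encode "(LEN + 1) x 128
bits", and the effective vector length is then capped by what the
implementation supports and by the ZCR/SMCR values at the other controlling
exception levels. A rough sketch of that "constrained down at point of use"
rule, using a hypothetical helper rather than QEMU's actual sve_vqm1_for_el():

    /* Illustrative only: effective SVE vector length in 128-bit quadwords.
     * Each ZCR_ELx.LEN requests (LEN + 1) quadwords; the result is the
     * minimum of those requests and the implementation maximum, so writing
     * LEN = 0xf (16 quadwords) never becomes the limiting factor.
     */
    static unsigned effective_sve_vq(unsigned impl_max_vq, unsigned len_el1,
                                     unsigned len_el2, unsigned len_el3)
    {
        unsigned vq = impl_max_vq;

        vq = MIN(vq, len_el1 + 1);
        vq = MIN(vq, len_el2 + 1);
        vq = MIN(vq, len_el3 + 1);
        return vq;
    }
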
When we direct boot a kernel on a CPU which emulates EL3, we need to
set up the EL3 system registers as the Linux kernel documentation
specifies:
    https://www.kernel.org/doc/Documentation/arm64/booting.rst

For CPUs with FEAT_HCX support this includes:
- SCR_EL3.HXEn (bit 38) must be initialised to 0b1.

but we forgot to do this when implementing FEAT_HCX, which would mean
that a guest trying to access the HCRX_EL2 register would crash.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221027140207.413084-3-peter.maydell@linaro.org
---
 hw/arm/boot.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
                 env->cp15.scr_el3 |= SCR_ENTP2;
                 env->vfp.smcr_el[3] = 0xf;
             }
+            if (cpu_isar_feature(aa64_hcx, cpu)) {
+                env->cp15.scr_el3 |= SCR_HXEN;
+            }
             /* AArch64 kernels never boot in secure mode */
             assert(!info->secure_boot);
             /* This hook is only supported for AArch32 currently:
--
2.25.1

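As an aside (not part of the patch above): the reason a clear SCR_EL3.HXEn bit
breaks guests is that it gates all lower-EL accesses to HCRX_EL2, so a
directly booted kernel that touches HCRX_EL2 takes an unexpected trap to EL3,
where nothing is set up to handle it. A hedged sketch of that gate, with a
made-up function name (the real check lives in target/arm/helper.c):

    /* Sketch: HCRX_EL2 is only reachable below EL3 when SCR_EL3.HXEn is set. */
    static CPAccessResult hcrx_gate_check(CPUARMState *env,
                                          const ARMCPRegInfo *ri, bool isread)
    {
        if (arm_current_el(env) < 3 && arm_feature(env, ARM_FEATURE_EL3)
            && !(env->cp15.scr_el3 & SCR_HXEN)) {
            return CP_ACCESS_TRAP_EL3;
        }
        return CP_ACCESS_OK;
    }
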
The HCR_EL2.TTLB bit is supposed to trap all EL1 execution of TLB
maintenance instructions. However we have added new TLB insns for
FEAT_TLBIOS and FEAT_TLBIRANGE, and forgot to set their accessfn to
access_ttlb. Add the missing accessfns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper.c | 36 ++++++++++++++++++------------------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pauth_reginfo[] = {
 static const ARMCPRegInfo tlbirange_reginfo[] = {
     { .name = "TLBI_RVAE1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 1,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1is_write },
     { .name = "TLBI_RVAAE1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 3,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1is_write },
     { .name = "TLBI_RVALE1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 5,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1is_write },
     { .name = "TLBI_RVAALE1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 7,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1is_write },
     { .name = "TLBI_RVAE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1is_write },
     { .name = "TLBI_RVAAE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 3,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1is_write },
     { .name = "TLBI_RVALE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 5,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1is_write },
     { .name = "TLBI_RVAALE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 7,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1is_write },
     { .name = "TLBI_RVAE1", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1_write },
     { .name = "TLBI_RVAAE1", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 3,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1_write },
     { .name = "TLBI_RVALE1", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 5,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1_write },
     { .name = "TLBI_RVAALE1", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 7,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1_write },
     { .name = "TLBI_RIPAS2E1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 2,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
 static const ARMCPRegInfo tlbios_reginfo[] = {
     { .name = "TLBI_VMALLE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 0,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_vmalle1is_write },
     { .name = "TLBI_VAE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 1,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_vae1is_write },
     { .name = "TLBI_ASIDE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 2,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_vmalle1is_write },
     { .name = "TLBI_VAAE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 3,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_vae1is_write },
     { .name = "TLBI_VALE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 5,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_vae1is_write },
     { .name = "TLBI_VAALE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 7,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_vae1is_write },
     { .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0,
--
2.25.1

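As an aside (not part of the patch above): the accessfn is what turns
HCR_EL2.TTLB into an actual trap. The sketch below shows the general shape of
such a check under a made-up name; the authoritative version is access_ttlb()
in target/arm/helper.c, which this patch merely wires up to the newer TLBI
registers.

    /* Sketch: trap EL1 TLB maintenance ops to EL2 when HCR_EL2.TTLB is set. */
    static CPAccessResult ttlb_trap_check(CPUARMState *env,
                                          const ARMCPRegInfo *ri, bool isread)
    {
        if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TTLB)) {
            return CP_ACCESS_TRAP_EL2;
        }
        return CP_ACCESS_OK;
    }
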
From: Timofey Kutergin <tkutergin@gmail.com>

When we implemented the PAN support we theoretically wanted
to support it for both AArch32 and AArch64, but in practice
several bugs made it essentially unusable with an AArch32
guest. Fix all those problems:

- Use CPSR.PAN to check for PAN state in aarch32 mode
- throw permission fault during address translation when PAN is
  enabled and kernel tries to access user accessible page
- ignore SCTLR_XP bit for armv7 and armv8 (conflicts with SCTLR_SPAN).

Signed-off-by: Timofey Kutergin <tkutergin@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221027112619.2205229-1-tkutergin@gmail.com
[PMM: tweak commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 13 +++++++++++--
 target/arm/ptw.c    | 35 ++++++++++++++++++++++++++++++-----
 2 files changed, 41 insertions(+), 7 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
 }
 #endif

+static bool arm_pan_enabled(CPUARMState *env)
+{
+    if (is_a64(env)) {
+        return env->pstate & PSTATE_PAN;
+    } else {
+        return env->uncached_cpsr & CPSR_PAN;
+    }
+}
+
 ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
 {
     ARMMMUIdx idx;
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
         }
         break;
     case 1:
-        if (env->pstate & PSTATE_PAN) {
+        if (arm_pan_enabled(env)) {
             idx = ARMMMUIdx_E10_1_PAN;
         } else {
             idx = ARMMMUIdx_E10_1;
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
     case 2:
         /* Note that TGE does not apply at EL2.  */
         if (arm_hcr_el2_eff(env) & HCR_E2H) {
-            if (env->pstate & PSTATE_PAN) {
+            if (arm_pan_enabled(env)) {
                 idx = ARMMMUIdx_E20_2_PAN;
             } else {
                 idx = ARMMMUIdx_E20_2;
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
  * @mmu_idx: MMU index indicating required translation regime
  * @ap: The 3-bit access permissions (AP[2:0])
  * @domain_prot: The 2-bit domain access permissions
+ * @is_user: TRUE if accessing from PL0
  */
-static int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx,
-                         int ap, int domain_prot)
+static int ap_to_rw_prot_is_user(CPUARMState *env, ARMMMUIdx mmu_idx,
+                                 int ap, int domain_prot, bool is_user)
 {
-    bool is_user = regime_is_user(env, mmu_idx);
-
     if (domain_prot == 3) {
         return PAGE_READ | PAGE_WRITE;
     }
@@ -XXX,XX +XXX,XX @@ static int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx,
     }
 }

+/*
+ * Translate section/page access permissions to page R/W protection flags
+ * @env: CPUARMState
+ * @mmu_idx: MMU index indicating required translation regime
+ * @ap: The 3-bit access permissions (AP[2:0])
+ * @domain_prot: The 2-bit domain access permissions
+ */
+static int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx,
+                         int ap, int domain_prot)
+{
+    return ap_to_rw_prot_is_user(env, mmu_idx, ap, domain_prot,
+                                 regime_is_user(env, mmu_idx));
+}
+
 /*
  * Translate section/page access permissions to page R/W protection flags.
  * @ap: The 2-bit simple AP (AP[2:1])
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, S1Translate *ptw,
     hwaddr phys_addr;
     uint32_t dacr;
     bool ns;
+    int user_prot;

     /* Pagetable walk.  */
     /* Lookup l1 descriptor.  */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, S1Translate *ptw,
             goto do_fault;
         }
         result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
+        user_prot = simple_ap_to_rw_prot_is_user(ap >> 1, 1);
     } else {
         result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
+        user_prot = ap_to_rw_prot_is_user(env, mmu_idx, ap, domain_prot, 1);
     }
     if (result->f.prot && !xn) {
         result->f.prot |= PAGE_EXEC;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, S1Translate *ptw,
         fi->type = ARMFault_Permission;
         goto do_fault;
     }
+    if (regime_is_pan(env, mmu_idx) &&
+        !regime_is_user(env, mmu_idx) &&
+        user_prot &&
+        access_type != MMU_INST_FETCH) {
+        /* Privileged Access Never fault */
+        fi->type = ARMFault_Permission;
+        goto do_fault;
+    }
     }
     if (ns) {
         /* The NS bit will (as required by the architecture) have no effect if
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
     if (regime_using_lpae_format(env, mmu_idx)) {
         return get_phys_addr_lpae(env, ptw, address, access_type, false,
                                   result, fi);
-    } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
+    } else if (arm_feature(env, ARM_FEATURE_V7) ||
+               regime_sctlr(env, mmu_idx) & SCTLR_XP) {
         return get_phys_addr_v6(env, ptw, address, access_type, result, fi);
     } else {
         return get_phys_addr_v5(env, ptw, address, access_type, result, fi);
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

With odd_ofs set, we weren't copying enough data.

Fixes: 09eb6d7025d1 ("target/arm: Move sve zip high_ofs into simd_data")
Reported-by: Idan Horowitz <idan.horowitz@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20221031054144.3574-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
     /* We produce output faster than we consume input. \
        Therefore we must be mindful of possible overlap. */ \
     if (unlikely((vn - vd) < (uintptr_t)oprsz)) { \
-        vn = memcpy(&tmp_n, vn, oprsz_2); \
+        vn = memcpy(&tmp_n, vn, oprsz); \
     } \
     if (unlikely((vm - vd) < (uintptr_t)oprsz)) { \
-        vm = memcpy(&tmp_m, vm, oprsz_2); \
+        vm = memcpy(&tmp_m, vm, oprsz); \
     } \
     for (i = 0; i < oprsz_2; i += sizeof(TYPE)) { \
         *(TYPE *)(vd + H(2 * i + 0)) = *(TYPE *)(vn + odd_ofs + H(i)); \
--
2.25.1

From: Ake Koomsin <ake@igel.co.jp>

We need to check HCR_E2H and HCR_TGE to select the right MMU index for
the correct translation regime.

To check for EL2&0 translation regime:
- For S1E0*, S1E1* and S12E* ops, check both HCR_E2H and HCR_TGE
- For S1E2* ops, check only HCR_E2H

Signed-off-by: Ake Koomsin <ake@igel.co.jp>
Message-id: 20221101064250.12444-1-ake@igel.co.jp
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
     MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD;
     ARMMMUIdx mmu_idx;
     int secure = arm_is_secure_below_el3(env);
+    uint64_t hcr_el2 = arm_hcr_el2_eff(env);
+    bool regime_e20 = (hcr_el2 & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE);

     switch (ri->opc2 & 6) {
     case 0:
         switch (ri->opc1) {
         case 0: /* AT S1E1R, AT S1E1W, AT S1E1RP, AT S1E1WP */
             if (ri->crm == 9 && (env->pstate & PSTATE_PAN)) {
-                mmu_idx = ARMMMUIdx_Stage1_E1_PAN;
+                mmu_idx = regime_e20 ?
+                          ARMMMUIdx_E20_2_PAN : ARMMMUIdx_Stage1_E1_PAN;
             } else {
-                mmu_idx = ARMMMUIdx_Stage1_E1;
+                mmu_idx = regime_e20 ? ARMMMUIdx_E20_2 : ARMMMUIdx_Stage1_E1;
             }
             break;
         case 4: /* AT S1E2R, AT S1E2W */
-            mmu_idx = ARMMMUIdx_E2;
+            mmu_idx = hcr_el2 & HCR_E2H ? ARMMMUIdx_E20_2 : ARMMMUIdx_E2;
             break;
         case 6: /* AT S1E3R, AT S1E3W */
             mmu_idx = ARMMMUIdx_E3;
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
         }
         break;
     case 2: /* AT S1E0R, AT S1E0W */
-        mmu_idx = ARMMMUIdx_Stage1_E0;
+        mmu_idx = regime_e20 ? ARMMMUIdx_E20_0 : ARMMMUIdx_Stage1_E0;
         break;
     case 4: /* AT S12E1R, AT S12E1W */
-        mmu_idx = ARMMMUIdx_E10_1;
+        mmu_idx = regime_e20 ? ARMMMUIdx_E20_2 : ARMMMUIdx_E10_1;
        break;
     case 6: /* AT S12E0R, AT S12E0W */
-        mmu_idx = ARMMMUIdx_E10_0;
+        mmu_idx = regime_e20 ? ARMMMUIdx_E20_0 : ARMMMUIdx_E10_0;
         break;
     default:
         g_assert_not_reached();
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Reversed the sense of non-secure in get_phys_addr_lpae,
and failed to initialize attrs.secure for ARMMMUIdx_Phys_S.

Fixes: 48da29e4 ("target/arm: Add ptw_idx to S1Translate")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1293
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
     descaddr |= (address >> (stride * (4 - level))) & indexmask;
     descaddr &= ~7ULL;
     nstable = extract32(tableattrs, 4, 1);
-    if (!nstable) {
+    if (nstable) {
         /*
          * Stage2_S -> Stage2 or Phys_S -> Phys_NS
          * Assert that the non-secure idx are even, and relative order.
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
     bool is_secure = ptw->in_secure;
     ARMMMUIdx s1_mmu_idx;

+    /*
+     * The page table entries may downgrade secure to non-secure, but
+     * cannot upgrade an non-secure translation regime's attributes
+     * to secure.
+     */
+    result->f.attrs.secure = is_secure;
+
     switch (mmu_idx) {
     case ARMMMUIdx_Phys_S:
     case ARMMMUIdx_Phys_NS:
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
         break;
     }

-    /*
-     * The page table entries may downgrade secure to non-secure, but
-     * cannot upgrade an non-secure translation regime's attributes
-     * to secure.
-     */
-    result->f.attrs.secure = is_secure;
     result->f.attrs.user = regime_is_user(env, mmu_idx);

     /*
--
2.25.1
Deleted patch
1
From: Alex Bennée <alex.bennee@linaro.org>
2
1
3
This only fails with some (broken) versions of gdb but we should
4
treat the top bits of DBGBVR as RESS. Properly sign extend QEMU's
5
reference copy of dbgbvr and also update the register descriptions in
6
the comment.
7
8
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20181109152119.9242-2-alex.bennee@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/kvm64.c | 17 +++++++++++++++--
14
1 file changed, 15 insertions(+), 2 deletions(-)
15
16
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/kvm64.c
19
+++ b/target/arm/kvm64.c
20
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_init_debug(CPUState *cs)
21
* capable of fancier matching but that will require exposing that
22
* fanciness to GDB's interface
23
*
24
- * D7.3.2 DBGBCR<n>_EL1, Debug Breakpoint Control Registers
25
+ * DBGBCR<n>_EL1, Debug Breakpoint Control Registers
26
*
27
* 31 24 23 20 19 16 15 14 13 12 9 8 5 4 3 2 1 0
28
* +------+------+-------+-----+----+------+-----+------+-----+---+
29
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_init_debug(CPUState *cs)
30
* SSC/HMC/PMC: Security, Higher and Priv access control (Table D-12)
31
* BAS: Byte Address Select (RES1 for AArch64)
32
* E: Enable bit
33
+ *
34
+ * DBGBVR<n>_EL1, Debug Breakpoint Value Registers
35
+ *
36
+ * 63 53 52 49 48 2 1 0
37
+ * +------+-----------+----------+-----+
38
+ * | RESS | VA[52:49] | VA[48:2] | 0 0 |
39
+ * +------+-----------+----------+-----+
40
+ *
41
+ * Depending on the addressing mode bits the top bits of the register
42
+ * are a sign extension of the highest applicable VA bit. Some
43
+ * versions of GDB don't do it correctly so we ensure they are correct
44
+ * here so future PC comparisons will work properly.
45
*/
46
+
47
static int insert_hw_breakpoint(target_ulong addr)
48
{
49
HWBreakpoint brk = {
50
.bcr = 0x1, /* BCR E=1, enable */
51
- .bvr = addr
52
+ .bvr = sextract64(addr, 0, 53)
53
};
54
55
if (cur_hw_bps >= max_hw_bps) {
56
--
57
2.19.1
58
59
diff view generated by jsdifflib
Deleted patch
1
From: Alex Bennée <alex.bennee@linaro.org>
2
1
3
Fix the assertion failure when running interrupts.
4
5
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20181109152119.9242-3-alex.bennee@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/kvm64.c | 2 ++
12
1 file changed, 2 insertions(+)
13
14
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/kvm64.c
17
+++ b/target/arm/kvm64.c
18
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit)
19
cs->exception_index = EXCP_BKPT;
20
env->exception.syndrome = debug_exit->hsr;
21
env->exception.vaddress = debug_exit->far;
22
+ qemu_mutex_lock_iothread();
23
cc->do_interrupt(cs);
24
+ qemu_mutex_unlock_iothread();
25
26
return false;
27
}
28
--
29
2.19.1
30
31
diff view generated by jsdifflib
Deleted patch
1
From: Alex Bennée <alex.bennee@linaro.org>
2
1
3
When we are debugging the guest all exceptions come our way but might
4
be for the guest's own debug exceptions. We use the ->do_interrupt()
5
infrastructure to inject the exception into the guest. However, we are
6
missing a full setup of the exception structure, causing an assert
7
later down the line.
8
9
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20181109152119.9242-4-alex.bennee@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
target/arm/kvm64.c | 1 +
16
1 file changed, 1 insertion(+)
17
18
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/kvm64.c
21
+++ b/target/arm/kvm64.c
22
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit)
23
cs->exception_index = EXCP_BKPT;
24
env->exception.syndrome = debug_exit->hsr;
25
env->exception.vaddress = debug_exit->far;
26
+ env->exception.target_el = 1;
27
qemu_mutex_lock_iothread();
28
cc->do_interrupt(cs);
29
qemu_mutex_unlock_iothread();
30
--
31
2.19.1
32
33
diff view generated by jsdifflib
Deleted patch
1
From: Alex Bennée <alex.bennee@linaro.org>
2
1
3
You should declare you are using a global version of a variable before
4
you attempt to modify it in a function.
5
6
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20181109152119.9242-5-alex.bennee@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
tests/guest-debug/test-gdbstub.py | 1 +
13
1 file changed, 1 insertion(+)
14
15
diff --git a/tests/guest-debug/test-gdbstub.py b/tests/guest-debug/test-gdbstub.py
16
index XXXXXXX..XXXXXXX 100644
17
--- a/tests/guest-debug/test-gdbstub.py
18
+++ b/tests/guest-debug/test-gdbstub.py
19
@@ -XXX,XX +XXX,XX @@ def report(cond, msg):
20
print ("PASS: %s" % (msg))
21
else:
22
print ("FAIL: %s" % (msg))
23
+ global failcount
24
failcount += 1
25
26
27
--
28
2.19.1
29
30
diff view generated by jsdifflib
Deleted patch
1
From: Alex Bennée <alex.bennee@linaro.org>
2
1
3
We already have this symbol defined so lets use it.
4
5

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181109152119.9242-7-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline int arm_debug_target_el(CPUARMState *env)

if (arm_feature(env, ARM_FEATURE_EL2) && !secure) {
route_to_el2 = env->cp15.hcr_el2 & HCR_TGE ||
- env->cp15.mdcr_el2 & (1 << 8);
+ env->cp15.mdcr_el2 & MDCR_TDE;
}

if (route_to_el2) {
--
2.19.1
Deleted patch
Currently we track the state of the four irq lines from the GIC
only via the cs->interrupt_request or KVM irq state. That means
that we assume that an interrupt is asserted if and only if the
external line is set. This assumption is incorrect for VIRQ
and VFIQ, because the HCR_EL2.{VI,VF} bits allow assertion
of VIRQ and VFIQ separately from the state of the external line.

To handle this, start tracking the state of the external lines
explicitly in a CPU state struct field, as is common practice
for devices.

The complicated part of this is dealing with inbound migration
from an older QEMU which didn't have this state. We assume in
that case that the older QEMU did not implement the HCR_EL2.{VI,VF}
bits as generating interrupts, and so the line state matches
the current state in cs->interrupt_request. (This is not quite
true between commit 8a0fc3a29fc2315325400c7 and its revert, but
that commit is broken and never made it into any released QEMU
version.)
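As a minimal standalone sketch of that compatibility scheme (the
FakeCPU struct and CPU_INTERRUPT_* values are illustrative stand-ins,
with QEMU's migration machinery stubbed out): pre_load seeds the new
field with a sentinel, and post_load reconstructs it from
interrupt_request if the incoming stream did not carry the subsection.

/*
 * Standalone illustration of the pre_load/post_load sentinel pattern
 * described above; not QEMU code, all definitions are placeholders.
 */
#include <stdint.h>
#include <stdio.h>

#define CPU_INTERRUPT_HARD (1u << 0)   /* illustrative bit values only */
#define CPU_INTERRUPT_FIQ  (1u << 1)
#define CPU_INTERRUPT_VIRQ (1u << 2)
#define CPU_INTERRUPT_VFIQ (1u << 3)

typedef struct {
    uint32_t interrupt_request;  /* already carried by the old stream */
    uint32_t irq_line_state;     /* new field, optional subsection */
} FakeCPU;

static void cpu_pre_load(FakeCPU *cpu)
{
    /* Sentinel: never valid as real data, so post_load can tell
     * whether the subsection was present in the incoming stream. */
    cpu->irq_line_state = UINT32_MAX;
}

static void cpu_post_load(FakeCPU *cpu)
{
    if (cpu->irq_line_state == UINT32_MAX) {
        /* Old-format stream: line state must match interrupt_request. */
        cpu->irq_line_state = cpu->interrupt_request &
            (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ |
             CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VFIQ);
    }
}

int main(void)
{
    FakeCPU cpu = { .interrupt_request = CPU_INTERRUPT_HARD };

    cpu_pre_load(&cpu);
    /* Pretend the stream came from an old QEMU: no subsection loaded. */
    cpu_post_load(&cpu);
    printf("reconstructed irq_line_state = 0x%x\n",
           (unsigned)cpu.irq_line_state);
    return 0;
}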

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20181109134731.11605-3-peter.maydell@linaro.org
---
target/arm/cpu.h | 3 +++
target/arm/cpu.c | 16 ++++++++++++++
target/arm/machine.c | 51 ++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 70 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
uint64_t esr;
} serror;

+ /* State of our input IRQ/FIQ/VIRQ/VFIQ lines */
+ uint32_t irq_line_state;
+
/* Thumb-2 EE state. */
uint32_t teecr;
uint32_t teehbr;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
[ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ
};

+ if (level) {
+ env->irq_line_state |= mask[irq];
+ } else {
+ env->irq_line_state &= ~mask[irq];
+ }
+
switch (irq) {
case ARM_CPU_VIRQ:
case ARM_CPU_VFIQ:
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_kvm_set_irq(void *opaque, int irq, int level)
ARMCPU *cpu = opaque;
CPUState *cs = CPU(cpu);
int kvm_irq = KVM_ARM_IRQ_TYPE_CPU << KVM_ARM_IRQ_TYPE_SHIFT;
+ uint32_t linestate_bit;

switch (irq) {
case ARM_CPU_IRQ:
kvm_irq |= KVM_ARM_IRQ_CPU_IRQ;
+ linestate_bit = CPU_INTERRUPT_HARD;
break;
case ARM_CPU_FIQ:
kvm_irq |= KVM_ARM_IRQ_CPU_FIQ;
+ linestate_bit = CPU_INTERRUPT_FIQ;
break;
default:
g_assert_not_reached();
}
+
+ if (level) {
+ env->irq_line_state |= linestate_bit;
+ } else {
+ env->irq_line_state &= ~linestate_bit;
+ }
+
kvm_irq |= cs->cpu_index << KVM_ARM_IRQ_VCPU_SHIFT;
kvm_set_irq(kvm_state, kvm_irq, level ? 1 : 0);
#endif
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_serror = {
}
};

+static bool irq_line_state_needed(void *opaque)
+{
+ return true;
+}
+
+static const VMStateDescription vmstate_irq_line_state = {
+ .name = "cpu/irq-line-state",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .needed = irq_line_state_needed,
+ .fields = (VMStateField[]) {
+ VMSTATE_UINT32(env.irq_line_state, ARMCPU),
+ VMSTATE_END_OF_LIST()
+ }
+};
+
static bool m_needed(void *opaque)
{
ARMCPU *cpu = opaque;
@@ -XXX,XX +XXX,XX @@ static int cpu_pre_save(void *opaque)
return 0;
}

+static int cpu_pre_load(void *opaque)
+{
+ ARMCPU *cpu = opaque;
+ CPUARMState *env = &cpu->env;
+
+ /*
+ * Pre-initialize irq_line_state to a value that's never valid as
+ * real data, so cpu_post_load() can tell whether we've seen the
+ * irq-line-state subsection in the incoming migration state.
+ */
+ env->irq_line_state = UINT32_MAX;
+
+ return 0;
+}
+
static int cpu_post_load(void *opaque, int version_id)
{
ARMCPU *cpu = opaque;
+ CPUARMState *env = &cpu->env;
int i, v;

+ /*
+ * Handle migration compatibility from old QEMU which didn't
+ * send the irq-line-state subsection. A QEMU without it did not
+ * implement the HCR_EL2.{VI,VF} bits as generating interrupts,
+ * so for TCG the line state matches the bits set in cs->interrupt_request.
+ * For KVM the line state is not stored in cs->interrupt_request
+ * and so this will leave irq_line_state as 0, but this is OK because
+ * we only need to care about it for TCG.
+ */
+ if (env->irq_line_state == UINT32_MAX) {
+ CPUState *cs = CPU(cpu);
+
+ env->irq_line_state = cs->interrupt_request &
+ (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ |
+ CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VFIQ);
+ }
+
/* Update the values list from the incoming migration data.
* Anything in the incoming data which we don't know about is
* a migration failure; anything we know about but the incoming
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
.version_id = 22,
.minimum_version_id = 22,
.pre_save = cpu_pre_save,
+ .pre_load = cpu_pre_load,
.post_load = cpu_post_load,
.fields = (VMStateField[]) {
VMSTATE_UINT32_ARRAY(env.regs, ARMCPU, 16),
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
&vmstate_sve,
#endif
&vmstate_serror,
+ &vmstate_irq_line_state,
NULL
}
};
--
2.19.1
Deleted patch
In commit 8a0fc3a29fc2315325400 we tried to implement HCR_EL2.{VI,VF},
but we got it wrong and had to revert it.

In that commit we implemented them as simply tracking whether there
is a pending virtual IRQ or virtual FIQ. This is not correct -- these
bits cause a software-generated VIRQ/VFIQ, which is distinct from
whether there is a hardware-generated VIRQ/VFIQ caused by the
external interrupt controller. So we need to track separately
the HCR_EL2 bit state and the external virq/vfiq line state, and
OR the two together to get the actual pending VIRQ/VFIQ state.
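As a minimal standalone sketch of the resulting calculation (the bit
values below are illustrative stand-ins rather than QEMU's
definitions): the pending VIRQ/VFIQ level is the logical OR of the
software-asserted HCR_EL2 bit and the hardware line state from the GIC.

/*
 * Standalone illustration of the VIRQ/VFIQ pending calculation
 * described above; bit positions here are placeholders.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HCR_VI              (1ull << 7)   /* illustrative bit values */
#define HCR_VF              (1ull << 6)
#define CPU_INTERRUPT_VIRQ  (1u << 2)
#define CPU_INTERRUPT_VFIQ  (1u << 3)

/* VIRQ is pending if either HCR_EL2.VI is set (software-generated) or
 * the external VIRQ line from the GIC is asserted (hardware-generated). */
static bool virq_pending(uint64_t hcr_el2, uint32_t irq_line_state)
{
    return (hcr_el2 & HCR_VI) || (irq_line_state & CPU_INTERRUPT_VIRQ);
}

static bool vfiq_pending(uint64_t hcr_el2, uint32_t irq_line_state)
{
    return (hcr_el2 & HCR_VF) || (irq_line_state & CPU_INTERRUPT_VFIQ);
}

int main(void)
{
    /* HCR_EL2.VI set but GIC line low: VIRQ must still be pending. */
    printf("VIRQ pending: %d\n", virq_pending(HCR_VI, 0));
    /* Neither source asserted: VFIQ not pending. */
    printf("VFIQ pending: %d\n", vfiq_pending(0, 0));
    return 0;
}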

Fixes: 8a0fc3a29fc2315325400c738f807d0d4ae0ab7f
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20181109134731.11605-4-peter.maydell@linaro.org
---
target/arm/internals.h | 18 ++++++++++++++++
target/arm/cpu.c | 48 +++++++++++++++++++++++++++++++++++++++++-
target/arm/helper.c | 20 ++++++++++++++++--
3 files changed, 83 insertions(+), 3 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline const char *aarch32_mode_name(uint32_t psr)
return cpu_mode_names[psr & 0xf];
}

+/**
+ * arm_cpu_update_virq: Update CPU_INTERRUPT_VIRQ bit in cs->interrupt_request
+ *
+ * Update the CPU_INTERRUPT_VIRQ bit in cs->interrupt_request, following
+ * a change to either the input VIRQ line from the GIC or the HCR_EL2.VI bit.
+ * Must be called with the iothread lock held.
+ */
+void arm_cpu_update_virq(ARMCPU *cpu);
+
+/**
+ * arm_cpu_update_vfiq: Update CPU_INTERRUPT_VFIQ bit in cs->interrupt_request
+ *
+ * Update the CPU_INTERRUPT_VFIQ bit in cs->interrupt_request, following
+ * a change to either the input VFIQ line from the GIC or the HCR_EL2.VF bit.
+ * Must be called with the iothread lock held.
+ */
+void arm_cpu_update_vfiq(ARMCPU *cpu);
+
#endif
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static bool arm_v7m_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
}
#endif

+void arm_cpu_update_virq(ARMCPU *cpu)
+{
+ /*
+ * Update the interrupt level for VIRQ, which is the logical OR of
+ * the HCR_EL2.VI bit and the input line level from the GIC.
+ */
+ CPUARMState *env = &cpu->env;
+ CPUState *cs = CPU(cpu);
+
+ bool new_state = (env->cp15.hcr_el2 & HCR_VI) ||
+ (env->irq_line_state & CPU_INTERRUPT_VIRQ);
+
+ if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VIRQ) != 0)) {
+ if (new_state) {
+ cpu_interrupt(cs, CPU_INTERRUPT_VIRQ);
+ } else {
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_VIRQ);
+ }
+ }
+}
+
+void arm_cpu_update_vfiq(ARMCPU *cpu)
+{
+ /*
+ * Update the interrupt level for VFIQ, which is the logical OR of
+ * the HCR_EL2.VF bit and the input line level from the GIC.
+ */
+ CPUARMState *env = &cpu->env;
+ CPUState *cs = CPU(cpu);
+
+ bool new_state = (env->cp15.hcr_el2 & HCR_VF) ||
+ (env->irq_line_state & CPU_INTERRUPT_VFIQ);
+
+ if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VFIQ) != 0)) {
+ if (new_state) {
+ cpu_interrupt(cs, CPU_INTERRUPT_VFIQ);
+ } else {
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_VFIQ);
+ }
+ }
+}
+
#ifndef CONFIG_USER_ONLY
static void arm_cpu_set_irq(void *opaque, int irq, int level)
{
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)

switch (irq) {
case ARM_CPU_VIRQ:
+ assert(arm_feature(env, ARM_FEATURE_EL2));
+ arm_cpu_update_virq(cpu);
+ break;
case ARM_CPU_VFIQ:
assert(arm_feature(env, ARM_FEATURE_EL2));
- /* fall through */
+ arm_cpu_update_vfiq(cpu);
+ break;
case ARM_CPU_IRQ:
case ARM_CPU_FIQ:
if (level) {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
tlb_flush(CPU(cpu));
}
env->cp15.hcr_el2 = value;
+
+ /*
+ * Updates to VI and VF require us to update the status of
+ * virtual interrupts, which are the logical OR of these bits
+ * and the state of the input lines from the GIC. (This requires
+ * that we have the iothread lock, which is done by marking the
+ * reginfo structs as ARM_CP_IO.)
+ * Note that if a write to HCR pends a VIRQ or VFIQ it is never
+ * possible for it to be taken immediately, because VIRQ and
+ * VFIQ are masked unless running at EL0 or EL1, and HCR
+ * can only be written at EL2.
+ */
+ g_assert(qemu_mutex_iothread_locked());
+ arm_cpu_update_virq(cpu);
+ arm_cpu_update_vfiq(cpu);
}

static void hcr_writehigh(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,

static const ARMCPRegInfo el2_cp_reginfo[] = {
{ .name = "HCR_EL2", .state = ARM_CP_STATE_AA64,
+ .type = ARM_CP_IO,
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
.writefn = hcr_write },
{ .name = "HCR", .state = ARM_CP_STATE_AA32,
- .type = ARM_CP_ALIAS,
+ .type = ARM_CP_ALIAS | ARM_CP_IO,
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
.writefn = hcr_writelow },
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {

static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
{ .name = "HCR2", .state = ARM_CP_STATE_AA32,
- .type = ARM_CP_ALIAS,
+ .type = ARM_CP_ALIAS | ARM_CP_IO,
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
.access = PL2_RW,
.fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2),
--
2.19.1