target-arm queue for 3.1: mostly bug fixes, but the "turn on
EL2 support for Cortex-A7 and -A15" is technically enabling
of a new feature... I think this is OK since we're only at rc1,
and it's easy to revert that feature bit flip if necessary.

thanks
-- PMM

The following changes since commit 5704c36d25ee84e7129722cb0db53df9faefe943:

  Merge remote-tracking branch 'remotes/kraxel/tags/fixes-31-20181112-pull-request' into staging (2018-11-12 15:55:40 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20181112

for you to fetch changes up to 1a4c1a6dbf60aebddd07753f1013ea896c06ad29:

  target/arm/cpu: Give Cortex-A15 and -A7 the EL2 feature (2018-11-12 16:52:29 +0000)

----------------------------------------------------------------
target/arm queue:
 * Remove no-longer-needed workaround for small SAU regions for v8M
 * Remove antique TODO comment
 * MAINTAINERS: Add an entry for the 'collie' machine
 * hw/arm/sysbus-fdt: Only call match_fn callback if the type matches
 * Fix infinite recursion in tlbi_aa64_vmalle1_write()
 * ARM KVM: fix various bugs in handling of guest debugging
 * Correctly implement handling of HCR_EL2.{VI, VF}
 * Hyp mode R14 is shared with User and System
 * Give Cortex-A15 and -A7 the EL2 feature

----------------------------------------------------------------
Alex Bennée (6):
      target/arm64: properly handle DBGVR RESS bits
      target/arm64: hold BQL when calling do_interrupt()
      target/arm64: kvm debug set target_el when passing exception to guest
      tests/guest-debug: fix scoping of failcount
      arm: use symbolic MDCR_TDE in arm_debug_target_el
      arm: fix aa64_generate_debug_exceptions to work with EL2

Eric Auger (1):
      hw/arm/sysbus-fdt: Only call match_fn callback if the type matches

Peter Maydell (7):
      target/arm: Remove workaround for small SAU regions
      target/arm: Remove antique TODO comment
      Revert "target/arm: Implement HCR.VI and VF"
      target/arm: Track the state of our irq lines from the GIC explicitly
      target/arm: Correctly implement handling of HCR_EL2.{VI, VF}
      target/arm: Hyp mode R14 is shared with User and System
      target/arm/cpu: Give Cortex-A15 and -A7 the EL2 feature

Richard Henderson (1):
      target/arm: Fix typo in tlbi_aa64_vmalle1_write

Thomas Huth (1):
      MAINTAINERS: Add an entry for the 'collie' machine

 target/arm/cpu.h                  |  44 +++++++++++------
 target/arm/internals.h            |  34 +++++++++++++
 hw/arm/sysbus-fdt.c               |  12 +++--
 target/arm/cpu.c                  |  66 ++++++++++++++++++++++++-
 target/arm/helper.c               | 101 +++++++++++++-------------------------
 target/arm/kvm32.c                |   4 +-
 target/arm/kvm64.c                |  20 +++++++-
 target/arm/machine.c              |  51 +++++++++++++++++++
 target/arm/op_helper.c            |   4 +-
 MAINTAINERS                       |   7 +++
 tests/guest-debug/test-gdbstub.py |   1 +
 11 files changed, 248 insertions(+), 96 deletions(-)

A last small set of bug fixes before rc1.

thanks
-- PMM

The following changes since commit ed8ad9728a9c0eec34db9dff61dfa2f1dd625637:

  Merge tag 'pull-tpm-2023-07-14-1' of https://github.com/stefanberger/qemu-tpm into staging (2023-07-15 14:54:04 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230717

for you to fetch changes up to c2c1c4a35c7c2b1a4140b0942b9797c857e476a4:

  hw/nvram: Avoid unnecessary Xilinx eFuse backstore write (2023-07-17 11:05:52 +0100)

----------------------------------------------------------------
target-arm queue:
 * hw/arm/sbsa-ref: set 'slots' property of xhci
 * linux-user: Remove pointless NULL check in clock_adjtime handling
 * ptw: Fix S1_ptw_translate() debug path
 * ptw: Account for FEAT_RME when applying {N}SW, SA bits
 * accel/tcg: Zero-pad PC in TCG CPU exec trace lines
 * hw/nvram: Avoid unnecessary Xilinx eFuse backstore write

----------------------------------------------------------------
Peter Maydell (5):
      linux-user: Remove pointless NULL check in clock_adjtime handling
      target/arm/ptw.c: Add comments to S1Translate struct fields
      target/arm: Fix S1_ptw_translate() debug path
      target/arm/ptw.c: Account for FEAT_RME when applying {N}SW, SA bits
      accel/tcg: Zero-pad PC in TCG CPU exec trace lines

Tong Ho (1):
      hw/nvram: Avoid unnecessary Xilinx eFuse backstore write

Yuquan Wang (1):
      hw/arm/sbsa-ref: set 'slots' property of xhci

 accel/tcg/cpu-exec.c      |  4 +--
 accel/tcg/translate-all.c |  2 +-
 hw/arm/sbsa-ref.c         |  1 +
 hw/nvram/xlnx-efuse.c     | 11 ++++--
 linux-user/syscall.c      | 12 +++----
 target/arm/ptw.c          | 90 +++++++++++++++++++++++++++++++++++++++++------
 6 files changed, 98 insertions(+), 22 deletions(-)
diff view generated by jsdifflib
Deleted patch
Before we supported direct execution from MMIO regions, we
implemented workarounds in commit 720424359917887c926a33d2
which let us avoid doing so, even if the SAU or MPU region
was less than page-sized.

Once we implemented execute-from-MMIO, we removed part
of those workarounds in commit d4b6275df320cee76; but
we forgot the one in get_phys_addr_pmsav8() which
suppressed use of small SAU regions in executable regions.
Remove that workaround now.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181106163801.14474-1-peter.maydell@linaro.org
---
 target/arm/helper.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,

     ret = pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr,
                             txattrs, prot, &mpu_is_subpage, fi, NULL);
-    /*
-     * TODO: this is a temporary hack to ignore the fact that the SAU region
-     * is smaller than a page if this is an executable region. We never
-     * supported small MPU regions, but we did (accidentally) allow small
-     * SAU regions, and if we now made small SAU regions not be executable
-     * then this would break previously working guest code. We can't
-     * remove this until/unless we implement support for execution from
-     * small regions.
-     */
-    if (*prot & PAGE_EXEC) {
-        sattrs.subpage = false;
-    }
     *page_size = sattrs.subpage || mpu_is_subpage ? 1 : TARGET_PAGE_SIZE;
     return ret;
 }
--
2.19.1
diff view generated by jsdifflib
From: Alex Bennée <alex.bennee@linaro.org>

When we are debugging the guest all exceptions come our way but might
be for the guest's own debug exceptions. We use the ->do_interrupt()
infrastructure to inject the exception into the guest. However, we are
missing a full setup of the exception structure, causing an assert
later down the line.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181109152119.9242-4-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/kvm64.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit)
     cs->exception_index = EXCP_BKPT;
     env->exception.syndrome = debug_exit->hsr;
     env->exception.vaddress = debug_exit->far;
+    env->exception.target_el = 1;
     qemu_mutex_lock_iothread();
     cc->do_interrupt(cs);
     qemu_mutex_unlock_iothread();
--
2.19.1

From: Yuquan Wang <wangyuquan1236@phytium.com.cn>

This extends the slots of xhci to 64, since the default xhci_sysbus
just supports one slot.

Signed-off-by: Wang Yuquan <wangyuquan1236@phytium.com.cn>
Signed-off-by: Chen Baozi <chenbaozi@phytium.com.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Message-id: 20230710063750.473510-2-wangyuquan1236@phytium.com.cn
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sbsa-ref.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static void create_xhci(const SBSAMachineState *sms)
     hwaddr base = sbsa_ref_memmap[SBSA_XHCI].base;
     int irq = sbsa_ref_irqmap[SBSA_XHCI];
     DeviceState *dev = qdev_new(TYPE_XHCI_SYSBUS);
+    qdev_prop_set_uint32(dev, "slots", XHCI_MAXSLOTS);

     sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
     sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base);
--
2.34.1
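As an aside for readers unfamiliar with the qdev pattern in the sbsa-ref hunk above: properties such as "slots" have to be set after qdev_new() creates the device but before it is realized. A minimal sketch of that ordering, reusing only the calls already shown above (illustrative, not a drop-in replacement for create_xhci(); error handling and interrupt wiring are omitted):

    DeviceState *dev = qdev_new(TYPE_XHCI_SYSBUS);

    /* configure while the device is still unrealized */
    qdev_prop_set_uint32(dev, "slots", XHCI_MAXSLOTS);

    /* realizing the device makes the configuration take effect */
    sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);

    /* only then map its MMIO region into the machine's address space */
    sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base);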
diff view generated by jsdifflib
The Cortex-A15 and Cortex-A7 both have EL2; now we've implemented
it properly we can enable the feature bit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20181109173553.22341-3-peter.maydell@linaro.org
---
 target/arm/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

In the code for TARGET_NR_clock_adjtime, we set the pointer phtx to
the address of the local variable htx. This means it can never be
NULL, but later in the code we check it for NULL anyway. Coverity
complains about this (CID 1507683) because the NULL check comes after
a call to clock_adjtime() that assumes it is non-NULL.

Since phtx is always &htx, and is used only in three places, it's not
really necessary. Remove it, bringing the code structure into line
with that for TARGET_NR_clock_adjtime64, which already uses a simple
'&htx' when it wants a pointer to 'htx'.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230623144410.1837261-1-peter.maydell@linaro.org
---
 linux-user/syscall.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
20
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
13
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/cpu.c
22
--- a/linux-user/syscall.c
15
+++ b/target/arm/cpu.c
23
+++ b/linux-user/syscall.c
16
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
24
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(CPUArchState *cpu_env, int num, abi_long arg1,
17
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
25
#if defined(TARGET_NR_clock_adjtime) && defined(CONFIG_CLOCK_ADJTIME)
18
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
26
case TARGET_NR_clock_adjtime:
19
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
27
{
20
+ set_feature(&cpu->env, ARM_FEATURE_EL2);
28
- struct timex htx, *phtx = &htx;
21
set_feature(&cpu->env, ARM_FEATURE_EL3);
29
+ struct timex htx;
22
cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A7;
30
23
cpu->midr = 0x410fc075;
31
- if (target_to_host_timex(phtx, arg2) != 0) {
24
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
32
+ if (target_to_host_timex(&htx, arg2) != 0) {
25
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
33
return -TARGET_EFAULT;
26
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
34
}
27
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
35
- ret = get_errno(clock_adjtime(arg1, phtx));
28
+ set_feature(&cpu->env, ARM_FEATURE_EL2);
36
- if (!is_error(ret) && phtx) {
29
set_feature(&cpu->env, ARM_FEATURE_EL3);
37
- if (host_to_target_timex(arg2, phtx) != 0) {
30
cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A15;
38
- return -TARGET_EFAULT;
31
cpu->midr = 0x412fc0f1;
39
- }
40
+ ret = get_errno(clock_adjtime(arg1, &htx));
41
+ if (!is_error(ret) && host_to_target_timex(arg2, &htx)) {
42
+ return -TARGET_EFAULT;
43
}
44
}
45
return ret;
32
--
46
--
33
2.19.1
47
2.34.1
34
48
35
49
diff view generated by jsdifflib
Remove a TODO comment about implementing the vectored interrupt
controller. We have had an implementation of that for a decade;
it's in hw/intc/pl190.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181106164118.16184-1-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 target/arm/helper.c | 1 -
 1 file changed, 1 deletion(-)

Add comments to the in_* fields in the S1Translate struct
that explain what they're doing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-2-peter.maydell@linaro.org
---
 target/arm/ptw.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)
diff --git a/target/arm/helper.c b/target/arm/helper.c
11
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
15
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
13
--- a/target/arm/ptw.c
17
+++ b/target/arm/helper.c
14
+++ b/target/arm/ptw.c
18
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
15
@@ -XXX,XX +XXX,XX @@
19
return;
16
#endif
20
}
17
21
18
typedef struct S1Translate {
22
- /* TODO: Vectored interrupt controller. */
19
+ /*
23
switch (cs->exception_index) {
20
+ * in_mmu_idx : specifies which TTBR, TCR, etc to use for the walk.
24
case EXCP_UDEF:
21
+ * Together with in_space, specifies the architectural translation regime.
25
new_mode = ARM_CPU_MODE_UND;
22
+ */
23
ARMMMUIdx in_mmu_idx;
24
+ /*
25
+ * in_ptw_idx: specifies which mmuidx to use for the actual
26
+ * page table descriptor load operations. This will be one of the
27
+ * ARMMMUIdx_Stage2* or one of the ARMMMUIdx_Phys_* indexes.
28
+ * If a Secure ptw is "downgraded" to NonSecure by an NSTable bit,
29
+ * this field is updated accordingly.
30
+ */
31
ARMMMUIdx in_ptw_idx;
32
+ /*
33
+ * in_space: the security space for this walk. This plus
34
+ * the in_mmu_idx specify the architectural translation regime.
35
+ * If a Secure ptw is "downgraded" to NonSecure by an NSTable bit,
36
+ * this field is updated accordingly.
37
+ *
38
+ * Note that the security space for the in_ptw_idx may be different
39
+ * from that for the in_mmu_idx. We do not need to explicitly track
40
+ * the in_ptw_idx security space because:
41
+ * - if the in_ptw_idx is an ARMMMUIdx_Phys_* then the mmuidx
42
+ * itself specifies the security space
43
+ * - if the in_ptw_idx is an ARMMMUIdx_Stage2* then the security
44
+ * space used for ptw reads is the same as that of the security
45
+ * space of the stage 1 translation for all cases except where
46
+ * stage 1 is Secure; in that case the only possibilities for
47
+ * the ptw read are Secure and NonSecure, and the in_ptw_idx
48
+ * value being Stage2 vs Stage2_S distinguishes those.
49
+ */
50
ARMSecuritySpace in_space;
51
+ /*
52
+ * in_secure: whether the translation regime is a Secure one.
53
+ * This is always equal to arm_space_is_secure(in_space).
54
+ * If a Secure ptw is "downgraded" to NonSecure by an NSTable bit,
55
+ * this field is updated accordingly.
56
+ */
57
bool in_secure;
58
+ /*
59
+ * in_debug: is this a QEMU debug access (gdbstub, etc)? Debug
60
+ * accesses will not update the guest page table access flags
61
+ * and will not change the state of the softmmu TLBs.
62
+ */
63
bool in_debug;
64
/*
65
* If this is stage 2 of a stage 1+2 page table walk, then this must
26
--
66
--
27
2.19.1
67
2.34.1
28
29
diff view generated by jsdifflib
Deleted patch
From: Thomas Huth <thuth@redhat.com>

There is no active maintainer, but since Peter is picking up
patches via qemu-arm@nongnu.org, I think we could at least use
"Odd Fixes" as status here.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 1541528230-31817-1-git-send-email-thuth@redhat.com
[PMM: Also add myself as an M: contact]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/*/pxa2xx*
 F: hw/misc/mst_fpga.c
 F: include/hw/arm/pxa.h

+Sharp SL-5500 (Collie) PDA
+M: Peter Maydell <peter.maydell@linaro.org>
+L: qemu-arm@nongnu.org
+S: Odd Fixes
+F: hw/arm/collie.c
+F: hw/arm/strongarm*
+
 Stellaris
 M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
--
2.19.1
diff view generated by jsdifflib
Deleted patch
From: Eric Auger <eric.auger@redhat.com>

Commit af7d64ede0b9 (hw/arm/sysbus-fdt: Allow device matching with DT
compatible value) introduced a match_fn callback which gets called
for each registered combo to check whether a sysbus device can be
dynamically instantiated. However the callback gets called even if
the device type does not match the binding combo typename field.
This causes an assert when passing "-device ramfb" to the qemu
command line as vfio_platform_match() gets called on a non
vfio-platform device.

To fix this regression, let's change the add_fdt_node() logic so
that we first check the type and if the match_fn callback is defined,
then we also call it.

Binding combos only requesting a type check do not define the
match_fn callback.

Fixes: af7d64ede0b9 (hw/arm/sysbus-fdt: Allow device matching with
DT compatible value)

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reported-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Message-id: 20181106184212.29377-1-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sysbus-fdt.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)
33
index XXXXXXX..XXXXXXX 100644
34
--- a/hw/arm/sysbus-fdt.c
35
+++ b/hw/arm/sysbus-fdt.c
36
@@ -XXX,XX +XXX,XX @@ static bool type_match(SysBusDevice *sbdev, const BindingEntry *entry)
37
return !strcmp(object_get_typename(OBJECT(sbdev)), entry->typename);
38
}
39
40
-#define TYPE_BINDING(type, add_fn) {(type), NULL, (add_fn), type_match}
41
+#define TYPE_BINDING(type, add_fn) {(type), NULL, (add_fn), NULL}
42
43
/* list of supported dynamic sysbus bindings */
44
static const BindingEntry bindings[] = {
45
@@ -XXX,XX +XXX,XX @@ static void add_fdt_node(SysBusDevice *sbdev, void *opaque)
46
for (i = 0; i < ARRAY_SIZE(bindings); i++) {
47
const BindingEntry *iter = &bindings[i];
48
49
- if (iter->match_fn(sbdev, iter)) {
50
- ret = iter->add_fn(sbdev, opaque);
51
- assert(!ret);
52
- return;
53
+ if (type_match(sbdev, iter)) {
54
+ if (!iter->match_fn || iter->match_fn(sbdev, iter)) {
55
+ ret = iter->add_fn(sbdev, opaque);
56
+ assert(!ret);
57
+ return;
58
+ }
59
}
60
}
61
error_report("Device %s can not be dynamically instantiated",
62
--
63
2.19.1
64
65
diff view generated by jsdifflib
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

This would cause an infinite recursion or loop.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20181110121711.15257-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
     CPUState *cs = ENV_GET_CPU(env);

     if (tlb_force_broadcast(env)) {
-        tlbi_aa64_vmalle1_write(env, NULL, value);
+        tlbi_aa64_vmalle1is_write(env, NULL, value);
         return;
     }

--
2.19.1
diff view generated by jsdifflib
Deleted patch
From: Alex Bennée <alex.bennee@linaro.org>

This only fails with some (broken) versions of gdb but we should
treat the top bits of DBGBVR as RESS. Properly sign extend QEMU's
reference copy of dbgbvr and also update the register descriptions in
the comment.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181109152119.9242-2-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/kvm64.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)
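The diff below does the sign extension with QEMU's sextract64(). As a stand-alone illustration of what that call does to a DBGBVR-style value (the helper here only mimics sextract64() for the demo and assumes the usual arithmetic right shift of GCC/Clang):

    #include <inttypes.h>
    #include <stdio.h>

    /* demo-only stand-in for QEMU's sextract64(value, start, length) */
    static int64_t sextract64_demo(uint64_t value, int start, int length)
    {
        return (int64_t)(value << (64 - length - start)) >> (64 - length);
    }

    int main(void)
    {
        /* bit 52 set: the RESS bits [63:53] must become copies of it */
        uint64_t addr = 0x0010ffffffff0000ULL;

        printf("%016" PRIx64 "\n", (uint64_t)sextract64_demo(addr, 0, 53));
        /* prints fff0ffffffff0000 */
        return 0;
    }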
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/kvm64.c
19
+++ b/target/arm/kvm64.c
20
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_init_debug(CPUState *cs)
21
* capable of fancier matching but that will require exposing that
22
* fanciness to GDB's interface
23
*
24
- * D7.3.2 DBGBCR<n>_EL1, Debug Breakpoint Control Registers
25
+ * DBGBCR<n>_EL1, Debug Breakpoint Control Registers
26
*
27
* 31 24 23 20 19 16 15 14 13 12 9 8 5 4 3 2 1 0
28
* +------+------+-------+-----+----+------+-----+------+-----+---+
29
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_init_debug(CPUState *cs)
30
* SSC/HMC/PMC: Security, Higher and Priv access control (Table D-12)
31
* BAS: Byte Address Select (RES1 for AArch64)
32
* E: Enable bit
33
+ *
34
+ * DBGBVR<n>_EL1, Debug Breakpoint Value Registers
35
+ *
36
+ * 63 53 52 49 48 2 1 0
37
+ * +------+-----------+----------+-----+
38
+ * | RESS | VA[52:49] | VA[48:2] | 0 0 |
39
+ * +------+-----------+----------+-----+
40
+ *
41
+ * Depending on the addressing mode bits the top bits of the register
42
+ * are a sign extension of the highest applicable VA bit. Some
43
+ * versions of GDB don't do it correctly so we ensure they are correct
44
+ * here so future PC comparisons will work properly.
45
*/
46
+
47
static int insert_hw_breakpoint(target_ulong addr)
48
{
49
HWBreakpoint brk = {
50
.bcr = 0x1, /* BCR E=1, enable */
51
- .bvr = addr
52
+ .bvr = sextract64(addr, 0, 53)
53
};
54
55
if (cur_hw_bps >= max_hw_bps) {
56
--
57
2.19.1
58
59
diff view generated by jsdifflib
In commit 8a0fc3a29fc2315325400 we tried to implement HCR_EL2.{VI,VF},
but we got it wrong and had to revert it.

In that commit we implemented them as simply tracking whether there
is a pending virtual IRQ or virtual FIQ. This is not correct -- these
bits cause a software-generated VIRQ/VFIQ, which is distinct from
whether there is a hardware-generated VIRQ/VFIQ caused by the
external interrupt controller. So we need to track separately
the HCR_EL2 bit state and the external virq/vfiq line state, and
OR the two together to get the actual pending VIRQ/VFIQ state.

Fixes: 8a0fc3a29fc2315325400c738f807d0d4ae0ab7f
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20181109134731.11605-4-peter.maydell@linaro.org
---
 target/arm/internals.h | 18 ++++++++++++++++
 target/arm/cpu.c       | 48 +++++++++++++++++++++++++++++++++++++++++-
 target/arm/helper.c    | 20 ++++++++++++++--
 3 files changed, 83 insertions(+), 3 deletions(-)

In commit fe4a5472ccd6 we rearranged the logic in S1_ptw_translate()
so that the debug-access "call get_phys_addr_*" codepath is used both
when S1 is doing ptw reads from stage 2 and when it is doing ptw
reads from physical memory. However, we didn't update the
calculation of s2ptw->in_space and s2ptw->in_secure to account for
the "ptw reads from physical memory" case. This meant that debug
accesses when in Secure state broke.

Create a new function S2_security_space() which returns the
correct security space to use for the ptw load, and use it to
determine the correct .in_secure and .in_space fields for the
stage 2 lookup for the ptw load.

Reported-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-3-peter.maydell@linaro.org
Fixes: fe4a5472ccd6 ("target/arm: Use get_phys_addr_with_struct in S1_ptw_translate")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 37 ++++++++++++++++++++++++++++++-----
 1 file changed, 32 insertions(+), 5 deletions(-)
diff --git a/target/arm/internals.h b/target/arm/internals.h
25
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
23
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/internals.h
27
--- a/target/arm/ptw.c
25
+++ b/target/arm/internals.h
28
+++ b/target/arm/ptw.c
26
@@ -XXX,XX +XXX,XX @@ static inline const char *aarch32_mode_name(uint32_t psr)
29
@@ -XXX,XX +XXX,XX @@ static bool S2_attrs_are_device(uint64_t hcr, uint8_t attrs)
27
return cpu_mode_names[psr & 0xf];
30
}
28
}
31
}
29
32
30
+/**
33
+static ARMSecuritySpace S2_security_space(ARMSecuritySpace s1_space,
31
+ * arm_cpu_update_virq: Update CPU_INTERRUPT_VIRQ bit in cs->interrupt_request
34
+ ARMMMUIdx s2_mmu_idx)
32
+ *
33
+ * Update the CPU_INTERRUPT_VIRQ bit in cs->interrupt_request, following
34
+ * a change to either the input VIRQ line from the GIC or the HCR_EL2.VI bit.
35
+ * Must be called with the iothread lock held.
36
+ */
37
+void arm_cpu_update_virq(ARMCPU *cpu);
38
+
39
+/**
40
+ * arm_cpu_update_vfiq: Update CPU_INTERRUPT_VFIQ bit in cs->interrupt_request
41
+ *
42
+ * Update the CPU_INTERRUPT_VFIQ bit in cs->interrupt_request, following
43
+ * a change to either the input VFIQ line from the GIC or the HCR_EL2.VF bit.
44
+ * Must be called with the iothread lock held.
45
+ */
46
+void arm_cpu_update_vfiq(ARMCPU *cpu);
47
+
48
#endif
49
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/cpu.c
52
+++ b/target/arm/cpu.c
53
@@ -XXX,XX +XXX,XX @@ static bool arm_v7m_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
54
}
55
#endif
56
57
+void arm_cpu_update_virq(ARMCPU *cpu)
58
+{
35
+{
59
+ /*
36
+ /*
60
+ * Update the interrupt level for VIRQ, which is the logical OR of
37
+ * Return the security space to use for stage 2 when doing
61
+ * the HCR_EL2.VI bit and the input line level from the GIC.
38
+ * the S1 page table descriptor load.
62
+ */
39
+ */
63
+ CPUARMState *env = &cpu->env;
40
+ if (regime_is_stage2(s2_mmu_idx)) {
64
+ CPUState *cs = CPU(cpu);
41
+ /*
65
+
42
+ * The security space for ptw reads is almost always the same
66
+ bool new_state = (env->cp15.hcr_el2 & HCR_VI) ||
43
+ * as that of the security space of the stage 1 translation.
67
+ (env->irq_line_state & CPU_INTERRUPT_VIRQ);
44
+ * The only exception is when stage 1 is Secure; in that case
68
+
45
+ * the ptw read might be to the Secure or the NonSecure space
69
+ if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VIRQ) != 0)) {
46
+ * (but never Realm or Root), and the s2_mmu_idx tells us which.
70
+ if (new_state) {
47
+ * Root translations are always single-stage.
71
+ cpu_interrupt(cs, CPU_INTERRUPT_VIRQ);
48
+ */
49
+ if (s1_space == ARMSS_Secure) {
50
+ return arm_secure_to_space(s2_mmu_idx == ARMMMUIdx_Stage2_S);
72
+ } else {
51
+ } else {
73
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_VIRQ);
52
+ assert(s2_mmu_idx != ARMMMUIdx_Stage2_S);
53
+ assert(s1_space != ARMSS_Root);
54
+ return s1_space;
74
+ }
55
+ }
56
+ } else {
57
+ /* ptw loads are from phys: the mmu idx itself says which space */
58
+ return arm_phys_to_space(s2_mmu_idx);
75
+ }
59
+ }
76
+}
60
+}
77
+
61
+
78
+void arm_cpu_update_vfiq(ARMCPU *cpu)
62
/* Translate a S1 pagetable walk through S2 if needed. */
79
+{
63
static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
80
+ /*
64
hwaddr addr, ARMMMUFaultInfo *fi)
81
+ * Update the interrupt level for VFIQ, which is the logical OR of
82
+ * the HCR_EL2.VF bit and the input line level from the GIC.
83
+ */
84
+ CPUARMState *env = &cpu->env;
85
+ CPUState *cs = CPU(cpu);
86
+
87
+ bool new_state = (env->cp15.hcr_el2 & HCR_VF) ||
88
+ (env->irq_line_state & CPU_INTERRUPT_VFIQ);
89
+
90
+ if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VFIQ) != 0)) {
91
+ if (new_state) {
92
+ cpu_interrupt(cs, CPU_INTERRUPT_VFIQ);
93
+ } else {
94
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_VFIQ);
95
+ }
96
+ }
97
+}
98
+
99
#ifndef CONFIG_USER_ONLY
100
static void arm_cpu_set_irq(void *opaque, int irq, int level)
101
{
65
{
102
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
66
- ARMSecuritySpace space = ptw->in_space;
103
67
bool is_secure = ptw->in_secure;
104
switch (irq) {
68
ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
105
case ARM_CPU_VIRQ:
69
ARMMMUIdx s2_mmu_idx = ptw->in_ptw_idx;
106
+ assert(arm_feature(env, ARM_FEATURE_EL2));
70
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
107
+ arm_cpu_update_virq(cpu);
71
* From gdbstub, do not use softmmu so that we don't modify the
108
+ break;
72
* state of the cpu at all, including softmmu tlb contents.
109
case ARM_CPU_VFIQ:
73
*/
110
assert(arm_feature(env, ARM_FEATURE_EL2));
74
+ ARMSecuritySpace s2_space = S2_security_space(ptw->in_space, s2_mmu_idx);
111
- /* fall through */
75
S1Translate s2ptw = {
112
+ arm_cpu_update_vfiq(cpu);
76
.in_mmu_idx = s2_mmu_idx,
113
+ break;
77
.in_ptw_idx = ptw_idx_for_stage_2(env, s2_mmu_idx),
114
case ARM_CPU_IRQ:
78
- .in_secure = s2_mmu_idx == ARMMMUIdx_Stage2_S,
115
case ARM_CPU_FIQ:
79
- .in_space = (s2_mmu_idx == ARMMMUIdx_Stage2_S ? ARMSS_Secure
116
if (level) {
80
- : space == ARMSS_Realm ? ARMSS_Realm
117
diff --git a/target/arm/helper.c b/target/arm/helper.c
81
- : ARMSS_NonSecure),
118
index XXXXXXX..XXXXXXX 100644
82
+ .in_secure = arm_space_is_secure(s2_space),
119
--- a/target/arm/helper.c
83
+ .in_space = s2_space,
120
+++ b/target/arm/helper.c
84
.in_debug = true,
121
@@ -XXX,XX +XXX,XX @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
85
};
122
tlb_flush(CPU(cpu));
86
GetPhysAddrResult s2 = { };
123
}
124
env->cp15.hcr_el2 = value;
125
+
126
+ /*
127
+ * Updates to VI and VF require us to update the status of
128
+ * virtual interrupts, which are the logical OR of these bits
129
+ * and the state of the input lines from the GIC. (This requires
130
+ * that we have the iothread lock, which is done by marking the
131
+ * reginfo structs as ARM_CP_IO.)
132
+ * Note that if a write to HCR pends a VIRQ or VFIQ it is never
133
+ * possible for it to be taken immediately, because VIRQ and
134
+ * VFIQ are masked unless running at EL0 or EL1, and HCR
135
+ * can only be written at EL2.
136
+ */
137
+ g_assert(qemu_mutex_iothread_locked());
138
+ arm_cpu_update_virq(cpu);
139
+ arm_cpu_update_vfiq(cpu);
140
}
141
142
static void hcr_writehigh(CPUARMState *env, const ARMCPRegInfo *ri,
143
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
144
145
static const ARMCPRegInfo el2_cp_reginfo[] = {
146
{ .name = "HCR_EL2", .state = ARM_CP_STATE_AA64,
147
+ .type = ARM_CP_IO,
148
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
149
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
150
.writefn = hcr_write },
151
{ .name = "HCR", .state = ARM_CP_STATE_AA32,
152
- .type = ARM_CP_ALIAS,
153
+ .type = ARM_CP_ALIAS | ARM_CP_IO,
154
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
155
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
156
.writefn = hcr_writelow },
157
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
158
159
static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
160
{ .name = "HCR2", .state = ARM_CP_STATE_AA32,
161
- .type = ARM_CP_ALIAS,
162
+ .type = ARM_CP_ALIAS | ARM_CP_IO,
163
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
164
.access = PL2_RW,
165
.fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2),
166
--
87
--
167
2.19.1
88
2.34.1
168
169
diff view generated by jsdifflib
From: Alex Bennée <alex.bennee@linaro.org>

Fix the assertion failure when running interrupts.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181109152119.9242-3-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/kvm64.c | 2 ++
 1 file changed, 2 insertions(+)

In get_phys_addr_twostage() the code that applies the effects of
VSTCR.{SA,SW} and VTCR.{NSA,NSW} only updates result->f.attrs.secure.
Now we also have f.attrs.space for FEAT_RME, we need to keep the two
in sync.

These bits only have an effect for Secure space translations, not
for Root, so use the input in_space field to determine whether to
apply them rather than the input is_secure. This doesn't actually
make a difference because Root translations are never two-stage,
but it's a little clearer.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-4-peter.maydell@linaro.org
---
 target/arm/ptw.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
19
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
15
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/kvm64.c
21
--- a/target/arm/ptw.c
17
+++ b/target/arm/kvm64.c
22
+++ b/target/arm/ptw.c
18
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit)
23
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
19
cs->exception_index = EXCP_BKPT;
24
hwaddr ipa;
20
env->exception.syndrome = debug_exit->hsr;
25
int s1_prot, s1_lgpgsz;
21
env->exception.vaddress = debug_exit->far;
26
bool is_secure = ptw->in_secure;
22
+ qemu_mutex_lock_iothread();
27
+ ARMSecuritySpace in_space = ptw->in_space;
23
cc->do_interrupt(cs);
28
bool ret, ipa_secure;
24
+ qemu_mutex_unlock_iothread();
29
ARMCacheAttrs cacheattrs1;
30
ARMSecuritySpace ipa_space;
31
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
32
* Check if IPA translates to secure or non-secure PA space.
33
* Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
34
*/
35
- result->f.attrs.secure =
36
- (is_secure
37
- && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
38
- && (ipa_secure
39
- || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
40
+ if (in_space == ARMSS_Secure) {
41
+ result->f.attrs.secure =
42
+ !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
43
+ && (ipa_secure
44
+ || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW)));
45
+ result->f.attrs.space = arm_secure_to_space(result->f.attrs.secure);
46
+ }
25
47
26
return false;
48
return false;
27
}
49
}
28
--
50
--
29
2.19.1
51
2.34.1
30
31
diff view generated by jsdifflib
Deleted patch
From: Alex Bennée <alex.bennee@linaro.org>

You should declare you are using a global version of a variable before
you attempt to modify it in a function.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181109152119.9242-5-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/guest-debug/test-gdbstub.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/guest-debug/test-gdbstub.py b/tests/guest-debug/test-gdbstub.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/guest-debug/test-gdbstub.py
+++ b/tests/guest-debug/test-gdbstub.py
@@ -XXX,XX +XXX,XX @@ def report(cond, msg):
         print ("PASS: %s" % (msg))
     else:
         print ("FAIL: %s" % (msg))
+        global failcount
         failcount += 1


--
2.19.1
diff view generated by jsdifflib
Deleted patch
From: Alex Bennée <alex.bennee@linaro.org>

We already have this symbol defined so let's use it.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181109152119.9242-7-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline int arm_debug_target_el(CPUARMState *env)

     if (arm_feature(env, ARM_FEATURE_EL2) && !secure) {
         route_to_el2 = env->cp15.hcr_el2 & HCR_TGE ||
-                       env->cp15.mdcr_el2 & (1 << 8);
+                       env->cp15.mdcr_el2 & MDCR_TDE;
     }

     if (route_to_el2) {
--
2.19.1
diff view generated by jsdifflib
Hyp mode is an exception to the general rule that each AArch32
mode has its own r13, r14 and SPSR -- it has a banked r13 and
SPSR but shares its r14 with User and System mode. We were
incorrectly implementing it as banked, which meant that on
entry to Hyp mode r14 was 0 rather than the USR/SYS r14.

We provide a new function r14_bank_number() which is like
the existing bank_number() but provides the index into
env->banked_r14[]; bank_number() provides the index to use
for env->banked_r13[] and env->banked_cpsr[].

All the points in the code that were using bank_number()
to index into env->banked_r14[] are updated for consistency:
 * switch_mode() -- this is the only place where we fix
   an actual bug
 * aarch64_sync_32_to_64() and aarch64_sync_64_to_32():
   no behavioural change as we already special-cased Hyp R14
 * kvm32.c: no behavioural change since the guest can't ever
   be in Hyp mode, but conceptually the right thing to do
 * msr_banked()/mrs_banked(): we can never get to the case
   that accesses banked_r14[] with tgtmode == ARM_CPU_MODE_HYP,
   so no behavioural change

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20181109173553.22341-2-peter.maydell@linaro.org
---
 target/arm/internals.h | 16 ++++++++++++++++
 target/arm/helper.c    | 29 +++++++++++++++--------------
 target/arm/kvm32.c     |  4 ++--
 target/arm/op_helper.c |  4 ++--
 4 files changed, 35 insertions(+), 18 deletions(-)

In commit f0a08b0913befbd we changed the type of the PC from
target_ulong to vaddr. In doing so we inadvertently dropped the
zero-padding on the PC in trace lines (the second item inside the []
in these lines). They used to look like this on AArch64, for
instance:

Trace 0: 0x7f2260000100 [00000000/0000000040000000/00000061/ff200000]

and now they look like this:
Trace 0: 0x7f4f50000100 [00000000/40000000/00000061/ff200000]

and if the PC happens to be somewhere low like 0x5000
then the field is shown as /5000/.

This is because TARGET_FMT_lx is a "%08x" or "%016x" specifier,
depending on TARGET_LONG_SIZE, whereas VADDR_PRIx is just PRIx64
with no width specifier.

Restore the zero-padding by adding an 016 width specifier to
this tracing and a couple of others that were similarly recently
changed to use VADDR_PRIx without a width specifier.

We can't unfortunately restore the "32-bit guests are padded to
8 hex digits and 64-bit guests to 16 hex digits" behaviour so
easily.

Fixes: f0a08b0913befbd ("accel/tcg/cpu-exec.c: Widen pc to vaddr")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-id: 20230711165434.4123674-1-peter.maydell@linaro.org
---
 accel/tcg/cpu-exec.c      | 4 ++--
 accel/tcg/translate-all.c | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)
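As a stand-alone illustration of the format-specifier difference described above (not part of the patch): with a bare PRIx64 the printed width depends on the value, while an 016 width always pads to 16 hex digits.

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t pc = 0x5000;

        printf("[%" PRIx64 "]\n", pc);    /* prints [5000] */
        printf("[%016" PRIx64 "]\n", pc); /* prints [0000000000005000] */
        return 0;
    }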
36
diff --git a/target/arm/internals.h b/target/arm/internals.h
37
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
37
index XXXXXXX..XXXXXXX 100644
38
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/internals.h
39
--- a/accel/tcg/cpu-exec.c
39
+++ b/target/arm/internals.h
40
+++ b/accel/tcg/cpu-exec.c
40
@@ -XXX,XX +XXX,XX @@ static inline int bank_number(int mode)
41
@@ -XXX,XX +XXX,XX @@ static void log_cpu_exec(vaddr pc, CPUState *cpu,
41
g_assert_not_reached();
42
if (qemu_log_in_addr_range(pc)) {
42
}
43
qemu_log_mask(CPU_LOG_EXEC,
43
44
"Trace %d: %p [%08" PRIx64
44
+/**
45
- "/%" VADDR_PRIx "/%08x/%08x] %s\n",
45
+ * r14_bank_number: Map CPU mode onto register bank for r14
46
+ "/%016" VADDR_PRIx "/%08x/%08x] %s\n",
46
+ *
47
cpu->cpu_index, tb->tc.ptr, tb->cs_base, pc,
47
+ * Given an AArch32 CPU mode, return the index into the saved register
48
tb->flags, tb->cflags, lookup_symbol(pc));
48
+ * banks to use for the R14 (LR) in that mode. This is the same as
49
49
+ * bank_number(), except for the special case of Hyp mode, where
50
@@ -XXX,XX +XXX,XX @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
50
+ * R14 is shared with USR and SYS, unlike its R13 and SPSR.
51
if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
51
+ * This should be used as the index into env->banked_r14[], and
52
vaddr pc = log_pc(cpu, last_tb);
52
+ * bank_number() used for the index into env->banked_r13[] and
53
if (qemu_log_in_addr_range(pc)) {
53
+ * env->banked_spsr[].
54
- qemu_log("Stopped execution of TB chain before %p [%"
54
+ */
55
+ qemu_log("Stopped execution of TB chain before %p [%016"
55
+static inline int r14_bank_number(int mode)
56
VADDR_PRIx "] %s\n",
56
+{
57
last_tb->tc.ptr, pc, lookup_symbol(pc));
57
+ return (mode == ARM_CPU_MODE_HYP) ? BANK_USRSYS : bank_number(mode);
58
}
58
+}
59
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
59
+
60
void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu);
61
void arm_translate_init(void);
62
63
diff --git a/target/arm/helper.c b/target/arm/helper.c
64
index XXXXXXX..XXXXXXX 100644
60
index XXXXXXX..XXXXXXX 100644
65
--- a/target/arm/helper.c
61
--- a/accel/tcg/translate-all.c
66
+++ b/target/arm/helper.c
62
+++ b/accel/tcg/translate-all.c
67
@@ -XXX,XX +XXX,XX @@ static void switch_mode(CPUARMState *env, int mode)
63
@@ -XXX,XX +XXX,XX @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
68
64
if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
69
i = bank_number(old_mode);
65
vaddr pc = log_pc(cpu, tb);
70
env->banked_r13[i] = env->regs[13];
66
if (qemu_log_in_addr_range(pc)) {
71
- env->banked_r14[i] = env->regs[14];
67
- qemu_log("cpu_io_recompile: rewound execution of TB to %"
72
env->banked_spsr[i] = env->spsr;
68
+ qemu_log("cpu_io_recompile: rewound execution of TB to %016"
73
69
VADDR_PRIx "\n", pc);
74
i = bank_number(mode);
75
env->regs[13] = env->banked_r13[i];
76
- env->regs[14] = env->banked_r14[i];
77
env->spsr = env->banked_spsr[i];
78
+
79
+ env->banked_r14[r14_bank_number(old_mode)] = env->regs[14];
80
+ env->regs[14] = env->banked_r14[r14_bank_number(mode)];
81
}
82
83
/* Physical Interrupt Target EL Lookup Table
84
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_32_to_64(CPUARMState *env)
85
if (mode == ARM_CPU_MODE_HYP) {
86
env->xregs[14] = env->regs[14];
87
} else {
88
- env->xregs[14] = env->banked_r14[bank_number(ARM_CPU_MODE_USR)];
89
+ env->xregs[14] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_USR)];
90
}
70
}
91
}
71
}
92
93
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_32_to_64(CPUARMState *env)
94
env->xregs[16] = env->regs[14];
95
env->xregs[17] = env->regs[13];
96
} else {
97
- env->xregs[16] = env->banked_r14[bank_number(ARM_CPU_MODE_IRQ)];
98
+ env->xregs[16] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_IRQ)];
99
env->xregs[17] = env->banked_r13[bank_number(ARM_CPU_MODE_IRQ)];
100
}
101
102
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_32_to_64(CPUARMState *env)
103
env->xregs[18] = env->regs[14];
104
env->xregs[19] = env->regs[13];
105
} else {
106
- env->xregs[18] = env->banked_r14[bank_number(ARM_CPU_MODE_SVC)];
107
+ env->xregs[18] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_SVC)];
108
env->xregs[19] = env->banked_r13[bank_number(ARM_CPU_MODE_SVC)];
109
}
110
111
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_32_to_64(CPUARMState *env)
112
env->xregs[20] = env->regs[14];
113
env->xregs[21] = env->regs[13];
114
} else {
115
- env->xregs[20] = env->banked_r14[bank_number(ARM_CPU_MODE_ABT)];
116
+ env->xregs[20] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_ABT)];
117
env->xregs[21] = env->banked_r13[bank_number(ARM_CPU_MODE_ABT)];
118
}
119
120
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_32_to_64(CPUARMState *env)
121
env->xregs[22] = env->regs[14];
122
env->xregs[23] = env->regs[13];
123
} else {
124
- env->xregs[22] = env->banked_r14[bank_number(ARM_CPU_MODE_UND)];
125
+ env->xregs[22] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_UND)];
126
env->xregs[23] = env->banked_r13[bank_number(ARM_CPU_MODE_UND)];
127
}
128
129
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_32_to_64(CPUARMState *env)
130
env->xregs[i] = env->fiq_regs[i - 24];
131
}
132
env->xregs[29] = env->banked_r13[bank_number(ARM_CPU_MODE_FIQ)];
133
- env->xregs[30] = env->banked_r14[bank_number(ARM_CPU_MODE_FIQ)];
134
+ env->xregs[30] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_FIQ)];
135
}
136
137
env->pc = env->regs[15];
138
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
139
if (mode == ARM_CPU_MODE_HYP) {
140
env->regs[14] = env->xregs[14];
141
} else {
142
- env->banked_r14[bank_number(ARM_CPU_MODE_USR)] = env->xregs[14];
143
+ env->banked_r14[r14_bank_number(ARM_CPU_MODE_USR)] = env->xregs[14];
144
}
145
}
146
147
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
148
env->regs[14] = env->xregs[16];
149
env->regs[13] = env->xregs[17];
150
} else {
151
- env->banked_r14[bank_number(ARM_CPU_MODE_IRQ)] = env->xregs[16];
152
+ env->banked_r14[r14_bank_number(ARM_CPU_MODE_IRQ)] = env->xregs[16];
153
env->banked_r13[bank_number(ARM_CPU_MODE_IRQ)] = env->xregs[17];
154
}
155
156
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
157
env->regs[14] = env->xregs[18];
158
env->regs[13] = env->xregs[19];
159
} else {
160
- env->banked_r14[bank_number(ARM_CPU_MODE_SVC)] = env->xregs[18];
161
+ env->banked_r14[r14_bank_number(ARM_CPU_MODE_SVC)] = env->xregs[18];
162
env->banked_r13[bank_number(ARM_CPU_MODE_SVC)] = env->xregs[19];
163
}
164
165
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
166
env->regs[14] = env->xregs[20];
167
env->regs[13] = env->xregs[21];
168
} else {
169
- env->banked_r14[bank_number(ARM_CPU_MODE_ABT)] = env->xregs[20];
170
+ env->banked_r14[r14_bank_number(ARM_CPU_MODE_ABT)] = env->xregs[20];
171
env->banked_r13[bank_number(ARM_CPU_MODE_ABT)] = env->xregs[21];
172
}
173
174
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
175
env->regs[14] = env->xregs[22];
176
env->regs[13] = env->xregs[23];
177
} else {
178
- env->banked_r14[bank_number(ARM_CPU_MODE_UND)] = env->xregs[22];
179
+ env->banked_r14[r14_bank_number(ARM_CPU_MODE_UND)] = env->xregs[22];
180
env->banked_r13[bank_number(ARM_CPU_MODE_UND)] = env->xregs[23];
181
}
182
183
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
184
env->fiq_regs[i - 24] = env->xregs[i];
185
}
186
env->banked_r13[bank_number(ARM_CPU_MODE_FIQ)] = env->xregs[29];
187
- env->banked_r14[bank_number(ARM_CPU_MODE_FIQ)] = env->xregs[30];
188
+ env->banked_r14[r14_bank_number(ARM_CPU_MODE_FIQ)] = env->xregs[30];
189
}
190
191
env->regs[15] = env->pc;
192
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
193
index XXXXXXX..XXXXXXX 100644
194
--- a/target/arm/kvm32.c
195
+++ b/target/arm/kvm32.c
196
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
197
memcpy(env->usr_regs, env->regs + 8, 5 * sizeof(uint32_t));
198
}
199
env->banked_r13[bn] = env->regs[13];
200
- env->banked_r14[bn] = env->regs[14];
201
env->banked_spsr[bn] = env->spsr;
202
+ env->banked_r14[r14_bank_number(mode)] = env->regs[14];
203
204
/* Now we can safely copy stuff down to the kernel */
205
for (i = 0; i < ARRAY_SIZE(regs); i++) {
206
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
207
memcpy(env->regs + 8, env->usr_regs, 5 * sizeof(uint32_t));
208
}
209
env->regs[13] = env->banked_r13[bn];
210
- env->regs[14] = env->banked_r14[bn];
211
env->spsr = env->banked_spsr[bn];
212
+ env->regs[14] = env->banked_r14[r14_bank_number(mode)];
213
214
/* VFP registers */
215
r.id = KVM_REG_ARM | KVM_REG_SIZE_U64 | KVM_REG_ARM_VFP;
216
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
217
index XXXXXXX..XXXXXXX 100644
218
--- a/target/arm/op_helper.c
219
+++ b/target/arm/op_helper.c
220
@@ -XXX,XX +XXX,XX @@ void HELPER(msr_banked)(CPUARMState *env, uint32_t value, uint32_t tgtmode,
221
env->banked_r13[bank_number(tgtmode)] = value;
222
break;
223
case 14:
224
- env->banked_r14[bank_number(tgtmode)] = value;
225
+ env->banked_r14[r14_bank_number(tgtmode)] = value;
226
break;
227
case 8 ... 12:
228
switch (tgtmode) {
229
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(mrs_banked)(CPUARMState *env, uint32_t tgtmode, uint32_t regno)
230
case 13:
231
return env->banked_r13[bank_number(tgtmode)];
232
case 14:
233
- return env->banked_r14[bank_number(tgtmode)];
234
+ return env->banked_r14[r14_bank_number(tgtmode)];
235
case 8 ... 12:
236
switch (tgtmode) {
237
case ARM_CPU_MODE_USR:
238
--
72
--
239
2.19.1
73
2.34.1
240
74
241
75
diff view generated by jsdifflib
From: Alex Bennée <alex.bennee@linaro.org>

The test was incomplete and incorrectly caused debug exceptions to be
generated when returning to EL2 after a failed attempt to single-step
an EL1 instruction. Fix this while cleaning up the function a little.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181109152119.9242-8-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 39 ++++++++++++++++++++++++---------------
 1 file changed, 24 insertions(+), 15 deletions(-)

From: Tong Ho <tong.ho@amd.com>

Add a check in the bit-set operation to write the backstore
only if the affected bit is 0 before.

With this in place, there will be no need for callers to
do the checking in order to avoid unnecessary writes.

Signed-off-by: Tong Ho <tong.ho@amd.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/nvram/xlnx-efuse.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
diff --git a/hw/nvram/xlnx-efuse.c b/hw/nvram/xlnx-efuse.c
16
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
20
--- a/hw/nvram/xlnx-efuse.c
18
+++ b/target/arm/cpu.h
21
+++ b/hw/nvram/xlnx-efuse.c
19
@@ -XXX,XX +XXX,XX @@ static inline bool arm_v7m_csselr_razwi(ARMCPU *cpu)
22
@@ -XXX,XX +XXX,XX @@ static bool efuse_ro_bits_find(XlnxEFuse *s, uint32_t k)
20
return (cpu->clidr & R_V7M_CLIDR_CTYPE_ALL_MASK) != 0;
23
24
bool xlnx_efuse_set_bit(XlnxEFuse *s, unsigned int bit)
25
{
26
+ uint32_t set, *row;
27
+
28
if (efuse_ro_bits_find(s, bit)) {
29
g_autofree char *path = object_get_canonical_path(OBJECT(s));
30
31
@@ -XXX,XX +XXX,XX @@ bool xlnx_efuse_set_bit(XlnxEFuse *s, unsigned int bit)
32
return false;
33
}
34
35
- s->fuse32[bit / 32] |= 1 << (bit % 32);
36
- efuse_bdrv_sync(s, bit);
37
+ /* Avoid back-end write unless there is a real update */
38
+ row = &s->fuse32[bit / 32];
39
+ set = 1 << (bit % 32);
40
+ if (!(set & *row)) {
41
+ *row |= set;
42
+ efuse_bdrv_sync(s, bit);
43
+ }
44
return true;
21
}
45
}
22
46
23
+/* See AArch64.GenerateDebugExceptionsFrom() in ARM ARM pseudocode */
24
static inline bool aa64_generate_debug_exceptions(CPUARMState *env)
25
{
26
- if (arm_is_secure(env)) {
27
- /* MDCR_EL3.SDD disables debug events from Secure state */
28
- if (extract32(env->cp15.mdcr_el3, 16, 1) != 0
29
- || arm_current_el(env) == 3) {
30
- return false;
31
- }
32
+ int cur_el = arm_current_el(env);
33
+ int debug_el;
34
+
35
+ if (cur_el == 3) {
36
+ return false;
37
}
38
39
- if (arm_current_el(env) == arm_debug_target_el(env)) {
40
- if ((extract32(env->cp15.mdscr_el1, 13, 1) == 0)
41
- || (env->daif & PSTATE_D)) {
42
- return false;
43
- }
44
+ /* MDCR_EL3.SDD disables debug events from Secure state */
45
+ if (arm_is_secure_below_el3(env)
46
+ && extract32(env->cp15.mdcr_el3, 16, 1)) {
47
+ return false;
48
}
49
- return true;
50
+
51
+ /*
52
+ * Same EL to same EL debug exceptions need MDSCR_KDE enabled
53
+ * while not masking the (D)ebug bit in DAIF.
54
+ */
55
+ debug_el = arm_debug_target_el(env);
56
+
57
+ if (cur_el == debug_el) {
58
+ return extract32(env->cp15.mdscr_el1, 13, 1)
59
+ && !(env->daif & PSTATE_D);
60
+ }
61
+
62
+ /* Otherwise the debug target needs to be a higher EL */
63
+ return debug_el > cur_el;
64
}
65
66
static inline bool aa32_generate_debug_exceptions(CPUARMState *env)
67
@@ -XXX,XX +XXX,XX @@ static inline bool aa32_generate_debug_exceptions(CPUARMState *env)
68
* since the pseudocode has it at all callsites except for the one in
69
* CheckSoftwareStep(), where it is elided because both branches would
70
* always return the same value.
71
- *
72
- * Parts of the pseudocode relating to EL2 and EL3 are omitted because we
73
- * don't yet implement those exception levels or their associated trap bits.
74
*/
75
static inline bool arm_generate_debug_exceptions(CPUARMState *env)
76
{
77
--
47
--
78
2.19.1
48
2.34.1
79
49
80
50
diff view generated by jsdifflib
Deleted patch
This reverts commit 8a0fc3a29fc2315325400c738f807d0d4ae0ab7f.

The implementation of HCR.VI and VF in that commit is not
correct -- they do not track the overall "is there a pending
VIRQ or VFIQ" status, but whether there is a pending interrupt
due to "this mechanism", ie the hypervisor having set the VI/VF
bits. The overall pending state for VIRQ and VFIQ is effectively
the logical OR of the inbound lines from the GIC with the
VI and VF bits. Commit 8a0fc3a29fc231 would result in pending
VIRQ/VFIQ possibly being lost when the hypervisor wrote to HCR.

As a preliminary to implementing the HCR.VI/VF feature properly,
revert the broken one entirely.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20181109134731.11605-2-peter.maydell@linaro.org
---
 target/arm/helper.c | 47 ++++-----------------------------------
 1 file changed, 4 insertions(+), 43 deletions(-)
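A minimal sketch of the "logical OR" rule described above -- illustrative only, not the QEMU implementation (that is what the arm_cpu_update_virq()/arm_cpu_update_vfiq() helpers added later in this series do). The bit positions are the architectural ones for HCR_EL2.VF (bit 6) and HCR_EL2.VI (bit 7), mirroring QEMU's HCR_VF/HCR_VI constants:

    #include <stdbool.h>
    #include <stdint.h>

    #define HCR_VF (1ULL << 6)   /* virtual FIQ pending, set by the hypervisor */
    #define HCR_VI (1ULL << 7)   /* virtual IRQ pending, set by the hypervisor */

    /* overall pending state = HCR_EL2 bit OR'd with the external GIC line */
    static bool virq_pending(uint64_t hcr_el2, bool gic_virq_line)
    {
        return (hcr_el2 & HCR_VI) || gic_virq_line;
    }

    static bool vfiq_pending(uint64_t hcr_el2, bool gic_vfiq_line)
    {
        return (hcr_el2 & HCR_VF) || gic_vfiq_line;
    }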
diff --git a/target/arm/helper.c b/target/arm/helper.c
24
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/helper.c
26
+++ b/target/arm/helper.c
27
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
28
static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
29
{
30
ARMCPU *cpu = arm_env_get_cpu(env);
31
- CPUState *cs = ENV_GET_CPU(env);
32
uint64_t valid_mask = HCR_MASK;
33
34
if (arm_feature(env, ARM_FEATURE_EL3)) {
35
@@ -XXX,XX +XXX,XX @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
36
/* Clear RES0 bits. */
37
value &= valid_mask;
38
39
- /*
40
- * VI and VF are kept in cs->interrupt_request. Modifying that
41
- * requires that we have the iothread lock, which is done by
42
- * marking the reginfo structs as ARM_CP_IO.
43
- * Note that if a write to HCR pends a VIRQ or VFIQ it is never
44
- * possible for it to be taken immediately, because VIRQ and
45
- * VFIQ are masked unless running at EL0 or EL1, and HCR
46
- * can only be written at EL2.
47
- */
48
- g_assert(qemu_mutex_iothread_locked());
49
- if (value & HCR_VI) {
50
- cs->interrupt_request |= CPU_INTERRUPT_VIRQ;
51
- } else {
52
- cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
53
- }
54
- if (value & HCR_VF) {
55
- cs->interrupt_request |= CPU_INTERRUPT_VFIQ;
56
- } else {
57
- cs->interrupt_request &= ~CPU_INTERRUPT_VFIQ;
58
- }
59
- value &= ~(HCR_VI | HCR_VF);
60
-
61
/* These bits change the MMU setup:
62
* HCR_VM enables stage 2 translation
63
* HCR_PTW forbids certain page-table setups
64
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
65
hcr_write(env, NULL, value);
66
}
67
68
-static uint64_t hcr_read(CPUARMState *env, const ARMCPRegInfo *ri)
69
-{
70
- /* The VI and VF bits live in cs->interrupt_request */
71
- uint64_t ret = env->cp15.hcr_el2 & ~(HCR_VI | HCR_VF);
72
- CPUState *cs = ENV_GET_CPU(env);
73
-
74
- if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
75
- ret |= HCR_VI;
76
- }
77
- if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
78
- ret |= HCR_VF;
79
- }
80
- return ret;
81
-}
82
-
83
static const ARMCPRegInfo el2_cp_reginfo[] = {
84
{ .name = "HCR_EL2", .state = ARM_CP_STATE_AA64,
85
- .type = ARM_CP_IO,
86
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
87
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
88
- .writefn = hcr_write, .readfn = hcr_read },
89
+ .writefn = hcr_write },
90
{ .name = "HCR", .state = ARM_CP_STATE_AA32,
91
- .type = ARM_CP_ALIAS | ARM_CP_IO,
92
+ .type = ARM_CP_ALIAS,
93
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
94
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
95
- .writefn = hcr_writelow, .readfn = hcr_read },
96
+ .writefn = hcr_writelow },
97
{ .name = "ELR_EL2", .state = ARM_CP_STATE_AA64,
98
.type = ARM_CP_ALIAS,
99
.opc0 = 3, .opc1 = 4, .crn = 4, .crm = 0, .opc2 = 1,
100
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
101
102
static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
103
{ .name = "HCR2", .state = ARM_CP_STATE_AA32,
104
- .type = ARM_CP_ALIAS | ARM_CP_IO,
105
+ .type = ARM_CP_ALIAS,
106
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
107
.access = PL2_RW,
108
.fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2),
109
--
110
2.19.1
Currently we track the state of the four irq lines from the GIC
only via the cs->interrupt_request or KVM irq state. That means
that we assume that an interrupt is asserted if and only if the
external line is set. This assumption is incorrect for VIRQ
and VFIQ, because the HCR_EL2.{VI,VF} bits allow assertion
of VIRQ and VFIQ separately from the state of the external line.

To handle this, start tracking the state of the external lines
explicitly in a CPU state struct field, as is common practice
for devices.

The complicated part of this is dealing with inbound migration
from an older QEMU which didn't have this state. We assume in
that case that the older QEMU did not implement the HCR_EL2.{VI,VF}
bits as generating interrupts, and so the line state matches
the current state in cs->interrupt_request. (This is not quite
true between commit 8a0fc3a29fc2315325400c7 and its revert, but
that commit is broken and never made it into any released QEMU
version.)
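
(Sketch only, condensing the target/arm/machine.c hunks in the diff
below: pre_load seeds the new field with a value that can never be real
line state, and post_load rebuilds it from cs->interrupt_request when
the subsection was absent. Error handling and the rest of post_load are
omitted here.)

/* Condensed sketch of the compatibility scheme implemented below. */
static int cpu_pre_load(void *opaque)
{
    ARMCPU *cpu = opaque;

    /* Sentinel that is never valid line state, so post_load can tell
     * whether the irq-line-state subsection arrived.
     */
    cpu->env.irq_line_state = UINT32_MAX;
    return 0;
}

static int cpu_post_load(void *opaque, int version_id)
{
    ARMCPU *cpu = opaque;
    CPUState *cs = CPU(cpu);

    if (cpu->env.irq_line_state == UINT32_MAX) {
        /* Source was an older QEMU: derive the line state from the
         * pending-interrupt bits it did send.
         */
        cpu->env.irq_line_state = cs->interrupt_request &
            (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ |
             CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VFIQ);
    }
    return 0;
}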

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20181109134731.11605-3-peter.maydell@linaro.org
---
target/arm/cpu.h | 3 +++
target/arm/cpu.c | 16 ++++++++++++++
target/arm/machine.c | 51 ++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 70 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
uint64_t esr;
} serror;

+ /* State of our input IRQ/FIQ/VIRQ/VFIQ lines */
+ uint32_t irq_line_state;
+
/* Thumb-2 EE state. */
uint32_t teecr;
uint32_t teehbr;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
[ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ
};

+ if (level) {
+ env->irq_line_state |= mask[irq];
+ } else {
+ env->irq_line_state &= ~mask[irq];
+ }
+
switch (irq) {
case ARM_CPU_VIRQ:
case ARM_CPU_VFIQ:
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_kvm_set_irq(void *opaque, int irq, int level)
ARMCPU *cpu = opaque;
CPUState *cs = CPU(cpu);
int kvm_irq = KVM_ARM_IRQ_TYPE_CPU << KVM_ARM_IRQ_TYPE_SHIFT;
+ uint32_t linestate_bit;

switch (irq) {
case ARM_CPU_IRQ:
kvm_irq |= KVM_ARM_IRQ_CPU_IRQ;
+ linestate_bit = CPU_INTERRUPT_HARD;
break;
case ARM_CPU_FIQ:
kvm_irq |= KVM_ARM_IRQ_CPU_FIQ;
+ linestate_bit = CPU_INTERRUPT_FIQ;
break;
default:
g_assert_not_reached();
}
+
+ if (level) {
+ env->irq_line_state |= linestate_bit;
+ } else {
+ env->irq_line_state &= ~linestate_bit;
+ }
+
kvm_irq |= cs->cpu_index << KVM_ARM_IRQ_VCPU_SHIFT;
kvm_set_irq(kvm_state, kvm_irq, level ? 1 : 0);
#endif
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_serror = {
}
};

+static bool irq_line_state_needed(void *opaque)
+{
+ return true;
+}
+
+static const VMStateDescription vmstate_irq_line_state = {
+ .name = "cpu/irq-line-state",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .needed = irq_line_state_needed,
+ .fields = (VMStateField[]) {
+ VMSTATE_UINT32(env.irq_line_state, ARMCPU),
+ VMSTATE_END_OF_LIST()
+ }
+};
+
static bool m_needed(void *opaque)
{
ARMCPU *cpu = opaque;
@@ -XXX,XX +XXX,XX @@ static int cpu_pre_save(void *opaque)
return 0;
}

+static int cpu_pre_load(void *opaque)
+{
+ ARMCPU *cpu = opaque;
+ CPUARMState *env = &cpu->env;
+
+ /*
+ * Pre-initialize irq_line_state to a value that's never valid as
+ * real data, so cpu_post_load() can tell whether we've seen the
+ * irq-line-state subsection in the incoming migration state.
+ */
+ env->irq_line_state = UINT32_MAX;
+
+ return 0;
+}
+
static int cpu_post_load(void *opaque, int version_id)
{
ARMCPU *cpu = opaque;
+ CPUARMState *env = &cpu->env;
int i, v;

+ /*
+ * Handle migration compatibility from old QEMU which didn't
+ * send the irq-line-state subsection. A QEMU without it did not
+ * implement the HCR_EL2.{VI,VF} bits as generating interrupts,
+ * so for TCG the line state matches the bits set in cs->interrupt_request.
+ * For KVM the line state is not stored in cs->interrupt_request
+ * and so this will leave irq_line_state as 0, but this is OK because
+ * we only need to care about it for TCG.
+ */
+ if (env->irq_line_state == UINT32_MAX) {
+ CPUState *cs = CPU(cpu);
+
+ env->irq_line_state = cs->interrupt_request &
+ (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ |
+ CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VFIQ);
+ }
+
/* Update the values list from the incoming migration data.
* Anything in the incoming data which we don't know about is
* a migration failure; anything we know about but the incoming
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
.version_id = 22,
.minimum_version_id = 22,
.pre_save = cpu_pre_save,
+ .pre_load = cpu_pre_load,
.post_load = cpu_post_load,
.fields = (VMStateField[]) {
VMSTATE_UINT32_ARRAY(env.regs, ARMCPU, 16),
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
&vmstate_sve,
#endif
&vmstate_serror,
+ &vmstate_irq_line_state,
NULL
}
};
--
2.19.1