My OS Lock/DoubleLock patches, plus a small selection of other
bug fixes and minor things.

thanks
-- PMM

The following changes since commit 8e9398e3b1a860b8c29c670c1b6c36afe8d87849:

  Merge tag 'pull-ppc-20220706' of https://gitlab.com/danielhb/qemu into staging (2022-07-07 06:21:05 +0530)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220707

for you to fetch changes up to c2360eaa0262a816faf8032b7762d0c73df2cc62:

  target/arm: Fix qemu-system-arm handling of LPAE block descriptors for highmem (2022-07-07 11:41:04 +0100)

----------------------------------------------------------------
target-arm queue:
 * hw/arm/virt: dt: add rng-seed property
 * Fix MTE check in sve_ldnfff1_r
 * Record tagged bit for user-only in sve_probe_page
 * Correctly implement OS Lock and OS DoubleLock
 * Implement DBGDEVID, DBGDEVID1, DBGDEVID2 registers
 * Fix qemu-system-arm handling of LPAE block descriptors for highmem

----------------------------------------------------------------
Jason A. Donenfeld (1):
      hw/arm/virt: dt: add rng-seed property

Peter Maydell (6):
      target/arm: Fix code style issues in debug helper functions
      target/arm: Move define_debug_regs() to debug_helper.c
      target/arm: Suppress debug exceptions when OS Lock set
      target/arm: Implement AArch32 DBGDEVID, DBGDEVID1, DBGDEVID2
      target/arm: Correctly implement Feat_DoubleLock
      target/arm: Fix qemu-system-arm handling of LPAE block descriptors for highmem

Richard Henderson (2):
      target/arm: Fix MTE check in sve_ldnfff1_r
      target/arm: Record tagged bit for user-only in sve_probe_page

 docs/about/deprecated.rst |   8 +
 docs/system/arm/virt.rst  |  17 +-
 include/hw/arm/virt.h     |   2 +-
 target/arm/cpregs.h       |   3 +
 target/arm/cpu.h          |  27 +++
 target/arm/internals.h    |   9 +
 hw/arm/virt.c             |  44 ++--
 target/arm/cpu64.c        |   6 +
 target/arm/cpu_tcg.c      |   6 +
 target/arm/debug_helper.c | 580 ++++++++++++++++++++++++++++++++++++++++++++++
 target/arm/helper.c       | 513 +---------------------------------------
 target/arm/ptw.c          |   2 +-
 target/arm/sve_helper.c   |   5 +-
 13 files changed, 684 insertions(+), 538 deletions(-)
Deleted patch

From: Alex Bennée <alex.bennee@linaro.org>

When we converted to using feature bits in 602f6e42cfbf we missed out
the fact (dp && arm_dc_feature(s, ARM_FEATURE_V8)) was supported for
-cpu max configurations. This caused a regression in the GCC test
suite. Fix this by setting the appropriate bits in mvfr1.FPHP to
report ARMv8-A with FP support (but not ARMv8.2-FP16).

Fixes: https://bugs.launchpad.net/qemu/+bug/1836078
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190711103737.10017-1-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
         t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
         cpu->isar.id_isar6 = t;

+        t = cpu->isar.mvfr1;
+        t = FIELD_DP32(t, MVFR1, FPHP, 2);     /* v8.0 FP support */
+        cpu->isar.mvfr1 = t;
+
         t = cpu->isar.mvfr2;
         t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
         t = FIELD_DP32(t, MVFR2, FPMISC, 4);   /* FP MaxNum */
-- 
2.20.1
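For readers unfamiliar with the FIELD_DP32() macro used in the hunk above,
the short stand-alone sketch below shows the underlying operation:
depositing a 4-bit value into a field of an ID register word. The field
position (MVFR1.FPHP at bits [23:20]) and the local deposit32() helper are
illustrative assumptions, not code taken from QEMU; 2 is the "v8.0 FP
support" encoding the patch writes, while 3 would additionally advertise
FP16.

#include <stdint.h>
#include <stdio.h>

/* Local stand-in for a FIELD_DP32()-style deposit; not QEMU's macro. */
static uint32_t deposit32(uint32_t reg, int start, int length, uint32_t val)
{
    uint32_t mask = ((1u << length) - 1) << start;
    return (reg & ~mask) | ((val << start) & mask);
}

int main(void)
{
    uint32_t mvfr1 = 0x12111111;         /* arbitrary starting value */

    /* Assume MVFR1.FPHP lives at bits [23:20]; write the v8.0 FP encoding. */
    mvfr1 = deposit32(mvfr1, 20, 4, 2);
    printf("MVFR1 = 0x%08x, FPHP = %u\n",
           (unsigned)mvfr1, (unsigned)((mvfr1 >> 20) & 0xf));
    return 0;
}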
From: "Jason A. Donenfeld" <Jason@zx2c4.com>

In 60592cfed2 ("hw/arm/virt: dt: add kaslr-seed property"), the
kaslr-seed property was added, but the equally as important rng-seed
property was forgotten about, which has identical semantics for a
similar purpose. This commit implements it in exactly the same way as
kaslr-seed. It then changes the name of the disabling option to reflect
that this has more to do with randomness vs determinism, rather than
something particular about kaslr.

Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
[PMM: added deprecated.rst section for the deprecation]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/about/deprecated.rst |  8 +++++++
 docs/system/arm/virt.rst  | 17 +++++++++------
 include/hw/arm/virt.h     |  2 +-
 hw/arm/virt.c             | 44 ++++++++++++++++++++++++---------------
 4 files changed, 47 insertions(+), 24 deletions(-)

diff --git a/docs/about/deprecated.rst b/docs/about/deprecated.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/about/deprecated.rst
+++ b/docs/about/deprecated.rst
@@ -XXX,XX +XXX,XX @@ Use the more generic event ``DEVICE_UNPLUG_GUEST_ERROR`` instead.
 System emulator machines
 ------------------------

+Arm ``virt`` machine ``dtb-kaslr-seed`` property
+''''''''''''''''''''''''''''''''''''''''''''''''
+
+The ``dtb-kaslr-seed`` property on the ``virt`` board has been
+deprecated; use the new name ``dtb-randomness`` instead. The new name
+better reflects the way this property affects all random data within
+the device tree blob, not just the ``kaslr-seed`` node.
+
 PPC 405 ``taihu`` machine (since 7.0)
 '''''''''''''''''''''''''''''''''''''

diff --git a/docs/system/arm/virt.rst b/docs/system/arm/virt.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/virt.rst
+++ b/docs/system/arm/virt.rst
@@ -XXX,XX +XXX,XX @@ ras
   Set ``on``/``off`` to enable/disable reporting host memory errors to a guest
   using ACPI and guest external abort exceptions. The default is off.

+dtb-randomness
+  Set ``on``/``off`` to pass random seeds via the guest DTB
+  rng-seed and kaslr-seed nodes (in both "/chosen" and
+  "/secure-chosen") to use for features like the random number
+  generator and address space randomisation. The default is
+  ``on``. You will want to disable it if your trusted boot chain
+  will verify the DTB it is passed, since this option causes the
+  DTB to be non-deterministic. It would be the responsibility of
+  the firmware to come up with a seed and pass it on if it wants to.
+
 dtb-kaslr-seed
-  Set ``on``/``off`` to pass a random seed via the guest dtb
-  kaslr-seed node (in both "/chosen" and /secure-chosen) to use
-  for features like address space randomisation. The default is
-  ``on``. You will want to disable it if your trusted boot chain will
-  verify the DTB it is passed. It would be the responsibility of the
-  firmware to come up with a seed and pass it on if it wants to.
+  A deprecated synonym for dtb-randomness.

 Linux guest kernel configuration
 """"""""""""""""""""""""""""""""
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/virt.h
+++ b/include/hw/arm/virt.h
@@ -XXX,XX +XXX,XX @@ struct VirtMachineState {
     bool virt;
     bool ras;
     bool mte;
-    bool dtb_kaslr_seed;
+    bool dtb_randomness;
     OnOffAuto acpi;
     VirtGICType gic_version;
     VirtIOMMUType iommu;
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static bool cpu_type_valid(const char *cpu)
     return false;
 }

-static void create_kaslr_seed(MachineState *ms, const char *node)
+static void create_randomness(MachineState *ms, const char *node)
 {
-    uint64_t seed;
+    struct {
+        uint64_t kaslr;
+        uint8_t rng[32];
+    } seed;

     if (qemu_guest_getrandom(&seed, sizeof(seed), NULL)) {
         return;
     }
-    qemu_fdt_setprop_u64(ms->fdt, node, "kaslr-seed", seed);
+    qemu_fdt_setprop_u64(ms->fdt, node, "kaslr-seed", seed.kaslr);
+    qemu_fdt_setprop(ms->fdt, node, "rng-seed", seed.rng, sizeof(seed.rng));
 }

 static void create_fdt(VirtMachineState *vms)
@@ -XXX,XX +XXX,XX @@ static void create_fdt(VirtMachineState *vms)

     /* /chosen must exist for load_dtb to fill in necessary properties later */
     qemu_fdt_add_subnode(fdt, "/chosen");
-    if (vms->dtb_kaslr_seed) {
-        create_kaslr_seed(ms, "/chosen");
+    if (vms->dtb_randomness) {
+        create_randomness(ms, "/chosen");
     }

     if (vms->secure) {
         qemu_fdt_add_subnode(fdt, "/secure-chosen");
-        if (vms->dtb_kaslr_seed) {
-            create_kaslr_seed(ms, "/secure-chosen");
+        if (vms->dtb_randomness) {
+            create_randomness(ms, "/secure-chosen");
         }
     }

@@ -XXX,XX +XXX,XX @@ static void virt_set_its(Object *obj, bool value, Error **errp)
     vms->its = value;
 }

-static bool virt_get_dtb_kaslr_seed(Object *obj, Error **errp)
+static bool virt_get_dtb_randomness(Object *obj, Error **errp)
 {
     VirtMachineState *vms = VIRT_MACHINE(obj);

-    return vms->dtb_kaslr_seed;
+    return vms->dtb_randomness;
 }

-static void virt_set_dtb_kaslr_seed(Object *obj, bool value, Error **errp)
+static void virt_set_dtb_randomness(Object *obj, bool value, Error **errp)
 {
     VirtMachineState *vms = VIRT_MACHINE(obj);

-    vms->dtb_kaslr_seed = value;
+    vms->dtb_randomness = value;
 }

 static char *virt_get_oem_id(Object *obj, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
                                           "Set on/off to enable/disable "
                                           "ITS instantiation");

+    object_class_property_add_bool(oc, "dtb-randomness",
+                                   virt_get_dtb_randomness,
+                                   virt_set_dtb_randomness);
+    object_class_property_set_description(oc, "dtb-randomness",
+                                          "Set off to disable passing random or "
+                                          "non-deterministic dtb nodes to guest");
+
     object_class_property_add_bool(oc, "dtb-kaslr-seed",
-                                   virt_get_dtb_kaslr_seed,
-                                   virt_set_dtb_kaslr_seed);
+                                   virt_get_dtb_randomness,
+                                   virt_set_dtb_randomness);
     object_class_property_set_description(oc, "dtb-kaslr-seed",
-                                          "Set off to disable passing of kaslr-seed "
-                                          "dtb node to guest");
+                                          "Deprecated synonym of dtb-randomness");

     object_class_property_add_str(oc, "x-oem-id",
                                   virt_get_oem_id,
@@ -XXX,XX +XXX,XX @@ static void virt_instance_init(Object *obj)
     /* MTE is disabled by default. */
     vms->mte = false;

-    /* Supply a kaslr-seed by default */
-    vms->dtb_kaslr_seed = true;
+    /* Supply kaslr-seed and rng-seed by default */
+    vms->dtb_randomness = true;

     vms->irqmap = a15irqmap;

-- 
2.25.1
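As a rough stand-alone illustration of the data the patch above puts into
the DTB -- an 8-byte kaslr-seed plus a 32-byte rng-seed drawn from the host
-- the sketch below fills the same struct layout as create_randomness(),
but uses the Linux getrandom(2) call in place of QEMU's
qemu_guest_getrandom(); everything outside that struct layout is an
assumption made for the example.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/random.h>
#include <sys/types.h>

int main(void)
{
    struct {
        uint64_t kaslr;   /* becomes the 8-byte "kaslr-seed" property */
        uint8_t rng[32];  /* becomes the 32-byte "rng-seed" property */
    } seed;

    /* One call fills both seeds, mirroring qemu_guest_getrandom(). */
    if (getrandom(&seed, sizeof(seed), 0) != (ssize_t)sizeof(seed)) {
        perror("getrandom");
        return 1;
    }
    printf("kaslr-seed = 0x%016" PRIx64 ", rng-seed = %zu random bytes\n",
           seed.kaslr, sizeof(seed.rng));
    return 0;
}

QEMU then attaches the two values to the chosen node with
qemu_fdt_setprop_u64() and qemu_fdt_setprop(), as the hw/arm/virt.c hunk
above shows.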
From: Richard Henderson <richard.henderson@linaro.org>

The comment was correct, but the test was not:
disable mte if tagged is *not* set.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void sve_ldnfff1_r(CPUARMState *env, void *vg, const target_ulong addr,
      * Disable MTE checking if the Tagged bit is not set. Since TBI must
      * be set within MTEDESC for MTE, !mtedesc => !mte_active.
      */
-    if (arm_tlb_mte_tagged(&info.page[0].attrs)) {
+    if (!arm_tlb_mte_tagged(&info.page[0].attrs)) {
         mtedesc = 0;
     }

-- 
2.25.1
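The one-character nature of the fix above can be seen in isolation with the
toy function below; the names are local stand-ins, not QEMU code. MTE
checking should be disabled only when the page is *not* tagged, so the
descriptor must be cleared on the negated condition.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy model of the guard fixed in sve_ldnfff1_r(); not QEMU code. */
static uint32_t effective_mtedesc(uint32_t mtedesc, bool page_is_tagged)
{
    if (!page_is_tagged) {   /* the '!' is what the patch adds */
        mtedesc = 0;         /* untagged page -> no MTE checks */
    }
    return mtedesc;
}

int main(void)
{
    printf("tagged page:   mtedesc = 0x%x\n",
           (unsigned)effective_mtedesc(0x1234, true));
    printf("untagged page: mtedesc = 0x%x\n",
           (unsigned)effective_mtedesc(0x1234, false));
    return 0;
}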
From: Richard Henderson <richard.henderson@linaro.org>

Fixes a bug in that we were not honoring MTE from user-only
SVE. Copy the user-only MTE logic from allocation_tag_mem
into sve_probe_page.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,

 #ifdef CONFIG_USER_ONLY
     memset(&info->attrs, 0, sizeof(info->attrs));
+    /* Require both MAP_ANON and PROT_MTE -- see allocation_tag_mem. */
+    arm_tlb_mte_tagged(&info->attrs) =
+        (flags & PAGE_ANON) && (flags & PAGE_MTE);
 #else
     /*
      * Find the iotlbentry for addr and return the transaction attributes.
-- 
2.25.1
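The "both MAP_ANON and PROT_MTE" requirement above mirrors how a user-space
process obtains tagged memory in the first place. As a hedged illustration
(an AArch64 Linux host with MTE is assumed; PROT_MTE comes from the arm64
kernel headers and none of this is QEMU code), a guest-side test program
would map a tagged anonymous page like this:

#include <stdio.h>
#include <sys/mman.h>

#ifndef PROT_MTE
#define PROT_MTE 0x20   /* arm64 asm/mman.h value; assumption for the example */
#endif

int main(void)
{
    /*
     * Anonymous mapping with tag checking enabled -- the combination the
     * sve_probe_page() change looks for on the QEMU side.
     */
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(PROT_MTE)");  /* e.g. kernel or CPU without MTE */
        return 1;
    }
    printf("tagged anonymous page mapped at %p\n", p);
    munmap(p, 4096);
    return 0;
}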
Before moving debug system register helper functions to a
different file, fix the code style issues (mostly block
comment syntax) so checkpatch doesn't complain about the
code-motion patch.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220630194116.3438513-2-peter.maydell@linaro.org
---
 target/arm/helper.c | 58 +++++++++++++++++++++++++++++----------------
 1 file changed, 38 insertions(+), 20 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_mdcr_el2_eff(CPUARMState *env)
     return arm_is_el2_enabled(env) ? env->cp15.mdcr_el2 : 0;
 }

-/* Check for traps to "powerdown debug" registers, which are controlled
+/*
+ * Check for traps to "powerdown debug" registers, which are controlled
  * by MDCR.TDOSA
  */
 static CPAccessResult access_tdosa(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdosa(CPUARMState *env, const ARMCPRegInfo *ri,
     return CP_ACCESS_OK;
 }

-/* Check for traps to "debug ROM" registers, which are controlled
+/*
+ * Check for traps to "debug ROM" registers, which are controlled
  * by MDCR_EL2.TDRA for EL2 but by the more general MDCR_EL3.TDA for EL3.
  */
 static CPAccessResult access_tdra(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdra(CPUARMState *env, const ARMCPRegInfo *ri,
     return CP_ACCESS_OK;
 }

-/* Check for traps to general debug registers, which are controlled
+/*
+ * Check for traps to general debug registers, which are controlled
  * by MDCR_EL2.TDA for EL2 and MDCR_EL3.TDA for EL3.
  */
 static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static CPAccessResult ctr_el0_access(CPUARMState *env, const ARMCPRegInfo *ri,
 static void oslar_write(CPUARMState *env, const ARMCPRegInfo *ri,
                         uint64_t value)
 {
-    /* Writes to OSLAR_EL1 may update the OS lock status, which can be
+    /*
+     * Writes to OSLAR_EL1 may update the OS lock status, which can be
      * read via a bit in OSLSR_EL1.
      */
     int oslock;
@@ -XXX,XX +XXX,XX @@ static void oslar_write(CPUARMState *env, const ARMCPRegInfo *ri,
 }

 static const ARMCPRegInfo debug_cp_reginfo[] = {
-    /* DBGDRAR, DBGDSAR: always RAZ since we don't implement memory mapped
+    /*
+     * DBGDRAR, DBGDSAR: always RAZ since we don't implement memory mapped
      * debug components. The AArch64 version of DBGDRAR is named MDRAR_EL1;
      * unlike DBGDRAR it is never accessible from EL0.
      * DBGDSAR is deprecated and must RAZ from v8 anyway, so it has no AArch64
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
       .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 3, .opc2 = 4,
       .access = PL1_RW, .accessfn = access_tdosa,
       .type = ARM_CP_NOP },
-    /* Dummy DBGVCR: Linux wants to clear this on startup, but we don't
+    /*
+     * Dummy DBGVCR: Linux wants to clear this on startup, but we don't
      * implement vector catch debug events yet.
      */
     { .name = "DBGVCR",
       .cp = 14, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 0,
       .access = PL1_RW, .accessfn = access_tda,
       .type = ARM_CP_NOP },
-    /* Dummy DBGVCR32_EL2 (which is only for a 64-bit hypervisor
+    /*
+     * Dummy DBGVCR32_EL2 (which is only for a 64-bit hypervisor
      * to save and restore a 32-bit guest's DBGVCR)
      */
     { .name = "DBGVCR32_EL2", .state = ARM_CP_STATE_AA64,
       .opc0 = 2, .opc1 = 4, .crn = 0, .crm = 7, .opc2 = 0,
       .access = PL2_RW, .accessfn = access_tda,
       .type = ARM_CP_NOP | ARM_CP_EL3_NO_EL2_KEEP },
-    /* Dummy MDCCINT_EL1, since we don't implement the Debug Communications
+    /*
+     * Dummy MDCCINT_EL1, since we don't implement the Debug Communications
      * Channel but Linux may try to access this register. The 32-bit
      * alias is DBGDCCINT.
      */
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
 static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
     /* 64 bit access versions of the (dummy) debug registers */
     { .name = "DBGDRAR", .cp = 14, .crm = 1, .opc1 = 0,
-      .access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 },
+      .access = PL0_R, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
     { .name = "DBGDSAR", .cp = 14, .crm = 2, .opc1 = 0,
-      .access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 },
+      .access = PL0_R, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
 };

 /*
@@ -XXX,XX +XXX,XX @@ void hw_watchpoint_update(ARMCPU *cpu, int n)
         break;
     }

-    /* Attempts to use both MASK and BAS fields simultaneously are
+    /*
+     * Attempts to use both MASK and BAS fields simultaneously are
      * CONSTRAINED UNPREDICTABLE; we opt to ignore BAS in this case,
      * thus generating a watchpoint for every byte in the masked region.
      */
     mask = FIELD_EX64(wcr, DBGWCR, MASK);
     if (mask == 1 || mask == 2) {
-        /* Reserved values of MASK; we must act as if the mask value was
+        /*
+         * Reserved values of MASK; we must act as if the mask value was
          * some non-reserved value, or as if the watchpoint were disabled.
          * We choose the latter.
          */
@@ -XXX,XX +XXX,XX @@ void hw_watchpoint_update(ARMCPU *cpu, int n)
     } else if (mask) {
         /* Watchpoint covers an aligned area up to 2GB in size */
         len = 1ULL << mask;
-        /* If masked bits in WVR are not zero it's CONSTRAINED UNPREDICTABLE
+        /*
+         * If masked bits in WVR are not zero it's CONSTRAINED UNPREDICTABLE
          * whether the watchpoint fires when the unmasked bits match; we opt
          * to generate the exceptions.
          */
@@ -XXX,XX +XXX,XX @@ void hw_watchpoint_update(ARMCPU *cpu, int n)
         int basstart;

         if (extract64(wvr, 2, 1)) {
-            /* Deprecated case of an only 4-aligned address. BAS[7:4] are
+            /*
+             * Deprecated case of an only 4-aligned address. BAS[7:4] are
              * ignored, and BAS[3:0] define which bytes to watch.
              */
             bas &= 0xf;
@@ -XXX,XX +XXX,XX @@ void hw_watchpoint_update(ARMCPU *cpu, int n)
             return;
         }

-        /* The BAS bits are supposed to be programmed to indicate a contiguous
+        /*
+         * The BAS bits are supposed to be programmed to indicate a contiguous
          * range of bytes. Otherwise it is CONSTRAINED UNPREDICTABLE whether
          * we fire for each byte in the word/doubleword addressed by the WVR.
          * We choose to ignore any non-zero bits after the first range of 1s.
@@ -XXX,XX +XXX,XX @@ void hw_watchpoint_update_all(ARMCPU *cpu)
     int i;
     CPUARMState *env = &cpu->env;

-    /* Completely clear out existing QEMU watchpoints and our array, to
+    /*
+     * Completely clear out existing QEMU watchpoints and our array, to
      * avoid possible stale entries following migration load.
      */
     cpu_watchpoint_remove_all(CPU(cpu), BP_CPU);
@@ -XXX,XX +XXX,XX @@ void hw_breakpoint_update(ARMCPU *cpu, int n)
     case 11: /* linked context ID and VMID match (reserved if no EL2) */
     case 3: /* linked context ID match */
     default:
-        /* We must generate no events for Linked context matches (unless
+        /*
+         * We must generate no events for Linked context matches (unless
          * they are linked to by some other bp/wp, which is handled in
          * updates for the linking bp/wp). We choose to also generate no events
          * for reserved values.
@@ -XXX,XX +XXX,XX @@ void hw_breakpoint_update_all(ARMCPU *cpu)
     int i;
     CPUARMState *env = &cpu->env;

-    /* Completely clear out existing QEMU breakpoints and our array, to
+    /*
+     * Completely clear out existing QEMU breakpoints and our array, to
      * avoid possible stale entries following migration load.
      */
     cpu_breakpoint_remove_all(CPU(cpu), BP_CPU);
@@ -XXX,XX +XXX,XX @@ static void dbgbcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
     ARMCPU *cpu = env_archcpu(env);
     int i = ri->crm;

-    /* BAS[3] is a read-only copy of BAS[2], and BAS[1] a read-only
+    /*
+     * BAS[3] is a read-only copy of BAS[2], and BAS[1] a read-only
      * copy of BAS[0].
      */
     value = deposit64(value, 6, 1, extract64(value, 5, 1));
@@ -XXX,XX +XXX,XX @@ static void dbgbcr_write(CPUARMState *env, const ARMCPRegInfo *ri,

 static void define_debug_regs(ARMCPU *cpu)
 {
-    /* Define v7 and v8 architectural debug registers.
+    /*
+     * Define v7 and v8 architectural debug registers.
      * These are just dummy implementations for now.
      */
     int i;
-- 
2.25.1
The target/arm/helper.c file is very long and is a grabbag of all
kinds of functionality. We have already a debug_helper.c which has
code for implementing architectural debug. Move the code which
defines the debug-related system registers out to this file also.
This affects the define_debug_regs() function and the various
functions and arrays which are used only by it.

The functions raw_write() and arm_mdcr_el2_eff() and
define_debug_regs() now need to be global rather than local to
helper.c; everything else is pure code movement.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220630194116.3438513-3-peter.maydell@linaro.org
---
 target/arm/cpregs.h       |   3 +
 target/arm/internals.h    |   9 +
 target/arm/debug_helper.c | 525 +++++++++++++++++++++++++++++++++++++
 target/arm/helper.c       | 531 +-------------------------------------
 4 files changed, 538 insertions(+), 530 deletions(-)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
16
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/ssi/xilinx_spips.c
24
--- a/target/arm/cpregs.h
18
+++ b/hw/ssi/xilinx_spips.c
25
+++ b/target/arm/cpregs.h
19
@@ -XXX,XX +XXX,XX @@ static void lqspi_load_cache(void *opaque, hwaddr addr)
26
@@ -XXX,XX +XXX,XX @@ void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri,
27
/* CPReadFn that can be used for read-as-zero behaviour */
28
uint64_t arm_cp_read_zero(CPUARMState *env, const ARMCPRegInfo *ri);
29
30
+/* CPWriteFn that just writes the value to ri->fieldoffset */
31
+void raw_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value);
32
+
33
/*
34
* CPResetFn that does nothing, for use if no reset is required even
35
* if fieldoffset is non zero.
36
diff --git a/target/arm/internals.h b/target/arm/internals.h
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/internals.h
39
+++ b/target/arm/internals.h
40
@@ -XXX,XX +XXX,XX @@ int exception_target_el(CPUARMState *env);
41
bool arm_singlestep_active(CPUARMState *env);
42
bool arm_generate_debug_exceptions(CPUARMState *env);
43
44
+/* Add the cpreg definitions for debug related system registers */
45
+void define_debug_regs(ARMCPU *cpu);
46
+
47
+/* Effective value of MDCR_EL2 */
48
+static inline uint64_t arm_mdcr_el2_eff(CPUARMState *env)
49
+{
50
+ return arm_is_el2_enabled(env) ? env->cp15.mdcr_el2 : 0;
51
+}
52
+
53
/* Powers of 2 for sve_vq_map et al. */
54
#define SVE_VQ_POW2_MAP \
55
((1 << (1 - 1)) | (1 << (2 - 1)) | \
56
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/target/arm/debug_helper.c
59
+++ b/target/arm/debug_helper.c
60
@@ -XXX,XX +XXX,XX @@
61
* SPDX-License-Identifier: GPL-2.0-or-later
62
*/
63
#include "qemu/osdep.h"
64
+#include "qemu/log.h"
65
#include "cpu.h"
66
#include "internals.h"
67
+#include "cpregs.h"
68
#include "exec/exec-all.h"
69
#include "exec/helper-proto.h"
70
71
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_swstep)(CPUARMState *env, uint32_t syndrome)
72
raise_exception_debug(env, EXCP_UDEF, syndrome);
73
}
74
75
+/*
76
+ * Check for traps to "powerdown debug" registers, which are controlled
77
+ * by MDCR.TDOSA
78
+ */
79
+static CPAccessResult access_tdosa(CPUARMState *env, const ARMCPRegInfo *ri,
80
+ bool isread)
81
+{
82
+ int el = arm_current_el(env);
83
+ uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
84
+ bool mdcr_el2_tdosa = (mdcr_el2 & MDCR_TDOSA) || (mdcr_el2 & MDCR_TDE) ||
85
+ (arm_hcr_el2_eff(env) & HCR_TGE);
86
+
87
+ if (el < 2 && mdcr_el2_tdosa) {
88
+ return CP_ACCESS_TRAP_EL2;
89
+ }
90
+ if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDOSA)) {
91
+ return CP_ACCESS_TRAP_EL3;
92
+ }
93
+ return CP_ACCESS_OK;
94
+}
95
+
96
+/*
97
+ * Check for traps to "debug ROM" registers, which are controlled
98
+ * by MDCR_EL2.TDRA for EL2 but by the more general MDCR_EL3.TDA for EL3.
99
+ */
100
+static CPAccessResult access_tdra(CPUARMState *env, const ARMCPRegInfo *ri,
101
+ bool isread)
102
+{
103
+ int el = arm_current_el(env);
104
+ uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
105
+ bool mdcr_el2_tdra = (mdcr_el2 & MDCR_TDRA) || (mdcr_el2 & MDCR_TDE) ||
106
+ (arm_hcr_el2_eff(env) & HCR_TGE);
107
+
108
+ if (el < 2 && mdcr_el2_tdra) {
109
+ return CP_ACCESS_TRAP_EL2;
110
+ }
111
+ if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
112
+ return CP_ACCESS_TRAP_EL3;
113
+ }
114
+ return CP_ACCESS_OK;
115
+}
116
+
117
+/*
118
+ * Check for traps to general debug registers, which are controlled
119
+ * by MDCR_EL2.TDA for EL2 and MDCR_EL3.TDA for EL3.
120
+ */
121
+static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri,
122
+ bool isread)
123
+{
124
+ int el = arm_current_el(env);
125
+ uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
126
+ bool mdcr_el2_tda = (mdcr_el2 & MDCR_TDA) || (mdcr_el2 & MDCR_TDE) ||
127
+ (arm_hcr_el2_eff(env) & HCR_TGE);
128
+
129
+ if (el < 2 && mdcr_el2_tda) {
130
+ return CP_ACCESS_TRAP_EL2;
131
+ }
132
+ if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
133
+ return CP_ACCESS_TRAP_EL3;
134
+ }
135
+ return CP_ACCESS_OK;
136
+}
137
+
138
+static void oslar_write(CPUARMState *env, const ARMCPRegInfo *ri,
139
+ uint64_t value)
140
+{
141
+ /*
142
+ * Writes to OSLAR_EL1 may update the OS lock status, which can be
143
+ * read via a bit in OSLSR_EL1.
144
+ */
145
+ int oslock;
146
+
147
+ if (ri->state == ARM_CP_STATE_AA32) {
148
+ oslock = (value == 0xC5ACCE55);
149
+ } else {
150
+ oslock = value & 1;
151
+ }
152
+
153
+ env->cp15.oslsr_el1 = deposit32(env->cp15.oslsr_el1, 1, 1, oslock);
154
+}
155
+
156
+static const ARMCPRegInfo debug_cp_reginfo[] = {
157
+ /*
158
+ * DBGDRAR, DBGDSAR: always RAZ since we don't implement memory mapped
159
+ * debug components. The AArch64 version of DBGDRAR is named MDRAR_EL1;
160
+ * unlike DBGDRAR it is never accessible from EL0.
161
+ * DBGDSAR is deprecated and must RAZ from v8 anyway, so it has no AArch64
162
+ * accessor.
163
+ */
164
+ { .name = "DBGDRAR", .cp = 14, .crn = 1, .crm = 0, .opc1 = 0, .opc2 = 0,
165
+ .access = PL0_R, .accessfn = access_tdra,
166
+ .type = ARM_CP_CONST, .resetvalue = 0 },
167
+ { .name = "MDRAR_EL1", .state = ARM_CP_STATE_AA64,
168
+ .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 0,
169
+ .access = PL1_R, .accessfn = access_tdra,
170
+ .type = ARM_CP_CONST, .resetvalue = 0 },
171
+ { .name = "DBGDSAR", .cp = 14, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 0,
172
+ .access = PL0_R, .accessfn = access_tdra,
173
+ .type = ARM_CP_CONST, .resetvalue = 0 },
174
+ /* Monitor debug system control register; the 32-bit alias is DBGDSCRext. */
175
+ { .name = "MDSCR_EL1", .state = ARM_CP_STATE_BOTH,
176
+ .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2,
177
+ .access = PL1_RW, .accessfn = access_tda,
178
+ .fieldoffset = offsetof(CPUARMState, cp15.mdscr_el1),
179
+ .resetvalue = 0 },
180
+ /*
181
+ * MDCCSR_EL0[30:29] map to EDSCR[30:29]. Simply RAZ as the external
182
+ * Debug Communication Channel is not implemented.
183
+ */
184
+ { .name = "MDCCSR_EL0", .state = ARM_CP_STATE_AA64,
185
+ .opc0 = 2, .opc1 = 3, .crn = 0, .crm = 1, .opc2 = 0,
186
+ .access = PL0_R, .accessfn = access_tda,
187
+ .type = ARM_CP_CONST, .resetvalue = 0 },
188
+ /*
189
+ * DBGDSCRint[15,12,5:2] map to MDSCR_EL1[15,12,5:2]. Map all bits as
190
+ * it is unlikely a guest will care.
191
+ * We don't implement the configurable EL0 access.
192
+ */
193
+ { .name = "DBGDSCRint", .state = ARM_CP_STATE_AA32,
194
+ .cp = 14, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 0,
195
+ .type = ARM_CP_ALIAS,
196
+ .access = PL1_R, .accessfn = access_tda,
197
+ .fieldoffset = offsetof(CPUARMState, cp15.mdscr_el1), },
198
+ { .name = "OSLAR_EL1", .state = ARM_CP_STATE_BOTH,
199
+ .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 4,
200
+ .access = PL1_W, .type = ARM_CP_NO_RAW,
201
+ .accessfn = access_tdosa,
202
+ .writefn = oslar_write },
203
+ { .name = "OSLSR_EL1", .state = ARM_CP_STATE_BOTH,
204
+ .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 4,
205
+ .access = PL1_R, .resetvalue = 10,
206
+ .accessfn = access_tdosa,
207
+ .fieldoffset = offsetof(CPUARMState, cp15.oslsr_el1) },
208
+ /* Dummy OSDLR_EL1: 32-bit Linux will read this */
209
+ { .name = "OSDLR_EL1", .state = ARM_CP_STATE_BOTH,
210
+ .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 3, .opc2 = 4,
211
+ .access = PL1_RW, .accessfn = access_tdosa,
212
+ .type = ARM_CP_NOP },
213
+ /*
214
+ * Dummy DBGVCR: Linux wants to clear this on startup, but we don't
215
+ * implement vector catch debug events yet.
216
+ */
217
+ { .name = "DBGVCR",
218
+ .cp = 14, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 0,
219
+ .access = PL1_RW, .accessfn = access_tda,
220
+ .type = ARM_CP_NOP },
221
+ /*
222
+ * Dummy DBGVCR32_EL2 (which is only for a 64-bit hypervisor
223
+ * to save and restore a 32-bit guest's DBGVCR)
224
+ */
225
+ { .name = "DBGVCR32_EL2", .state = ARM_CP_STATE_AA64,
226
+ .opc0 = 2, .opc1 = 4, .crn = 0, .crm = 7, .opc2 = 0,
227
+ .access = PL2_RW, .accessfn = access_tda,
228
+ .type = ARM_CP_NOP | ARM_CP_EL3_NO_EL2_KEEP },
229
+ /*
230
+ * Dummy MDCCINT_EL1, since we don't implement the Debug Communications
231
+ * Channel but Linux may try to access this register. The 32-bit
232
+ * alias is DBGDCCINT.
233
+ */
234
+ { .name = "MDCCINT_EL1", .state = ARM_CP_STATE_BOTH,
235
+ .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
236
+ .access = PL1_RW, .accessfn = access_tda,
237
+ .type = ARM_CP_NOP },
238
+};
239
+
240
+static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
241
+ /* 64 bit access versions of the (dummy) debug registers */
242
+ { .name = "DBGDRAR", .cp = 14, .crm = 1, .opc1 = 0,
243
+ .access = PL0_R, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
244
+ { .name = "DBGDSAR", .cp = 14, .crm = 2, .opc1 = 0,
245
+ .access = PL0_R, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
246
+};
247
+
248
+void hw_watchpoint_update(ARMCPU *cpu, int n)
249
+{
250
+ CPUARMState *env = &cpu->env;
251
+ vaddr len = 0;
252
+ vaddr wvr = env->cp15.dbgwvr[n];
253
+ uint64_t wcr = env->cp15.dbgwcr[n];
254
+ int mask;
255
+ int flags = BP_CPU | BP_STOP_BEFORE_ACCESS;
256
+
257
+ if (env->cpu_watchpoint[n]) {
258
+ cpu_watchpoint_remove_by_ref(CPU(cpu), env->cpu_watchpoint[n]);
259
+ env->cpu_watchpoint[n] = NULL;
260
+ }
261
+
262
+ if (!FIELD_EX64(wcr, DBGWCR, E)) {
263
+ /* E bit clear : watchpoint disabled */
264
+ return;
265
+ }
266
+
267
+ switch (FIELD_EX64(wcr, DBGWCR, LSC)) {
268
+ case 0:
269
+ /* LSC 00 is reserved and must behave as if the wp is disabled */
270
+ return;
271
+ case 1:
272
+ flags |= BP_MEM_READ;
273
+ break;
274
+ case 2:
275
+ flags |= BP_MEM_WRITE;
276
+ break;
277
+ case 3:
278
+ flags |= BP_MEM_ACCESS;
279
+ break;
280
+ }
281
+
282
+ /*
283
+ * Attempts to use both MASK and BAS fields simultaneously are
284
+ * CONSTRAINED UNPREDICTABLE; we opt to ignore BAS in this case,
285
+ * thus generating a watchpoint for every byte in the masked region.
286
+ */
287
+ mask = FIELD_EX64(wcr, DBGWCR, MASK);
288
+ if (mask == 1 || mask == 2) {
289
+ /*
290
+ * Reserved values of MASK; we must act as if the mask value was
291
+ * some non-reserved value, or as if the watchpoint were disabled.
292
+ * We choose the latter.
293
+ */
294
+ return;
295
+ } else if (mask) {
296
+ /* Watchpoint covers an aligned area up to 2GB in size */
297
+ len = 1ULL << mask;
298
+ /*
299
+ * If masked bits in WVR are not zero it's CONSTRAINED UNPREDICTABLE
300
+ * whether the watchpoint fires when the unmasked bits match; we opt
301
+ * to generate the exceptions.
302
+ */
303
+ wvr &= ~(len - 1);
304
+ } else {
305
+ /* Watchpoint covers bytes defined by the byte address select bits */
306
+ int bas = FIELD_EX64(wcr, DBGWCR, BAS);
307
+ int basstart;
308
+
309
+ if (extract64(wvr, 2, 1)) {
310
+ /*
311
+ * Deprecated case of an only 4-aligned address. BAS[7:4] are
312
+ * ignored, and BAS[3:0] define which bytes to watch.
313
+ */
314
+ bas &= 0xf;
315
+ }
316
+
317
+ if (bas == 0) {
318
+ /* This must act as if the watchpoint is disabled */
319
+ return;
320
+ }
321
+
322
+ /*
323
+ * The BAS bits are supposed to be programmed to indicate a contiguous
324
+ * range of bytes. Otherwise it is CONSTRAINED UNPREDICTABLE whether
325
+ * we fire for each byte in the word/doubleword addressed by the WVR.
326
+ * We choose to ignore any non-zero bits after the first range of 1s.
327
+ */
328
+ basstart = ctz32(bas);
329
+ len = cto32(bas >> basstart);
330
+ wvr += basstart;
331
+ }
332
+
333
+ cpu_watchpoint_insert(CPU(cpu), wvr, len, flags,
334
+ &env->cpu_watchpoint[n]);
335
+}
336
+
337
+void hw_watchpoint_update_all(ARMCPU *cpu)
338
+{
339
+ int i;
340
+ CPUARMState *env = &cpu->env;
341
+
342
+ /*
343
+ * Completely clear out existing QEMU watchpoints and our array, to
344
+ * avoid possible stale entries following migration load.
345
+ */
346
+ cpu_watchpoint_remove_all(CPU(cpu), BP_CPU);
347
+ memset(env->cpu_watchpoint, 0, sizeof(env->cpu_watchpoint));
348
+
349
+ for (i = 0; i < ARRAY_SIZE(cpu->env.cpu_watchpoint); i++) {
350
+ hw_watchpoint_update(cpu, i);
351
+ }
352
+}
353
+
354
+static void dbgwvr_write(CPUARMState *env, const ARMCPRegInfo *ri,
355
+ uint64_t value)
356
+{
357
+ ARMCPU *cpu = env_archcpu(env);
358
+ int i = ri->crm;
359
+
360
+ /*
361
+ * Bits [1:0] are RES0.
362
+ *
363
+ * It is IMPLEMENTATION DEFINED whether [63:49] ([63:53] with FEAT_LVA)
364
+ * are hardwired to the value of bit [48] ([52] with FEAT_LVA), or if
365
+ * they contain the value written. It is CONSTRAINED UNPREDICTABLE
366
+ * whether the RESS bits are ignored when comparing an address.
367
+ *
368
+ * Therefore we are allowed to compare the entire register, which lets
369
+ * us avoid considering whether or not FEAT_LVA is actually enabled.
370
+ */
371
+ value &= ~3ULL;
372
+
373
+ raw_write(env, ri, value);
374
+ hw_watchpoint_update(cpu, i);
375
+}
376
+
377
+static void dbgwcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
378
+ uint64_t value)
379
+{
380
+ ARMCPU *cpu = env_archcpu(env);
381
+ int i = ri->crm;
382
+
383
+ raw_write(env, ri, value);
384
+ hw_watchpoint_update(cpu, i);
385
+}
386
+
387
+void hw_breakpoint_update(ARMCPU *cpu, int n)
388
+{
389
+ CPUARMState *env = &cpu->env;
390
+ uint64_t bvr = env->cp15.dbgbvr[n];
391
+ uint64_t bcr = env->cp15.dbgbcr[n];
392
+ vaddr addr;
393
+ int bt;
394
+ int flags = BP_CPU;
395
+
396
+ if (env->cpu_breakpoint[n]) {
397
+ cpu_breakpoint_remove_by_ref(CPU(cpu), env->cpu_breakpoint[n]);
398
+ env->cpu_breakpoint[n] = NULL;
399
+ }
400
+
401
+ if (!extract64(bcr, 0, 1)) {
402
+ /* E bit clear : watchpoint disabled */
403
+ return;
404
+ }
405
+
406
+ bt = extract64(bcr, 20, 4);
407
+
408
+ switch (bt) {
409
+ case 4: /* unlinked address mismatch (reserved if AArch64) */
410
+ case 5: /* linked address mismatch (reserved if AArch64) */
411
+ qemu_log_mask(LOG_UNIMP,
412
+ "arm: address mismatch breakpoint types not implemented\n");
413
+ return;
414
+ case 0: /* unlinked address match */
415
+ case 1: /* linked address match */
416
+ {
417
+ /*
418
+ * Bits [1:0] are RES0.
419
+ *
420
+ * It is IMPLEMENTATION DEFINED whether bits [63:49]
421
+ * ([63:53] for FEAT_LVA) are hardwired to a copy of the sign bit
422
+ * of the VA field ([48] or [52] for FEAT_LVA), or whether the
423
+ * value is read as written. It is CONSTRAINED UNPREDICTABLE
424
+ * whether the RESS bits are ignored when comparing an address.
425
+ * Therefore we are allowed to compare the entire register, which
426
+ * lets us avoid considering whether FEAT_LVA is actually enabled.
427
+ *
428
+ * The BAS field is used to allow setting breakpoints on 16-bit
429
+ * wide instructions; it is CONSTRAINED UNPREDICTABLE whether
430
+ * a bp will fire if the addresses covered by the bp and the addresses
431
+ * covered by the insn overlap but the insn doesn't start at the
432
+ * start of the bp address range. We choose to require the insn and
433
+ * the bp to have the same address. The constraints on writing to
434
+ * BAS enforced in dbgbcr_write mean we have only four cases:
435
+ * 0b0000 => no breakpoint
436
+ * 0b0011 => breakpoint on addr
437
+ * 0b1100 => breakpoint on addr + 2
438
+ * 0b1111 => breakpoint on addr
439
+ * See also figure D2-3 in the v8 ARM ARM (DDI0487A.c).
440
+ */
441
+ int bas = extract64(bcr, 5, 4);
442
+ addr = bvr & ~3ULL;
443
+ if (bas == 0) {
444
+ return;
445
+ }
446
+ if (bas == 0xc) {
447
+ addr += 2;
448
+ }
449
+ break;
450
+ }
451
+ case 2: /* unlinked context ID match */
452
+ case 8: /* unlinked VMID match (reserved if no EL2) */
453
+ case 10: /* unlinked context ID and VMID match (reserved if no EL2) */
454
+ qemu_log_mask(LOG_UNIMP,
455
+ "arm: unlinked context breakpoint types not implemented\n");
456
+ return;
457
+ case 9: /* linked VMID match (reserved if no EL2) */
458
+ case 11: /* linked context ID and VMID match (reserved if no EL2) */
459
+ case 3: /* linked context ID match */
460
+ default:
461
+ /*
462
+ * We must generate no events for Linked context matches (unless
463
+ * they are linked to by some other bp/wp, which is handled in
464
+ * updates for the linking bp/wp). We choose to also generate no events
465
+ * for reserved values.
466
+ */
467
+ return;
468
+ }
469
+
470
+ cpu_breakpoint_insert(CPU(cpu), addr, flags, &env->cpu_breakpoint[n]);
471
+}
472
+
473
+void hw_breakpoint_update_all(ARMCPU *cpu)
474
+{
475
+ int i;
476
+ CPUARMState *env = &cpu->env;
477
+
478
+ /*
479
+ * Completely clear out existing QEMU breakpoints and our array, to
480
+ * avoid possible stale entries following migration load.
481
+ */
482
+ cpu_breakpoint_remove_all(CPU(cpu), BP_CPU);
483
+ memset(env->cpu_breakpoint, 0, sizeof(env->cpu_breakpoint));
484
+
485
+ for (i = 0; i < ARRAY_SIZE(cpu->env.cpu_breakpoint); i++) {
486
+ hw_breakpoint_update(cpu, i);
487
+ }
488
+}
489
+
490
+static void dbgbvr_write(CPUARMState *env, const ARMCPRegInfo *ri,
491
+ uint64_t value)
492
+{
493
+ ARMCPU *cpu = env_archcpu(env);
494
+ int i = ri->crm;
495
+
496
+ raw_write(env, ri, value);
497
+ hw_breakpoint_update(cpu, i);
498
+}
499
+
500
+static void dbgbcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
501
+ uint64_t value)
502
+{
503
+ ARMCPU *cpu = env_archcpu(env);
504
+ int i = ri->crm;
505
+
506
+ /*
507
+ * BAS[3] is a read-only copy of BAS[2], and BAS[1] a read-only
508
+ * copy of BAS[0].
509
+ */
510
+ value = deposit64(value, 6, 1, extract64(value, 5, 1));
511
+ value = deposit64(value, 8, 1, extract64(value, 7, 1));
512
+
513
+ raw_write(env, ri, value);
514
+ hw_breakpoint_update(cpu, i);
515
+}
516
+
517
+void define_debug_regs(ARMCPU *cpu)
518
+{
519
+ /*
520
+ * Define v7 and v8 architectural debug registers.
521
+ * These are just dummy implementations for now.
522
+ */
523
+ int i;
524
+ int wrps, brps, ctx_cmps;
525
+
526
+ /*
527
+ * The Arm ARM says DBGDIDR is optional and deprecated if EL1 cannot
528
+ * use AArch32. Given that bit 15 is RES1, if the value is 0 then
529
+ * the register must not exist for this cpu.
530
+ */
531
+ if (cpu->isar.dbgdidr != 0) {
532
+ ARMCPRegInfo dbgdidr = {
533
+ .name = "DBGDIDR", .cp = 14, .crn = 0, .crm = 0,
534
+ .opc1 = 0, .opc2 = 0,
535
+ .access = PL0_R, .accessfn = access_tda,
536
+ .type = ARM_CP_CONST, .resetvalue = cpu->isar.dbgdidr,
537
+ };
538
+ define_one_arm_cp_reg(cpu, &dbgdidr);
539
+ }
540
+
541
+ brps = arm_num_brps(cpu);
542
+ wrps = arm_num_wrps(cpu);
543
+ ctx_cmps = arm_num_ctx_cmps(cpu);
544
+
545
+ assert(ctx_cmps <= brps);
546
+
547
+ define_arm_cp_regs(cpu, debug_cp_reginfo);
548
+
549
+ if (arm_feature(&cpu->env, ARM_FEATURE_LPAE)) {
550
+ define_arm_cp_regs(cpu, debug_lpae_cp_reginfo);
551
+ }
552
+
553
+ for (i = 0; i < brps; i++) {
554
+ char *dbgbvr_el1_name = g_strdup_printf("DBGBVR%d_EL1", i);
555
+ char *dbgbcr_el1_name = g_strdup_printf("DBGBCR%d_EL1", i);
556
+ ARMCPRegInfo dbgregs[] = {
557
+ { .name = dbgbvr_el1_name, .state = ARM_CP_STATE_BOTH,
558
+ .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 4,
559
+ .access = PL1_RW, .accessfn = access_tda,
560
+ .fieldoffset = offsetof(CPUARMState, cp15.dbgbvr[i]),
561
+ .writefn = dbgbvr_write, .raw_writefn = raw_write
562
+ },
563
+ { .name = dbgbcr_el1_name, .state = ARM_CP_STATE_BOTH,
564
+ .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 5,
565
+ .access = PL1_RW, .accessfn = access_tda,
566
+ .fieldoffset = offsetof(CPUARMState, cp15.dbgbcr[i]),
567
+ .writefn = dbgbcr_write, .raw_writefn = raw_write
568
+ },
569
+ };
570
+ define_arm_cp_regs(cpu, dbgregs);
571
+ g_free(dbgbvr_el1_name);
572
+ g_free(dbgbcr_el1_name);
573
+ }
574
+
575
+ for (i = 0; i < wrps; i++) {
576
+ char *dbgwvr_el1_name = g_strdup_printf("DBGWVR%d_EL1", i);
577
+ char *dbgwcr_el1_name = g_strdup_printf("DBGWCR%d_EL1", i);
578
+ ARMCPRegInfo dbgregs[] = {
579
+ { .name = dbgwvr_el1_name, .state = ARM_CP_STATE_BOTH,
580
+ .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 6,
581
+ .access = PL1_RW, .accessfn = access_tda,
582
+ .fieldoffset = offsetof(CPUARMState, cp15.dbgwvr[i]),
583
+ .writefn = dbgwvr_write, .raw_writefn = raw_write
584
+ },
585
+ { .name = dbgwcr_el1_name, .state = ARM_CP_STATE_BOTH,
586
+ .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 7,
587
+ .access = PL1_RW, .accessfn = access_tda,
588
+ .fieldoffset = offsetof(CPUARMState, cp15.dbgwcr[i]),
589
+ .writefn = dbgwcr_write, .raw_writefn = raw_write
590
+ },
591
+ };
592
+ define_arm_cp_regs(cpu, dbgregs);
593
+ g_free(dbgwvr_el1_name);
594
+ g_free(dbgwcr_el1_name);
595
+ }
596
+}
597
+
598
#if !defined(CONFIG_USER_ONLY)
599
600
vaddr arm_adjust_watchpoint_address(CPUState *cs, vaddr addr, int len)
601
diff --git a/target/arm/helper.c b/target/arm/helper.c
602
index XXXXXXX..XXXXXXX 100644
603
--- a/target/arm/helper.c
604
+++ b/target/arm/helper.c
605
@@ -XXX,XX +XXX,XX @@ static uint64_t raw_read(CPUARMState *env, const ARMCPRegInfo *ri)
20
}
606
}
21
}
607
}
22
608
23
-static uint64_t
609
-static void raw_write(CPUARMState *env, const ARMCPRegInfo *ri,
24
-lqspi_read(void *opaque, hwaddr addr, unsigned int size)
610
- uint64_t value)
25
+static MemTxResult lqspi_read(void *opaque, hwaddr addr, uint64_t *value,
611
+void raw_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
26
+ unsigned size, MemTxAttrs attrs)
27
{
612
{
28
- XilinxQSPIPS *q = opaque;
613
assert(ri->fieldoffset);
29
- uint32_t ret;
614
if (cpreg_field_is_64bit(ri)) {
30
+ XilinxQSPIPS *q = XILINX_QSPIPS(opaque);
615
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_trap_aa32s_el1(CPUARMState *env,
31
616
return CP_ACCESS_TRAP_UNCATEGORIZED;
32
if (addr >= q->lqspi_cached_addr &&
617
}
33
addr <= q->lqspi_cached_addr + LQSPI_CACHE_SIZE - 4) {
618
34
uint8_t *retp = &q->lqspi_buf[addr - q->lqspi_cached_addr];
619
-static uint64_t arm_mdcr_el2_eff(CPUARMState *env)
35
- ret = cpu_to_le32(*(uint32_t *)retp);
620
-{
36
- DB_PRINT_L(1, "addr: %08x, data: %08x\n", (unsigned)addr,
621
- return arm_is_el2_enabled(env) ? env->cp15.mdcr_el2 : 0;
37
- (unsigned)ret);
622
-}
38
- return ret;
623
-
624
-/*
625
- * Check for traps to "powerdown debug" registers, which are controlled
626
- * by MDCR.TDOSA
627
- */
628
-static CPAccessResult access_tdosa(CPUARMState *env, const ARMCPRegInfo *ri,
629
- bool isread)
630
-{
631
- int el = arm_current_el(env);
632
- uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
633
- bool mdcr_el2_tdosa = (mdcr_el2 & MDCR_TDOSA) || (mdcr_el2 & MDCR_TDE) ||
634
- (arm_hcr_el2_eff(env) & HCR_TGE);
635
-
636
- if (el < 2 && mdcr_el2_tdosa) {
637
- return CP_ACCESS_TRAP_EL2;
638
- }
639
- if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDOSA)) {
640
- return CP_ACCESS_TRAP_EL3;
641
- }
642
- return CP_ACCESS_OK;
643
-}
644
-
645
-/*
646
- * Check for traps to "debug ROM" registers, which are controlled
647
- * by MDCR_EL2.TDRA for EL2 but by the more general MDCR_EL3.TDA for EL3.
648
- */
649
-static CPAccessResult access_tdra(CPUARMState *env, const ARMCPRegInfo *ri,
650
- bool isread)
651
-{
652
- int el = arm_current_el(env);
653
- uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
654
- bool mdcr_el2_tdra = (mdcr_el2 & MDCR_TDRA) || (mdcr_el2 & MDCR_TDE) ||
655
- (arm_hcr_el2_eff(env) & HCR_TGE);
656
-
657
- if (el < 2 && mdcr_el2_tdra) {
658
- return CP_ACCESS_TRAP_EL2;
659
- }
660
- if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
661
- return CP_ACCESS_TRAP_EL3;
662
- }
663
- return CP_ACCESS_OK;
664
-}
665
-
666
-/*
667
- * Check for traps to general debug registers, which are controlled
668
- * by MDCR_EL2.TDA for EL2 and MDCR_EL3.TDA for EL3.
669
- */
670
-static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri,
671
- bool isread)
672
-{
673
- int el = arm_current_el(env);
674
- uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
675
- bool mdcr_el2_tda = (mdcr_el2 & MDCR_TDA) || (mdcr_el2 & MDCR_TDE) ||
676
- (arm_hcr_el2_eff(env) & HCR_TGE);
677
-
678
- if (el < 2 && mdcr_el2_tda) {
679
- return CP_ACCESS_TRAP_EL2;
680
- }
681
- if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
682
- return CP_ACCESS_TRAP_EL3;
683
- }
684
- return CP_ACCESS_OK;
685
-}
686
-
687
/* Check for traps to performance monitor registers, which are controlled
688
* by MDCR_EL2.TPM for EL2 and MDCR_EL3.TPM for EL3.
689
*/
690
@@ -XXX,XX +XXX,XX @@ static CPAccessResult ctr_el0_access(CPUARMState *env, const ARMCPRegInfo *ri,
691
return CP_ACCESS_OK;
692
}
693
694
-static void oslar_write(CPUARMState *env, const ARMCPRegInfo *ri,
695
- uint64_t value)
696
-{
697
- /*
698
- * Writes to OSLAR_EL1 may update the OS lock status, which can be
699
- * read via a bit in OSLSR_EL1.
700
- */
701
- int oslock;
702
-
703
- if (ri->state == ARM_CP_STATE_AA32) {
704
- oslock = (value == 0xC5ACCE55);
39
- } else {
705
- } else {
40
- lqspi_load_cache(opaque, addr);
706
- oslock = value & 1;
41
- return lqspi_read(opaque, addr, size);
707
- }
42
+ *value = cpu_to_le32(*(uint32_t *)retp);
708
-
43
+ DB_PRINT_L(1, "addr: %08" HWADDR_PRIx ", data: %08" PRIx64 "\n",
709
- env->cp15.oslsr_el1 = deposit32(env->cp15.oslsr_el1, 1, 1, oslock);
44
+ addr, *value);
710
-}
45
+ return MEMTX_OK;
711
-
46
}
712
-static const ARMCPRegInfo debug_cp_reginfo[] = {
47
+
713
- /*
48
+ lqspi_load_cache(opaque, addr);
714
- * DBGDRAR, DBGDSAR: always RAZ since we don't implement memory mapped
49
+ return lqspi_read(opaque, addr, value, size, attrs);
715
- * debug components. The AArch64 version of DBGDRAR is named MDRAR_EL1;
50
}
716
- * unlike DBGDRAR it is never accessible from EL0.
51
717
- * DBGDSAR is deprecated and must RAZ from v8 anyway, so it has no AArch64
52
static const MemoryRegionOps lqspi_ops = {
718
- * accessor.
53
- .read = lqspi_read,
719
- */
54
+ .read_with_attrs = lqspi_read,
720
- { .name = "DBGDRAR", .cp = 14, .crn = 1, .crm = 0, .opc1 = 0, .opc2 = 0,
55
.endianness = DEVICE_NATIVE_ENDIAN,
721
- .access = PL0_R, .accessfn = access_tdra,
56
.valid = {
722
- .type = ARM_CP_CONST, .resetvalue = 0 },
57
.min_access_size = 1,
723
- { .name = "MDRAR_EL1", .state = ARM_CP_STATE_AA64,
724
- .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 0,
725
- .access = PL1_R, .accessfn = access_tdra,
726
- .type = ARM_CP_CONST, .resetvalue = 0 },
727
- { .name = "DBGDSAR", .cp = 14, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 0,
728
- .access = PL0_R, .accessfn = access_tdra,
729
- .type = ARM_CP_CONST, .resetvalue = 0 },
730
- /* Monitor debug system control register; the 32-bit alias is DBGDSCRext. */
731
- { .name = "MDSCR_EL1", .state = ARM_CP_STATE_BOTH,
732
- .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2,
733
- .access = PL1_RW, .accessfn = access_tda,
734
- .fieldoffset = offsetof(CPUARMState, cp15.mdscr_el1),
735
- .resetvalue = 0 },
736
- /*
737
- * MDCCSR_EL0[30:29] map to EDSCR[30:29]. Simply RAZ as the external
738
- * Debug Communication Channel is not implemented.
739
- */
740
- { .name = "MDCCSR_EL0", .state = ARM_CP_STATE_AA64,
741
- .opc0 = 2, .opc1 = 3, .crn = 0, .crm = 1, .opc2 = 0,
742
- .access = PL0_R, .accessfn = access_tda,
743
- .type = ARM_CP_CONST, .resetvalue = 0 },
744
- /*
745
- * DBGDSCRint[15,12,5:2] map to MDSCR_EL1[15,12,5:2]. Map all bits as
746
- * it is unlikely a guest will care.
747
- * We don't implement the configurable EL0 access.
748
- */
749
- { .name = "DBGDSCRint", .state = ARM_CP_STATE_AA32,
750
- .cp = 14, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 0,
751
- .type = ARM_CP_ALIAS,
752
- .access = PL1_R, .accessfn = access_tda,
753
- .fieldoffset = offsetof(CPUARMState, cp15.mdscr_el1), },
754
- { .name = "OSLAR_EL1", .state = ARM_CP_STATE_BOTH,
755
- .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 4,
756
- .access = PL1_W, .type = ARM_CP_NO_RAW,
757
- .accessfn = access_tdosa,
758
- .writefn = oslar_write },
759
- { .name = "OSLSR_EL1", .state = ARM_CP_STATE_BOTH,
760
- .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 4,
761
- .access = PL1_R, .resetvalue = 10,
762
- .accessfn = access_tdosa,
763
- .fieldoffset = offsetof(CPUARMState, cp15.oslsr_el1) },
764
- /* Dummy OSDLR_EL1: 32-bit Linux will read this */
765
- { .name = "OSDLR_EL1", .state = ARM_CP_STATE_BOTH,
766
- .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 3, .opc2 = 4,
767
- .access = PL1_RW, .accessfn = access_tdosa,
768
- .type = ARM_CP_NOP },
769
- /*
770
- * Dummy DBGVCR: Linux wants to clear this on startup, but we don't
771
- * implement vector catch debug events yet.
772
- */
773
- { .name = "DBGVCR",
774
- .cp = 14, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 0,
775
- .access = PL1_RW, .accessfn = access_tda,
776
- .type = ARM_CP_NOP },
777
- /*
778
- * Dummy DBGVCR32_EL2 (which is only for a 64-bit hypervisor
779
- * to save and restore a 32-bit guest's DBGVCR)
780
- */
781
- { .name = "DBGVCR32_EL2", .state = ARM_CP_STATE_AA64,
782
- .opc0 = 2, .opc1 = 4, .crn = 0, .crm = 7, .opc2 = 0,
783
- .access = PL2_RW, .accessfn = access_tda,
784
- .type = ARM_CP_NOP | ARM_CP_EL3_NO_EL2_KEEP },
785
- /*
786
- * Dummy MDCCINT_EL1, since we don't implement the Debug Communications
787
- * Channel but Linux may try to access this register. The 32-bit
788
- * alias is DBGDCCINT.
789
- */
790
- { .name = "MDCCINT_EL1", .state = ARM_CP_STATE_BOTH,
791
- .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
792
- .access = PL1_RW, .accessfn = access_tda,
793
- .type = ARM_CP_NOP },
794
-};
795
-
796
-static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
797
- /* 64 bit access versions of the (dummy) debug registers */
798
- { .name = "DBGDRAR", .cp = 14, .crm = 1, .opc1 = 0,
799
- .access = PL0_R, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
800
- { .name = "DBGDSAR", .cp = 14, .crm = 2, .opc1 = 0,
801
- .access = PL0_R, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
802
-};
803
-
804
/*
805
* Check for traps to RAS registers, which are controlled
806
* by HCR_EL2.TERR and SCR_EL3.TERR.
807
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo sme_reginfo[] = {
808
};
809
#endif /* TARGET_AARCH64 */
810
811
-void hw_watchpoint_update(ARMCPU *cpu, int n)
812
-{
813
- CPUARMState *env = &cpu->env;
814
- vaddr len = 0;
815
- vaddr wvr = env->cp15.dbgwvr[n];
816
- uint64_t wcr = env->cp15.dbgwcr[n];
817
- int mask;
818
- int flags = BP_CPU | BP_STOP_BEFORE_ACCESS;
819
-
820
- if (env->cpu_watchpoint[n]) {
821
- cpu_watchpoint_remove_by_ref(CPU(cpu), env->cpu_watchpoint[n]);
822
- env->cpu_watchpoint[n] = NULL;
823
- }
824
-
825
- if (!FIELD_EX64(wcr, DBGWCR, E)) {
826
- /* E bit clear : watchpoint disabled */
827
- return;
828
- }
829
-
830
- switch (FIELD_EX64(wcr, DBGWCR, LSC)) {
831
- case 0:
832
- /* LSC 00 is reserved and must behave as if the wp is disabled */
833
- return;
834
- case 1:
835
- flags |= BP_MEM_READ;
836
- break;
837
- case 2:
838
- flags |= BP_MEM_WRITE;
839
- break;
840
- case 3:
841
- flags |= BP_MEM_ACCESS;
842
- break;
843
- }
844
-
845
- /*
846
- * Attempts to use both MASK and BAS fields simultaneously are
847
- * CONSTRAINED UNPREDICTABLE; we opt to ignore BAS in this case,
848
- * thus generating a watchpoint for every byte in the masked region.
849
- */
850
- mask = FIELD_EX64(wcr, DBGWCR, MASK);
851
- if (mask == 1 || mask == 2) {
852
- /*
853
- * Reserved values of MASK; we must act as if the mask value was
854
- * some non-reserved value, or as if the watchpoint were disabled.
855
- * We choose the latter.
856
- */
857
- return;
858
- } else if (mask) {
859
- /* Watchpoint covers an aligned area up to 2GB in size */
860
- len = 1ULL << mask;
861
- /*
862
- * If masked bits in WVR are not zero it's CONSTRAINED UNPREDICTABLE
863
- * whether the watchpoint fires when the unmasked bits match; we opt
864
- * to generate the exceptions.
865
- */
866
- wvr &= ~(len - 1);
867
- } else {
868
- /* Watchpoint covers bytes defined by the byte address select bits */
869
- int bas = FIELD_EX64(wcr, DBGWCR, BAS);
870
- int basstart;
871
-
872
- if (extract64(wvr, 2, 1)) {
873
- /*
874
- * Deprecated case of an only 4-aligned address. BAS[7:4] are
875
- * ignored, and BAS[3:0] define which bytes to watch.
876
- */
877
- bas &= 0xf;
878
- }
879
-
880
- if (bas == 0) {
881
- /* This must act as if the watchpoint is disabled */
882
- return;
883
- }
884
-
885
- /*
886
- * The BAS bits are supposed to be programmed to indicate a contiguous
887
- * range of bytes. Otherwise it is CONSTRAINED UNPREDICTABLE whether
888
- * we fire for each byte in the word/doubleword addressed by the WVR.
889
- * We choose to ignore any non-zero bits after the first range of 1s.
890
- */
891
- basstart = ctz32(bas);
892
- len = cto32(bas >> basstart);
893
- wvr += basstart;
894
- }
895
-
896
- cpu_watchpoint_insert(CPU(cpu), wvr, len, flags,
897
- &env->cpu_watchpoint[n]);
898
-}
899
-
900
-void hw_watchpoint_update_all(ARMCPU *cpu)
901
-{
902
- int i;
903
- CPUARMState *env = &cpu->env;
904
-
905
- /*
906
- * Completely clear out existing QEMU watchpoints and our array, to
907
- * avoid possible stale entries following migration load.
908
- */
909
- cpu_watchpoint_remove_all(CPU(cpu), BP_CPU);
910
- memset(env->cpu_watchpoint, 0, sizeof(env->cpu_watchpoint));
911
-
912
- for (i = 0; i < ARRAY_SIZE(cpu->env.cpu_watchpoint); i++) {
913
- hw_watchpoint_update(cpu, i);
914
- }
915
-}
916
-
917
-static void dbgwvr_write(CPUARMState *env, const ARMCPRegInfo *ri,
918
- uint64_t value)
919
-{
920
- ARMCPU *cpu = env_archcpu(env);
921
- int i = ri->crm;
922
-
923
- /*
924
- * Bits [1:0] are RES0.
925
- *
926
- * It is IMPLEMENTATION DEFINED whether [63:49] ([63:53] with FEAT_LVA)
927
- * are hardwired to the value of bit [48] ([52] with FEAT_LVA), or if
928
- * they contain the value written. It is CONSTRAINED UNPREDICTABLE
929
- * whether the RESS bits are ignored when comparing an address.
930
- *
931
- * Therefore we are allowed to compare the entire register, which lets
932
- * us avoid considering whether or not FEAT_LVA is actually enabled.
933
- */
934
- value &= ~3ULL;
935
-
936
- raw_write(env, ri, value);
937
- hw_watchpoint_update(cpu, i);
938
-}
939
-
940
-static void dbgwcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
941
- uint64_t value)
942
-{
943
- ARMCPU *cpu = env_archcpu(env);
944
- int i = ri->crm;
945
-
946
- raw_write(env, ri, value);
947
- hw_watchpoint_update(cpu, i);
948
-}
949
-
950
-void hw_breakpoint_update(ARMCPU *cpu, int n)
951
-{
952
- CPUARMState *env = &cpu->env;
953
- uint64_t bvr = env->cp15.dbgbvr[n];
954
- uint64_t bcr = env->cp15.dbgbcr[n];
955
- vaddr addr;
956
- int bt;
957
- int flags = BP_CPU;
958
-
959
- if (env->cpu_breakpoint[n]) {
960
- cpu_breakpoint_remove_by_ref(CPU(cpu), env->cpu_breakpoint[n]);
961
- env->cpu_breakpoint[n] = NULL;
962
- }
963
-
964
- if (!extract64(bcr, 0, 1)) {
965
- /* E bit clear : watchpoint disabled */
966
- return;
967
- }
968
-
969
- bt = extract64(bcr, 20, 4);
970
-
971
- switch (bt) {
972
- case 4: /* unlinked address mismatch (reserved if AArch64) */
973
- case 5: /* linked address mismatch (reserved if AArch64) */
974
- qemu_log_mask(LOG_UNIMP,
975
- "arm: address mismatch breakpoint types not implemented\n");
976
- return;
977
- case 0: /* unlinked address match */
978
- case 1: /* linked address match */
979
- {
980
- /*
981
- * Bits [1:0] are RES0.
982
- *
983
- * It is IMPLEMENTATION DEFINED whether bits [63:49]
984
- * ([63:53] for FEAT_LVA) are hardwired to a copy of the sign bit
985
- * of the VA field ([48] or [52] for FEAT_LVA), or whether the
986
- * value is read as written. It is CONSTRAINED UNPREDICTABLE
987
- * whether the RESS bits are ignored when comparing an address.
988
- * Therefore we are allowed to compare the entire register, which
989
- * lets us avoid considering whether FEAT_LVA is actually enabled.
990
- *
991
- * The BAS field is used to allow setting breakpoints on 16-bit
992
- * wide instructions; it is CONSTRAINED UNPREDICTABLE whether
993
- * a bp will fire if the addresses covered by the bp and the addresses
994
- * covered by the insn overlap but the insn doesn't start at the
995
- * start of the bp address range. We choose to require the insn and
996
- * the bp to have the same address. The constraints on writing to
997
- * BAS enforced in dbgbcr_write mean we have only four cases:
998
- * 0b0000 => no breakpoint
999
- * 0b0011 => breakpoint on addr
1000
- * 0b1100 => breakpoint on addr + 2
1001
- * 0b1111 => breakpoint on addr
1002
- * See also figure D2-3 in the v8 ARM ARM (DDI0487A.c).
1003
- */
1004
- int bas = extract64(bcr, 5, 4);
1005
- addr = bvr & ~3ULL;
1006
- if (bas == 0) {
1007
- return;
1008
- }
1009
- if (bas == 0xc) {
1010
- addr += 2;
1011
- }
1012
- break;
1013
- }
1014
- case 2: /* unlinked context ID match */
1015
- case 8: /* unlinked VMID match (reserved if no EL2) */
1016
- case 10: /* unlinked context ID and VMID match (reserved if no EL2) */
1017
- qemu_log_mask(LOG_UNIMP,
1018
- "arm: unlinked context breakpoint types not implemented\n");
1019
- return;
1020
- case 9: /* linked VMID match (reserved if no EL2) */
1021
- case 11: /* linked context ID and VMID match (reserved if no EL2) */
1022
- case 3: /* linked context ID match */
1023
- default:
1024
- /*
1025
- * We must generate no events for Linked context matches (unless
1026
- * they are linked to by some other bp/wp, which is handled in
1027
- * updates for the linking bp/wp). We choose to also generate no events
1028
- * for reserved values.
1029
- */
1030
- return;
1031
- }
1032
-
1033
- cpu_breakpoint_insert(CPU(cpu), addr, flags, &env->cpu_breakpoint[n]);
1034
-}
1035
-
1036
-void hw_breakpoint_update_all(ARMCPU *cpu)
1037
-{
1038
- int i;
1039
- CPUARMState *env = &cpu->env;
1040
-
1041
- /*
1042
- * Completely clear out existing QEMU breakpoints and our array, to
1043
- * avoid possible stale entries following migration load.
1044
- */
1045
- cpu_breakpoint_remove_all(CPU(cpu), BP_CPU);
1046
- memset(env->cpu_breakpoint, 0, sizeof(env->cpu_breakpoint));
1047
-
1048
- for (i = 0; i < ARRAY_SIZE(cpu->env.cpu_breakpoint); i++) {
1049
- hw_breakpoint_update(cpu, i);
1050
- }
1051
-}
1052
-
1053
-static void dbgbvr_write(CPUARMState *env, const ARMCPRegInfo *ri,
1054
- uint64_t value)
1055
-{
1056
- ARMCPU *cpu = env_archcpu(env);
1057
- int i = ri->crm;
1058
-
1059
- raw_write(env, ri, value);
1060
- hw_breakpoint_update(cpu, i);
1061
-}
1062
-
1063
-static void dbgbcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
1064
- uint64_t value)
1065
-{
1066
- ARMCPU *cpu = env_archcpu(env);
1067
- int i = ri->crm;
1068
-
1069
- /*
1070
- * BAS[3] is a read-only copy of BAS[2], and BAS[1] a read-only
1071
- * copy of BAS[0].
1072
- */
1073
- value = deposit64(value, 6, 1, extract64(value, 5, 1));
1074
- value = deposit64(value, 8, 1, extract64(value, 7, 1));
1075
-
1076
- raw_write(env, ri, value);
1077
- hw_breakpoint_update(cpu, i);
1078
-}
1079
-
1080
-static void define_debug_regs(ARMCPU *cpu)
1081
-{
1082
- /*
1083
- * Define v7 and v8 architectural debug registers.
1084
- * These are just dummy implementations for now.
1085
- */
1086
- int i;
1087
- int wrps, brps, ctx_cmps;
1088
-
1089
- /*
1090
- * The Arm ARM says DBGDIDR is optional and deprecated if EL1 cannot
1091
- * use AArch32. Given that bit 15 is RES1, if the value is 0 then
1092
- * the register must not exist for this cpu.
1093
- */
1094
- if (cpu->isar.dbgdidr != 0) {
1095
- ARMCPRegInfo dbgdidr = {
1096
- .name = "DBGDIDR", .cp = 14, .crn = 0, .crm = 0,
1097
- .opc1 = 0, .opc2 = 0,
1098
- .access = PL0_R, .accessfn = access_tda,
1099
- .type = ARM_CP_CONST, .resetvalue = cpu->isar.dbgdidr,
1100
- };
1101
- define_one_arm_cp_reg(cpu, &dbgdidr);
1102
- }
1103
-
1104
- brps = arm_num_brps(cpu);
1105
- wrps = arm_num_wrps(cpu);
1106
- ctx_cmps = arm_num_ctx_cmps(cpu);
1107
-
1108
- assert(ctx_cmps <= brps);
1109
-
1110
- define_arm_cp_regs(cpu, debug_cp_reginfo);
1111
-
1112
- if (arm_feature(&cpu->env, ARM_FEATURE_LPAE)) {
1113
- define_arm_cp_regs(cpu, debug_lpae_cp_reginfo);
1114
- }
1115
-
1116
- for (i = 0; i < brps; i++) {
1117
- char *dbgbvr_el1_name = g_strdup_printf("DBGBVR%d_EL1", i);
1118
- char *dbgbcr_el1_name = g_strdup_printf("DBGBCR%d_EL1", i);
1119
- ARMCPRegInfo dbgregs[] = {
1120
- { .name = dbgbvr_el1_name, .state = ARM_CP_STATE_BOTH,
1121
- .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 4,
1122
- .access = PL1_RW, .accessfn = access_tda,
1123
- .fieldoffset = offsetof(CPUARMState, cp15.dbgbvr[i]),
1124
- .writefn = dbgbvr_write, .raw_writefn = raw_write
1125
- },
1126
- { .name = dbgbcr_el1_name, .state = ARM_CP_STATE_BOTH,
1127
- .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 5,
1128
- .access = PL1_RW, .accessfn = access_tda,
1129
- .fieldoffset = offsetof(CPUARMState, cp15.dbgbcr[i]),
1130
- .writefn = dbgbcr_write, .raw_writefn = raw_write
1131
- },
1132
- };
1133
- define_arm_cp_regs(cpu, dbgregs);
1134
- g_free(dbgbvr_el1_name);
1135
- g_free(dbgbcr_el1_name);
1136
- }
1137
-
1138
- for (i = 0; i < wrps; i++) {
1139
- char *dbgwvr_el1_name = g_strdup_printf("DBGWVR%d_EL1", i);
1140
- char *dbgwcr_el1_name = g_strdup_printf("DBGWCR%d_EL1", i);
1141
- ARMCPRegInfo dbgregs[] = {
1142
- { .name = dbgwvr_el1_name, .state = ARM_CP_STATE_BOTH,
1143
- .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 6,
1144
- .access = PL1_RW, .accessfn = access_tda,
1145
- .fieldoffset = offsetof(CPUARMState, cp15.dbgwvr[i]),
1146
- .writefn = dbgwvr_write, .raw_writefn = raw_write
1147
- },
1148
- { .name = dbgwcr_el1_name, .state = ARM_CP_STATE_BOTH,
1149
- .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 7,
1150
- .access = PL1_RW, .accessfn = access_tda,
1151
- .fieldoffset = offsetof(CPUARMState, cp15.dbgwcr[i]),
1152
- .writefn = dbgwcr_write, .raw_writefn = raw_write
1153
- },
1154
- };
1155
- define_arm_cp_regs(cpu, dbgregs);
1156
- g_free(dbgwvr_el1_name);
1157
- g_free(dbgwcr_el1_name);
1158
- }
1159
-}
1160
-
1161
static void define_pmu_regs(ARMCPU *cpu)
1162
{
1163
/*
58
--
1164
--
59
2.20.1
1165
2.25.1
60
61
diff view generated by jsdifflib
1
The ARMv5 architecture didn't specify detailed per-feature ID
1
The "OS Lock" in the Arm debug architecture is a way for software
2
registers. Now that we're using the MVFR0 register fields to
2
to suppress debug exceptions while it is trying to power down
3
gate the existence of VFP instructions, we need to set up
3
a CPU and save the state of the breakpoint and watchpoint
4
the correct values in the cpu->isar structure so that we still
4
registers. In QEMU we implemented the support for writing
5
provide an FPU to the guest.
5
the OS Lock bit via OSLAR_EL1 and reading it via OSLSR_EL1,
6
but didn't implement the actual behaviour.
6
7
7
This fixes a regression in the arm926 and arm1026 CPUs, which
8
The required behaviour with the OS Lock set is:
8
are the only ones that both have VFP and are ARMv5 or earlier.
9
* debug exceptions (apart from BKPT insns) are suppressed
9
This regression was introduced by the VFP refactoring, and more
10
* some MDSCR_EL1 bits allow write access to the corresponding
10
specifically by commits 1120827fa182f0e76 and 266bd25c485597c,
11
EDSCR external debug status register that they shadow
11
which accidentally disabled VFP short-vector support and
12
(we can ignore this because we don't implement external debug)
12
double-precision support on these CPUs.
13
* similarly with the OSECCR_EL1 which shadows the EDECCR
14
(but we don't implement OSECCR_EL1 anyway)
13
15
14
Fixes: 1120827fa182f0e
16
Implement the missing behaviour of suppressing debug
15
Fixes: 266bd25c485597c
17
exceptions.
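
One guest-visible consequence worth noting (illustration only, not part of the patch): the architecture sets the OS Lock on a cold reset, so bare-metal code that wants self-hosted breakpoints or watchpoints has to unlock it first, and with this change QEMU really does suppress those debug exceptions until it does. A minimal AArch64 sketch of the unlock a guest might perform (Linux's arm64 debug-monitors code does the equivalent early in boot):

/* clear OSLSR_EL1.OSLK by writing 0 to the OS Lock Access Register */
static inline void clear_os_lock(void)
{
    __asm__ volatile("msr oslar_el1, xzr");
    __asm__ volatile("isb" ::: "memory");
}
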
16
Fixes: https://bugs.launchpad.net/qemu/+bug/1836192
18
17
Reported-by: Christophe Lyon <christophe.lyon@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
21
Message-id: 20220630194116.3438513-4-peter.maydell@linaro.org
21
Tested-by: Christophe Lyon <christophe.lyon@linaro.org>
22
Message-id: 20190711131241.22231-1-peter.maydell@linaro.org
23
---
22
---
24
target/arm/cpu.c | 12 ++++++++++++
23
target/arm/debug_helper.c | 3 +++
25
1 file changed, 12 insertions(+)
24
1 file changed, 3 insertions(+)
26
25
27
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
26
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
28
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/cpu.c
28
--- a/target/arm/debug_helper.c
30
+++ b/target/arm/cpu.c
29
+++ b/target/arm/debug_helper.c
31
@@ -XXX,XX +XXX,XX @@ static void arm926_initfn(Object *obj)
30
@@ -XXX,XX +XXX,XX @@ static bool aa32_generate_debug_exceptions(CPUARMState *env)
32
* set the field to indicate Jazelle support within QEMU.
31
*/
33
*/
32
bool arm_generate_debug_exceptions(CPUARMState *env)
34
cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
33
{
35
+ /*
34
+ if (env->cp15.oslsr_el1 & 1) {
36
+ * Similarly, we need to set MVFR0 fields to enable double precision
35
+ return false;
37
+ * and short vector support even though ARMv5 doesn't have this register.
36
+ }
38
+ */
37
if (is_a64(env)) {
39
+ cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
38
return aa64_generate_debug_exceptions(env);
40
+ cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPDP, 1);
39
} else {
41
}
42
43
static void arm946_initfn(Object *obj)
44
@@ -XXX,XX +XXX,XX @@ static void arm1026_initfn(Object *obj)
45
* set the field to indicate Jazelle support within QEMU.
46
*/
47
cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
48
+ /*
49
+ * Similarly, we need to set MVFR0 fields to enable double precision
50
+ * and short vector support even though ARMv5 doesn't have this register.
51
+ */
52
+ cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
53
+ cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPDP, 1);
54
55
{
56
/* The 1026 had an IFAR at c6,c0,0,1 rather than the ARMv6 c6,c0,0,2 */
57
--
40
--
58
2.20.1
41
2.25.1
59
60
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
Starting with v7 of the debug architecture, there are three extra
2
ID registers that add information on top of that provided in
3
DBGDIDR. These are DBGDEVID, DBGDEVID1 and DBGDEVID2. In the
4
v7 debug architecture, DBGDEVID is optional, present only of
5
DBGDIDR.DEVID_imp is set. In v7.1 all three must be present.
2
6
3
Lei Sun found while auditing the code that a CPU write would
7
Implement the missing registers. Note that we only need to set the
4
trigger a NULL pointer dereference.
8
values in the ARMISARegisters struct for the CPUs Cortex-A7, A15,
9
A53, A57 and A72 (plus the 32-bit 'max' which uses the Cortex-A53
10
values): earlier CPUs didn't implement v7 of the architecture, and
11
our other 64-bit CPUs (Cortex-A76, Neoverse-N1 and A64fx) don't have
12
AArch32 support at EL1.
5
13
6
>From UG1085 datasheet [*] AXI writes in this region are ignored
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
and generates an AXI Slave Error (SLVERR).
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20220630194116.3438513-5-peter.maydell@linaro.org
17
---
18
target/arm/cpu.h | 7 +++++++
19
target/arm/cpu64.c | 6 ++++++
20
target/arm/cpu_tcg.c | 6 ++++++
21
target/arm/debug_helper.c | 36 ++++++++++++++++++++++++++++++++++++
22
4 files changed, 55 insertions(+)
8
23
9
Fix by implementing the write_with_attrs() handler.
24
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
10
Return MEMTX_ERROR when the region is accessed (this error maps
11
to an AXI slave error).
12
13
[*] https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf
14
15
Reported-by: Lei Sun <slei.casper@gmail.com>
16
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
17
Tested-by: Francisco Iglesias <frasse.iglesias@gmail.com>
18
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
---
21
hw/ssi/xilinx_spips.c | 16 ++++++++++++++++
22
1 file changed, 16 insertions(+)
23
24
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
25
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/ssi/xilinx_spips.c
26
--- a/target/arm/cpu.h
27
+++ b/hw/ssi/xilinx_spips.c
27
+++ b/target/arm/cpu.h
28
@@ -XXX,XX +XXX,XX @@ static MemTxResult lqspi_read(void *opaque, hwaddr addr, uint64_t *value,
28
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
29
return lqspi_read(opaque, addr, value, size, attrs);
29
uint32_t mvfr2;
30
uint32_t id_dfr0;
31
uint32_t dbgdidr;
32
+ uint32_t dbgdevid;
33
+ uint32_t dbgdevid1;
34
uint64_t id_aa64isar0;
35
uint64_t id_aa64isar1;
36
uint64_t id_aa64pfr0;
37
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_ssbs(const ARMISARegisters *id)
38
return FIELD_EX32(id->id_pfr2, ID_PFR2, SSBS) != 0;
30
}
39
}
31
40
32
+static MemTxResult lqspi_write(void *opaque, hwaddr offset, uint64_t value,
41
+static inline bool isar_feature_aa32_debugv7p1(const ARMISARegisters *id)
33
+ unsigned size, MemTxAttrs attrs)
34
+{
42
+{
35
+ /*
43
+ return FIELD_EX32(id->id_dfr0, ID_DFR0, COPDBG) >= 5;
36
+ * From UG1085, Chapter 24 (Quad-SPI controllers):
37
+ * - Writes are ignored
38
+ * - AXI writes generate an external AXI slave error (SLVERR)
39
+ */
40
+ qemu_log_mask(LOG_GUEST_ERROR, "%s Unexpected %u-bit access to 0x%" PRIx64
41
+ " (value: 0x%" PRIx64 "\n",
42
+ __func__, size << 3, offset, value);
43
+
44
+ return MEMTX_ERROR;
45
+}
44
+}
46
+
45
+
47
static const MemoryRegionOps lqspi_ops = {
46
static inline bool isar_feature_aa32_debugv8p2(const ARMISARegisters *id)
48
.read_with_attrs = lqspi_read,
47
{
49
+ .write_with_attrs = lqspi_write,
48
return FIELD_EX32(id->id_dfr0, ID_DFR0, COPDBG) >= 8;
50
.endianness = DEVICE_NATIVE_ENDIAN,
49
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
51
.valid = {
50
index XXXXXXX..XXXXXXX 100644
52
.min_access_size = 1,
51
--- a/target/arm/cpu64.c
52
+++ b/target/arm/cpu64.c
53
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
54
cpu->isar.id_aa64isar0 = 0x00011120;
55
cpu->isar.id_aa64mmfr0 = 0x00001124;
56
cpu->isar.dbgdidr = 0x3516d000;
57
+ cpu->isar.dbgdevid = 0x01110f13;
58
+ cpu->isar.dbgdevid1 = 0x2;
59
cpu->isar.reset_pmcr_el0 = 0x41013000;
60
cpu->clidr = 0x0a200023;
61
cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
62
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
63
cpu->isar.id_aa64isar0 = 0x00011120;
64
cpu->isar.id_aa64mmfr0 = 0x00001122; /* 40 bit physical addr */
65
cpu->isar.dbgdidr = 0x3516d000;
66
+ cpu->isar.dbgdevid = 0x00110f13;
67
+ cpu->isar.dbgdevid1 = 0x1;
68
cpu->isar.reset_pmcr_el0 = 0x41033000;
69
cpu->clidr = 0x0a200023;
70
cpu->ccsidr[0] = 0x700fe01a; /* 32KB L1 dcache */
71
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
72
cpu->isar.id_aa64isar0 = 0x00011120;
73
cpu->isar.id_aa64mmfr0 = 0x00001124;
74
cpu->isar.dbgdidr = 0x3516d000;
75
+ cpu->isar.dbgdevid = 0x01110f13;
76
+ cpu->isar.dbgdevid1 = 0x2;
77
cpu->isar.reset_pmcr_el0 = 0x41023000;
78
cpu->clidr = 0x0a200023;
79
cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
80
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
81
index XXXXXXX..XXXXXXX 100644
82
--- a/target/arm/cpu_tcg.c
83
+++ b/target/arm/cpu_tcg.c
84
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
85
cpu->isar.id_isar3 = 0x11112131;
86
cpu->isar.id_isar4 = 0x10011142;
87
cpu->isar.dbgdidr = 0x3515f005;
88
+ cpu->isar.dbgdevid = 0x01110f13;
89
+ cpu->isar.dbgdevid1 = 0x1;
90
cpu->clidr = 0x0a200023;
91
cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
92
cpu->ccsidr[1] = 0x201fe00a; /* 32K L1 icache */
93
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
94
cpu->isar.id_isar3 = 0x11112131;
95
cpu->isar.id_isar4 = 0x10011142;
96
cpu->isar.dbgdidr = 0x3515f021;
97
+ cpu->isar.dbgdevid = 0x01110f13;
98
+ cpu->isar.dbgdevid1 = 0x0;
99
cpu->clidr = 0x0a200023;
100
cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
101
cpu->ccsidr[1] = 0x201fe00a; /* 32K L1 icache */
102
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
103
cpu->isar.id_isar5 = 0x00011121;
104
cpu->isar.id_isar6 = 0;
105
cpu->isar.dbgdidr = 0x3516d000;
106
+ cpu->isar.dbgdevid = 0x00110f13;
107
+ cpu->isar.dbgdevid1 = 0x2;
108
cpu->isar.reset_pmcr_el0 = 0x41013000;
109
cpu->clidr = 0x0a200023;
110
cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
111
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
112
index XXXXXXX..XXXXXXX 100644
113
--- a/target/arm/debug_helper.c
114
+++ b/target/arm/debug_helper.c
115
@@ -XXX,XX +XXX,XX @@ void define_debug_regs(ARMCPU *cpu)
116
define_one_arm_cp_reg(cpu, &dbgdidr);
117
}
118
119
+ /*
120
+ * DBGDEVID is present in the v7 debug architecture if
121
+ * DBGDIDR.DEVID_imp is 1 (bit 15); from v7.1 and on it is
122
+ * mandatory (and bit 15 is RES1). DBGDEVID1 and DBGDEVID2 exist
123
+ * from v7.1 of the debug architecture. Because no fields have yet
124
+ * been defined in DBGDEVID2 (and quite possibly none will ever
125
+ * be) we don't define an ARMISARegisters field for it.
126
+ * These registers exist only if EL1 can use AArch32, but that
127
+ * happens naturally because they are only PL1 accessible anyway.
128
+ */
129
+ if (extract32(cpu->isar.dbgdidr, 15, 1)) {
130
+ ARMCPRegInfo dbgdevid = {
131
+ .name = "DBGDEVID",
132
+ .cp = 14, .opc1 = 0, .crn = 7, .crm = 2, .opc2 = 7,
133
+ .access = PL1_R, .accessfn = access_tda,
134
+ .type = ARM_CP_CONST, .resetvalue = cpu->isar.dbgdevid,
135
+ };
136
+ define_one_arm_cp_reg(cpu, &dbgdevid);
137
+ }
138
+ if (cpu_isar_feature(aa32_debugv7p1, cpu)) {
139
+ ARMCPRegInfo dbgdevid12[] = {
140
+ {
141
+ .name = "DBGDEVID1",
142
+ .cp = 14, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 7,
143
+ .access = PL1_R, .accessfn = access_tda,
144
+ .type = ARM_CP_CONST, .resetvalue = cpu->isar.dbgdevid1,
145
+ }, {
146
+ .name = "DBGDEVID2",
147
+ .cp = 14, .opc1 = 0, .crn = 7, .crm = 0, .opc2 = 7,
148
+ .access = PL1_R, .accessfn = access_tda,
149
+ .type = ARM_CP_CONST, .resetvalue = 0,
150
+ },
151
+ };
152
+ define_arm_cp_regs(cpu, dbgdevid12);
153
+ }
154
+
155
brps = arm_num_brps(cpu);
156
wrps = arm_num_wrps(cpu);
157
ctx_cmps = arm_num_ctx_cmps(cpu);
53
--
158
--
54
2.20.1
159
2.25.1
55
56
diff view generated by jsdifflib
1
The PL031 RTC tracks the difference between the guest RTC
1
The architecture defines the OS DoubleLock as a register which
2
and the host RTC using a tick_offset field. For migration,
2
(similarly to the OS Lock) suppresses debug events for use in CPU
3
however, we currently always migrate the offset between
3
powerdown sequences. This functionality is required in Arm v7 and
4
the guest and the vm_clock, even if the RTC clock is not
4
v8.0; from v8.2 it becomes optional and in v9 it must not be
5
the same as the vm_clock; this was an attempt to retain
5
implemented.
6
migration backwards compatibility.
7
6
8
Unfortunately this results in the RTC behaving oddly across
7
Currently in QEMU we implement the OSDLR_EL1 register as a NOP. This
9
a VM state save and restore -- since the VM clock stands still
8
is wrong both for the "feature implemented" and the "feature not
10
across save-then-restore, regardless of how much real world
9
implemented" cases: if the feature is implemented then the DLK bit
11
time has elapsed, the guest RTC ends up out of sync with the
10
should read as written and cause suppression of debug exceptions, and
12
host RTC in the restored VM.
11
if it is not implemented then the bit must be RAZ/WI.
13
12
14
Fix this by migrating the raw tick_offset. To retain migration
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
compatibility as far as possible, we have a new property
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
migrate-tick-offset; by default this is 'true' and we will
15
---
17
migrate the true tick offset in a new subsection; if the
16
target/arm/cpu.h | 20 ++++++++++++++++++++
18
incoming data has no subsection we fall back to the old
17
target/arm/debug_helper.c | 20 ++++++++++++++++++--
19
vm_clock-based offset information, so old->new migration
18
2 files changed, 38 insertions(+), 2 deletions(-)
20
compatibility is preserved. For complete new->old migration
21
compatibility, the property is set to 'false' for 4.0 and
22
earlier machine types (this will only affect 'virt-4.0'
23
and below, as none of the other pl031-using machines are
24
versioned).
25
19
26
Reported-by: Russell King <rmk@armlinux.org.uk>
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
29
Message-id: 20190709143912.28905-1-peter.maydell@linaro.org
30
---
31
include/hw/timer/pl031.h | 2 +
32
hw/core/machine.c | 1 +
33
hw/timer/pl031.c | 92 ++++++++++++++++++++++++++++++++++++++--
34
3 files changed, 91 insertions(+), 4 deletions(-)
35
36
diff --git a/include/hw/timer/pl031.h b/include/hw/timer/pl031.h
37
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
38
--- a/include/hw/timer/pl031.h
22
--- a/target/arm/cpu.h
39
+++ b/include/hw/timer/pl031.h
23
+++ b/target/arm/cpu.h
40
@@ -XXX,XX +XXX,XX @@ typedef struct PL031State {
24
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
41
*/
25
uint64_t dbgwcr[16]; /* watchpoint control registers */
42
uint32_t tick_offset_vmstate;
26
uint64_t mdscr_el1;
43
uint32_t tick_offset;
27
uint64_t oslsr_el1; /* OS Lock Status */
44
+ bool tick_offset_migrated;
28
+ uint64_t osdlr_el1; /* OS DoubleLock status */
45
+ bool migrate_tick_offset;
29
uint64_t mdcr_el2;
46
30
uint64_t mdcr_el3;
47
uint32_t mr;
31
/* Stores the architectural value of the counter *the last time it was
48
uint32_t lr;
32
@@ -XXX,XX +XXX,XX @@ FIELD(DBGDIDR, CTX_CMPS, 20, 4)
49
diff --git a/hw/core/machine.c b/hw/core/machine.c
33
FIELD(DBGDIDR, BRPS, 24, 4)
50
index XXXXXXX..XXXXXXX 100644
34
FIELD(DBGDIDR, WRPS, 28, 4)
51
--- a/hw/core/machine.c
35
52
+++ b/hw/core/machine.c
36
+FIELD(DBGDEVID, PCSAMPLE, 0, 4)
53
@@ -XXX,XX +XXX,XX @@ GlobalProperty hw_compat_4_0[] = {
37
+FIELD(DBGDEVID, WPADDRMASK, 4, 4)
54
{ "virtio-gpu-pci", "edid", "false" },
38
+FIELD(DBGDEVID, BPADDRMASK, 8, 4)
55
{ "virtio-device", "use-started", "false" },
39
+FIELD(DBGDEVID, VECTORCATCH, 12, 4)
56
{ "virtio-balloon-device", "qemu-4-0-config-size", "true" },
40
+FIELD(DBGDEVID, VIRTEXTNS, 16, 4)
57
+ { "pl031", "migrate-tick-offset", "false" },
41
+FIELD(DBGDEVID, DOUBLELOCK, 20, 4)
58
};
42
+FIELD(DBGDEVID, AUXREGS, 24, 4)
59
const size_t hw_compat_4_0_len = G_N_ELEMENTS(hw_compat_4_0);
43
+FIELD(DBGDEVID, CIDMASK, 28, 4)
60
44
+
61
diff --git a/hw/timer/pl031.c b/hw/timer/pl031.c
45
FIELD(MVFR0, SIMDREG, 0, 4)
62
index XXXXXXX..XXXXXXX 100644
46
FIELD(MVFR0, FPSP, 4, 4)
63
--- a/hw/timer/pl031.c
47
FIELD(MVFR0, FPDP, 8, 4)
64
+++ b/hw/timer/pl031.c
48
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_debugv8p2(const ARMISARegisters *id)
65
@@ -XXX,XX +XXX,XX @@ static int pl031_pre_save(void *opaque)
49
return FIELD_EX32(id->id_dfr0, ID_DFR0, COPDBG) >= 8;
66
{
67
PL031State *s = opaque;
68
69
- /* tick_offset is base_time - rtc_clock base time. Instead, we want to
70
- * store the base time relative to the QEMU_CLOCK_VIRTUAL for backwards-compatibility. */
71
+ /*
72
+ * The PL031 device model code uses the tick_offset field, which is
73
+ * the offset between what the guest RTC should read and what the
74
+ * QEMU rtc_clock reads:
75
+ * guest_rtc = rtc_clock + tick_offset
76
+ * and so
77
+ * tick_offset = guest_rtc - rtc_clock
78
+ *
79
+ * We want to migrate this offset, which sounds straightforward.
80
+ * Unfortunately older versions of QEMU migrated a conversion of this
81
+ * offset into an offset from the vm_clock. (This was in turn an
82
+ * attempt to be compatible with even older QEMU versions, but it
83
+ * has incorrect behaviour if the rtc_clock is not the same as the
84
+ * vm_clock.) So we put the actual tick_offset into a migration
85
+ * subsection, and the backwards-compatible time-relative-to-vm_clock
86
+ * in the main migration state.
87
+ *
88
+ * Calculate base time relative to QEMU_CLOCK_VIRTUAL:
89
+ */
90
int64_t delta = qemu_clock_get_ns(rtc_clock) - qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
91
s->tick_offset_vmstate = s->tick_offset + delta / NANOSECONDS_PER_SECOND;
92
93
return 0;
94
}
50
}
95
51
96
+static int pl031_pre_load(void *opaque)
52
+static inline bool isar_feature_aa32_doublelock(const ARMISARegisters *id)
97
+{
53
+{
98
+ PL031State *s = opaque;
54
+ return FIELD_EX32(id->dbgdevid, DBGDEVID, DOUBLELOCK) > 0;
99
+
100
+ s->tick_offset_migrated = false;
101
+ return 0;
102
+}
55
+}
103
+
56
+
104
static int pl031_post_load(void *opaque, int version_id)
57
/*
105
{
58
* 64-bit feature tests via id registers.
106
PL031State *s = opaque;
59
*/
107
60
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sme_fa64(const ARMISARegisters *id)
108
- int64_t delta = qemu_clock_get_ns(rtc_clock) - qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
61
return FIELD_EX64(id->id_aa64smfr0, ID_AA64SMFR0, FA64);
109
- s->tick_offset = s->tick_offset_vmstate - delta / NANOSECONDS_PER_SECOND;
110
+ /*
111
+ * If we got the tick_offset subsection, then we can just use
112
+ * the value in that. Otherwise the source is an older QEMU and
113
+ * has given us the offset from the vm_clock; convert it back to
114
+ * an offset from the rtc_clock. This will cause time to incorrectly
115
+ * go backwards compared to the host RTC, but this is unavoidable.
116
+ */
117
+
118
+ if (!s->tick_offset_migrated) {
119
+ int64_t delta = qemu_clock_get_ns(rtc_clock) -
120
+ qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
121
+ s->tick_offset = s->tick_offset_vmstate -
122
+ delta / NANOSECONDS_PER_SECOND;
123
+ }
124
pl031_set_alarm(s);
125
return 0;
126
}
62
}
127
63
128
+static int pl031_tick_offset_post_load(void *opaque, int version_id)
64
+static inline bool isar_feature_aa64_doublelock(const ARMISARegisters *id)
129
+{
65
+{
130
+ PL031State *s = opaque;
66
+ return FIELD_SEX64(id->id_aa64dfr0, ID_AA64DFR0, DOUBLELOCK) >= 0;
131
+
132
+ s->tick_offset_migrated = true;
133
+ return 0;
134
+}
67
+}
135
+
68
+
136
+static bool pl031_tick_offset_needed(void *opaque)
69
/*
70
* Feature tests for "does this exist in either 32-bit or 64-bit?"
71
*/
72
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/target/arm/debug_helper.c
75
+++ b/target/arm/debug_helper.c
76
@@ -XXX,XX +XXX,XX @@ static bool aa32_generate_debug_exceptions(CPUARMState *env)
77
*/
78
bool arm_generate_debug_exceptions(CPUARMState *env)
79
{
80
- if (env->cp15.oslsr_el1 & 1) {
81
+ if ((env->cp15.oslsr_el1 & 1) || (env->cp15.osdlr_el1 & 1)) {
82
return false;
83
}
84
if (is_a64(env)) {
85
@@ -XXX,XX +XXX,XX @@ static void oslar_write(CPUARMState *env, const ARMCPRegInfo *ri,
86
env->cp15.oslsr_el1 = deposit32(env->cp15.oslsr_el1, 1, 1, oslock);
87
}
88
89
+static void osdlr_write(CPUARMState *env, const ARMCPRegInfo *ri,
90
+ uint64_t value)
137
+{
91
+{
138
+ PL031State *s = opaque;
92
+ ARMCPU *cpu = env_archcpu(env);
139
+
93
+ /*
140
+ return s->migrate_tick_offset;
94
+ * Only defined bit is bit 0 (DLK); if Feat_DoubleLock is not
95
+ * implemented this is RAZ/WI.
96
+ */
97
+ if (arm_feature(env, ARM_FEATURE_AARCH64)
98
+ ? cpu_isar_feature(aa64_doublelock, cpu)
99
+ : cpu_isar_feature(aa32_doublelock, cpu)) {
100
+ env->cp15.osdlr_el1 = value & 1;
101
+ }
141
+}
102
+}
142
+
103
+
143
+static const VMStateDescription vmstate_pl031_tick_offset = {
104
static const ARMCPRegInfo debug_cp_reginfo[] = {
144
+ .name = "pl031/tick-offset",
105
/*
145
+ .version_id = 1,
106
* DBGDRAR, DBGDSAR: always RAZ since we don't implement memory mapped
146
+ .minimum_version_id = 1,
107
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
147
+ .needed = pl031_tick_offset_needed,
108
{ .name = "OSDLR_EL1", .state = ARM_CP_STATE_BOTH,
148
+ .post_load = pl031_tick_offset_post_load,
109
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 3, .opc2 = 4,
149
+ .fields = (VMStateField[]) {
110
.access = PL1_RW, .accessfn = access_tdosa,
150
+ VMSTATE_UINT32(tick_offset, PL031State),
111
- .type = ARM_CP_NOP },
151
+ VMSTATE_END_OF_LIST()
112
+ .writefn = osdlr_write,
152
+ }
113
+ .fieldoffset = offsetof(CPUARMState, cp15.osdlr_el1) },
153
+};
114
/*
154
+
115
* Dummy DBGVCR: Linux wants to clear this on startup, but we don't
155
static const VMStateDescription vmstate_pl031 = {
116
* implement vector catch debug events yet.
156
.name = "pl031",
157
.version_id = 1,
158
.minimum_version_id = 1,
159
.pre_save = pl031_pre_save,
160
+ .pre_load = pl031_pre_load,
161
.post_load = pl031_post_load,
162
.fields = (VMStateField[]) {
163
VMSTATE_UINT32(tick_offset_vmstate, PL031State),
164
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_pl031 = {
165
VMSTATE_UINT32(im, PL031State),
166
VMSTATE_UINT32(is, PL031State),
167
VMSTATE_END_OF_LIST()
168
+ },
169
+ .subsections = (const VMStateDescription*[]) {
170
+ &vmstate_pl031_tick_offset,
171
+ NULL
172
}
173
};
174
175
+static Property pl031_properties[] = {
176
+ /*
177
+ * True to correctly migrate the tick offset of the RTC. False to
178
+ * obtain backward migration compatibility with older QEMU versions,
179
+ * at the expense of the guest RTC going backwards compared with the
180
+ * host RTC when the VM is saved/restored if using -rtc host.
181
+ * (Even if set to 'true' older QEMU can migrate forward to newer QEMU;
182
+ * 'false' also permits newer QEMU to migrate to older QEMU.)
183
+ */
184
+ DEFINE_PROP_BOOL("migrate-tick-offset",
185
+ PL031State, migrate_tick_offset, true),
186
+ DEFINE_PROP_END_OF_LIST()
187
+};
188
+
189
static void pl031_class_init(ObjectClass *klass, void *data)
190
{
191
DeviceClass *dc = DEVICE_CLASS(klass);
192
193
dc->vmsd = &vmstate_pl031;
194
+ dc->props = pl031_properties;
195
}
196
197
static const TypeInfo pl031_info = {
198
--
117
--
199
2.20.1
118
2.25.1
200
201
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
In commit 39a1fd25287f5d we fixed a bug in the handling of LPAE block
2
descriptors where we weren't correctly zeroing out some RES0 bits.
3
However this fix has a bug because the calculation of the mask is
4
done at the wrong width: in
5
descaddr &= ~(page_size - 1);
6
page_size is a target_ulong, so in the 'qemu-system-arm' binary it is
7
only 32 bits, and the effect is that we always zero out the top 32
8
bits of the calculated address. Fix the calculation by forcing the
9
mask to be calculated with the same type as descaddr.
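
A standalone illustration of the truncation (not QEMU code; the constants are invented):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t descaddr = 0x1234567000ULL;  /* block output address above 4GB */
    uint32_t page_size = 0x200000;        /* 2MB block; models a 32-bit target_ulong */

    /* mask is computed at 32 bits and only then widened, clearing bits [63:32] */
    uint64_t wrong = descaddr & ~(page_size - 1);
    /* widening the mask first preserves the high half of the address */
    uint64_t right = descaddr & ~(uint64_t)(page_size - 1);

    printf("wrong: 0x%llx\n", (unsigned long long)wrong);  /* 0x34400000 */
    printf("right: 0x%llx\n", (unsigned long long)right);  /* 0x1234400000 */
    return 0;
}
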
2
10
3
Both lqspi_read() and lqspi_load_cache() expect a 32-bit
11
This only affects 32-bit CPUs which support LPAE (e.g. cortex-a15)
4
aligned address.
12
when used on board models which put RAM or devices above the 4GB
13
mark and when the 'qemu-system-arm' executable is being used.
14
It was also masked in 7.0 by the main bug reported in
15
https://gitlab.com/qemu-project/qemu/-/issues/1078 where the
16
virt board incorrectly does not enable 'highmem' for 32-bit CPUs.
5
17
6
>From UG1085 datasheet [*] chapter on 'Quad-SPI Controller':
18
The workaround is to use 'qemu-system-aarch64' with the same
19
command line.
7
20
8
Transfer Size Limitations
21
Reported-by: He Zhe <zhe.he@windriver.com>
9
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Because of the 32-bit wide TX, RX, and generic FIFO, all
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
APB/AXI transfers must be an integer multiple of 4-bytes.
24
Message-id: 20220627134620.3190252-1-peter.maydell@linaro.org
12
Shorter transfers are not possible.
25
Fixes: 39a1fd25287f5de ("target/arm: Fix handling of LPAE block descriptors")
13
26
Cc: qemu-stable@nongnu.org
14
Set MemoryRegionOps.impl values to force 32-bit accesses,
15
this way we are sure we do not access the lqspi_buf[] array
16
out of bound.
17
18
[*] https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf
19
20
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
21
Tested-by: Francisco Iglesias <frasse.iglesias@gmail.com>
22
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
---
28
---
25
hw/ssi/xilinx_spips.c | 4 ++++
29
target/arm/ptw.c | 2 +-
26
1 file changed, 4 insertions(+)
30
1 file changed, 1 insertion(+), 1 deletion(-)
27
31
28
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
32
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
29
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
30
--- a/hw/ssi/xilinx_spips.c
34
--- a/target/arm/ptw.c
31
+++ b/hw/ssi/xilinx_spips.c
35
+++ b/target/arm/ptw.c
32
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps lqspi_ops = {
36
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
33
.read_with_attrs = lqspi_read,
37
* clear the lower bits here before ORing in the low vaddr bits.
34
.write_with_attrs = lqspi_write,
38
*/
35
.endianness = DEVICE_NATIVE_ENDIAN,
39
page_size = (1ULL << ((stride * (4 - level)) + 3));
36
+ .impl = {
40
- descaddr &= ~(page_size - 1);
37
+ .min_access_size = 4,
41
+ descaddr &= ~(hwaddr)(page_size - 1);
38
+ .max_access_size = 4,
42
descaddr |= (address & (page_size - 1));
39
+ },
43
/* Extract attributes from the descriptor */
40
.valid = {
44
attrs = extract64(descriptor, 2, 10)
41
.min_access_size = 1,
42
.max_access_size = 4
43
--
45
--
44
2.20.1
46
2.25.1
45
46
diff view generated by jsdifflib