First arm pullreq of the 8.0 series...

thanks
-- PMM

The following changes since commit ae2b87341b5ddb0dcb1b3f2d4f586ef18de75873:

  Merge tag 'pull-qapi-2022-12-14-v2' of https://repo.or.cz/qemu/armbru into staging (2022-12-14 22:42:14 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20221215

for you to fetch changes up to 4f3ebdc33618e7c163f769047859d6f34373e3af:

  target/arm: Restrict arm_cpu_exec_interrupt() to TCG accelerator (2022-12-15 11:18:20 +0000)

----------------------------------------------------------------
target-arm queue:
 * hw/arm/virt: Add properties to allow more granular
   configuration of use of highmem space
 * target/arm: Add Cortex-A55 CPU
 * hw/intc/arm_gicv3: Fix GICD_TYPER ITLinesNumber advertisement
 * Implement FEAT_EVT
 * Some 3-phase-reset conversions for Arm GIC, SMMU
 * hw/arm/boot: set initrd with #address-cells type in fdt
 * align user-mode exposed ID registers with Linux
 * hw/misc: Move some arm-related files from specific_ss into softmmu_ss
 * Restrict arm_cpu_exec_interrupt() to TCG accelerator

----------------------------------------------------------------
Gavin Shan (7):
      hw/arm/virt: Introduce virt_set_high_memmap() helper
      hw/arm/virt: Rename variable size to region_size in virt_set_high_memmap()
      hw/arm/virt: Introduce variable region_base in virt_set_high_memmap()
      hw/arm/virt: Introduce virt_get_high_memmap_enabled() helper
      hw/arm/virt: Improve high memory region address assignment
      hw/arm/virt: Add 'compact-highmem' property
      hw/arm/virt: Add properties to disable high memory regions

Luke Starrett (1):
      hw/intc/arm_gicv3: Fix GICD_TYPER ITLinesNumber advertisement

Mihai Carabas (1):
      hw/arm/virt: build SMBIOS 19 table

Peter Maydell (15):
      target/arm: Allow relevant HCR bits to be written for FEAT_EVT
      target/arm: Implement HCR_EL2.TTLBIS traps
      target/arm: Implement HCR_EL2.TTLBOS traps
      target/arm: Implement HCR_EL2.TICAB,TOCU traps
      target/arm: Implement HCR_EL2.TID4 traps
      target/arm: Report FEAT_EVT for TCG '-cpu max'
      hw/arm: Convert TYPE_ARM_SMMU to 3-phase reset
      hw/arm: Convert TYPE_ARM_SMMUV3 to 3-phase reset
      hw/intc: Convert TYPE_ARM_GIC_COMMON to 3-phase reset
      hw/intc: Convert TYPE_ARM_GIC_KVM to 3-phase reset
      hw/intc: Convert TYPE_ARM_GICV3_COMMON to 3-phase reset
      hw/intc: Convert TYPE_KVM_ARM_GICV3 to 3-phase reset
      hw/intc: Convert TYPE_ARM_GICV3_ITS_COMMON to 3-phase reset
      hw/intc: Convert TYPE_ARM_GICV3_ITS to 3-phase reset
      hw/intc: Convert TYPE_KVM_ARM_ITS to 3-phase reset

Philippe Mathieu-Daudé (1):
      target/arm: Restrict arm_cpu_exec_interrupt() to TCG accelerator

Schspa Shi (1):
      hw/arm/boot: set initrd with #address-cells type in fdt

Thomas Huth (1):
      hw/misc: Move some arm-related files from specific_ss into softmmu_ss

Timofey Kutergin (1):
      target/arm: Add Cortex-A55 CPU

Zhuojia Shen (1):
      target/arm: align exposed ID registers with Linux

 docs/system/arm/emulation.rst | 1 +
 docs/system/arm/virt.rst | 18 +++
 include/hw/arm/smmuv3.h | 2 +-
 include/hw/arm/virt.h | 2 +
 include/hw/misc/xlnx-zynqmp-apu-ctrl.h | 2 +-
 target/arm/cpu.h | 30 +++++
 target/arm/kvm-consts.h | 8 +-
 hw/arm/boot.c | 10 +-
 hw/arm/smmu-common.c | 7 +-
 hw/arm/smmuv3.c | 12 +-
 hw/arm/virt.c | 202 +++++++++++++++++++++++-----
 hw/intc/arm_gic_common.c | 7 +-
 hw/intc/arm_gic_kvm.c | 14 +-
 hw/intc/arm_gicv3_common.c | 7 +-
 hw/intc/arm_gicv3_dist.c | 4 +-
 hw/intc/arm_gicv3_its.c | 14 +-
 hw/intc/arm_gicv3_its_common.c | 7 +-
 hw/intc/arm_gicv3_its_kvm.c | 14 +-
 hw/intc/arm_gicv3_kvm.c | 14 +-
 hw/misc/imx6_src.c | 2 +-
 hw/misc/iotkit-sysctl.c | 1 -
 target/arm/cpu.c | 5 +-
 target/arm/cpu64.c | 70 ++++++++++
 target/arm/cpu_tcg.c | 1 +
 target/arm/helper.c | 231 ++++++++++++++++++++++++---------
 hw/misc/meson.build | 11 +-
 26 files changed, 538 insertions(+), 158 deletions(-)

From: Gavin Shan <gshan@redhat.com>

This introduces virt_set_high_memmap() helper. The logic of high
memory region address assignment is moved to the helper. The intention
is to make the subsequent optimization for high memory region address
assignment easier.

No functional change intended.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
Message-id: 20221029224307.138822-2-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 74 ++++++++++++++++++++++++++++-----------------------
 1 file changed, 41 insertions(+), 33 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static uint64_t virt_cpu_mp_affinity(VirtMachineState *vms, int idx)
     return arm_cpu_mp_affinity(idx, clustersz);
 }
 
+static void virt_set_high_memmap(VirtMachineState *vms,
+                                 hwaddr base, int pa_bits)
+{
+    int i;
+
+    for (i = VIRT_LOWMEMMAP_LAST; i < ARRAY_SIZE(extended_memmap); i++) {
+        hwaddr size = extended_memmap[i].size;
+        bool fits;
+
+        base = ROUND_UP(base, size);
+        vms->memmap[i].base = base;
+        vms->memmap[i].size = size;
+
+        /*
+         * Check each device to see if they fit in the PA space,
+         * moving highest_gpa as we go.
+         *
+         * For each device that doesn't fit, disable it.
+         */
+        fits = (base + size) <= BIT_ULL(pa_bits);
+        if (fits) {
+            vms->highest_gpa = base + size - 1;
+        }
+
+        switch (i) {
+        case VIRT_HIGH_GIC_REDIST2:
+            vms->highmem_redists &= fits;
+            break;
+        case VIRT_HIGH_PCIE_ECAM:
+            vms->highmem_ecam &= fits;
+            break;
+        case VIRT_HIGH_PCIE_MMIO:
+            vms->highmem_mmio &= fits;
+            break;
+        }
+
+        base += size;
+    }
+}
+
 static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
 {
     MachineState *ms = MACHINE(vms);
@@ -XXX,XX +XXX,XX @@ static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
     /* We know for sure that at least the memory fits in the PA space */
     vms->highest_gpa = memtop - 1;
 
-    for (i = VIRT_LOWMEMMAP_LAST; i < ARRAY_SIZE(extended_memmap); i++) {
-        hwaddr size = extended_memmap[i].size;
-        bool fits;
-
-        base = ROUND_UP(base, size);
-        vms->memmap[i].base = base;
-        vms->memmap[i].size = size;
-
-        /*
-         * Check each device to see if they fit in the PA space,
-         * moving highest_gpa as we go.
-         *
-         * For each device that doesn't fit, disable it.
-         */
-        fits = (base + size) <= BIT_ULL(pa_bits);
-        if (fits) {
-            vms->highest_gpa = base + size - 1;
-        }
-
-        switch (i) {
-        case VIRT_HIGH_GIC_REDIST2:
-            vms->highmem_redists &= fits;
-            break;
-        case VIRT_HIGH_PCIE_ECAM:
-            vms->highmem_ecam &= fits;
-            break;
-        case VIRT_HIGH_PCIE_MMIO:
-            vms->highmem_mmio &= fits;
-            break;
-        }
-
-        base += size;
-    }
+    virt_set_high_memmap(vms, base, pa_bits);
 
     if (device_memory_size > 0) {
         ms->device_memory = g_malloc0(sizeof(*ms->device_memory));
-- 
2.25.1

From: Gavin Shan <gshan@redhat.com>

This renames variable 'size' to 'region_size' in virt_set_high_memmap().
Its counterpart ('region_base') will be introduced in the next patch.

No functional change intended.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
Message-id: 20221029224307.138822-3-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static uint64_t virt_cpu_mp_affinity(VirtMachineState *vms, int idx)
 static void virt_set_high_memmap(VirtMachineState *vms,
                                  hwaddr base, int pa_bits)
 {
+    hwaddr region_size;
+    bool fits;
     int i;
 
     for (i = VIRT_LOWMEMMAP_LAST; i < ARRAY_SIZE(extended_memmap); i++) {
-        hwaddr size = extended_memmap[i].size;
-        bool fits;
+        region_size = extended_memmap[i].size;
 
-        base = ROUND_UP(base, size);
+        base = ROUND_UP(base, region_size);
         vms->memmap[i].base = base;
-        vms->memmap[i].size = size;
+        vms->memmap[i].size = region_size;
 
         /*
          * Check each device to see if they fit in the PA space,
@@ -XXX,XX +XXX,XX @@ static void virt_set_high_memmap(VirtMachineState *vms,
          *
          * For each device that doesn't fit, disable it.
          */
-        fits = (base + size) <= BIT_ULL(pa_bits);
+        fits = (base + region_size) <= BIT_ULL(pa_bits);
         if (fits) {
-            vms->highest_gpa = base + size - 1;
+            vms->highest_gpa = base + region_size - 1;
         }
 
         switch (i) {
@@ -XXX,XX +XXX,XX @@ static void virt_set_high_memmap(VirtMachineState *vms,
             break;
         }
 
-        base += size;
+        base += region_size;
     }
 }
 
-- 
2.25.1

From: Gavin Shan <gshan@redhat.com>

This introduces variable 'region_base' for the base address of the
specific high memory region. It's the preparatory work to optimize
high memory region address assignment.

No functional change intended.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
Message-id: 20221029224307.138822-4-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static uint64_t virt_cpu_mp_affinity(VirtMachineState *vms, int idx)
 static void virt_set_high_memmap(VirtMachineState *vms,
                                  hwaddr base, int pa_bits)
 {
-    hwaddr region_size;
+    hwaddr region_base, region_size;
     bool fits;
     int i;
 
     for (i = VIRT_LOWMEMMAP_LAST; i < ARRAY_SIZE(extended_memmap); i++) {
+        region_base = ROUND_UP(base, extended_memmap[i].size);
         region_size = extended_memmap[i].size;
 
-        base = ROUND_UP(base, region_size);
-        vms->memmap[i].base = base;
+        vms->memmap[i].base = region_base;
         vms->memmap[i].size = region_size;
 
         /*
@@ -XXX,XX +XXX,XX @@ static void virt_set_high_memmap(VirtMachineState *vms,
          *
          * For each device that doesn't fit, disable it.
          */
-        fits = (base + region_size) <= BIT_ULL(pa_bits);
+        fits = (region_base + region_size) <= BIT_ULL(pa_bits);
         if (fits) {
-            vms->highest_gpa = base + region_size - 1;
+            vms->highest_gpa = region_base + region_size - 1;
         }
 
         switch (i) {
@@ -XXX,XX +XXX,XX @@ static void virt_set_high_memmap(VirtMachineState *vms,
             break;
         }
 
-        base += region_size;
+        base = region_base + region_size;
     }
 }
 
-- 
2.25.1

From: Gavin Shan <gshan@redhat.com>

This introduces virt_get_high_memmap_enabled() helper, which returns
the pointer to vms->highmem_{redists, ecam, mmio}. The pointer will
be used in the subsequent patches.

No functional change intended.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
Message-id: 20221029224307.138822-5-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 32 +++++++++++++++++++-------------
 1 file changed, 19 insertions(+), 13 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static uint64_t virt_cpu_mp_affinity(VirtMachineState *vms, int idx)
     return arm_cpu_mp_affinity(idx, clustersz);
 }
 
+static inline bool *virt_get_high_memmap_enabled(VirtMachineState *vms,
+                                                 int index)
+{
+    bool *enabled_array[] = {
+        &vms->highmem_redists,
+        &vms->highmem_ecam,
+        &vms->highmem_mmio,
+    };
+
+    assert(ARRAY_SIZE(extended_memmap) - VIRT_LOWMEMMAP_LAST ==
+           ARRAY_SIZE(enabled_array));
+    assert(index - VIRT_LOWMEMMAP_LAST < ARRAY_SIZE(enabled_array));
+
+    return enabled_array[index - VIRT_LOWMEMMAP_LAST];
+}
+
 static void virt_set_high_memmap(VirtMachineState *vms,
                                  hwaddr base, int pa_bits)
 {
     hwaddr region_base, region_size;
-    bool fits;
+    bool *region_enabled, fits;
     int i;
 
     for (i = VIRT_LOWMEMMAP_LAST; i < ARRAY_SIZE(extended_memmap); i++) {
+        region_enabled = virt_get_high_memmap_enabled(vms, i);
         region_base = ROUND_UP(base, extended_memmap[i].size);
         region_size = extended_memmap[i].size;
 
@@ -XXX,XX +XXX,XX @@ static void virt_set_high_memmap(VirtMachineState *vms,
             vms->highest_gpa = region_base + region_size - 1;
         }
 
-        switch (i) {
-        case VIRT_HIGH_GIC_REDIST2:
-            vms->highmem_redists &= fits;
-            break;
-        case VIRT_HIGH_PCIE_ECAM:
-            vms->highmem_ecam &= fits;
-            break;
-        case VIRT_HIGH_PCIE_MMIO:
-            vms->highmem_mmio &= fits;
-            break;
-        }
-
+        *region_enabled &= fits;
         base = region_base + region_size;
     }
 }
-- 
2.25.1

From: Gavin Shan <gshan@redhat.com>

There are three high memory regions, which are VIRT_HIGH_REDIST2,
VIRT_HIGH_PCIE_ECAM and VIRT_HIGH_PCIE_MMIO. Their base addresses
float above the highest RAM address. However, they can be disabled
in several cases.

(1) One specific high memory region is disabled by code by toggling
    vms->highmem_{redists, ecam, mmio}.

(2) The VIRT_HIGH_PCIE_ECAM region is disabled on machine types that
    are 'virt-2.12' or earlier.

(3) The VIRT_HIGH_PCIE_ECAM region is disabled when firmware is loaded
    on a 32-bit system.

(4) One specific high memory region is disabled when it breaks the
    PA space limit.

The current implementation of virt_set_{memmap, high_memmap}() isn't
optimized because the high memory region's PA space is always reserved,
regardless of the actual state of the corresponding
vms->highmem_{redists, ecam, mmio} flag. In the code, 'base' and
'vms->highest_gpa' are always increased for cases (1), (2) and (3).
That is unnecessary, since the PA space assigned to a disabled high
memory region won't be used afterwards.

Improve the address assignment for those three high memory regions by
skipping the address assignment for a specific high memory region if
it has been disabled in cases (1), (2) and (3). The memory layout may
be changed after the improvement is applied, which leads to potential
migration breakage. So 'vms->highmem_compact' is added to control
whether the improvement should be applied. For now, 'vms->highmem_compact'
is set to false, meaning that we don't have a memory layout change until
it becomes configurable through the 'compact-highmem' property in the
next patch.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
Message-id: 20221029224307.138822-6-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/virt.h | 1 +
 hw/arm/virt.c | 15 ++++++++++-----
 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/virt.h
+++ b/include/hw/arm/virt.h
@@ -XXX,XX +XXX,XX @@ struct VirtMachineState {
     PFlashCFI01 *flash[2];
     bool secure;
     bool highmem;
+    bool highmem_compact;
     bool highmem_ecam;
     bool highmem_mmio;
     bool highmem_redists;
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void virt_set_high_memmap(VirtMachineState *vms,
         vms->memmap[i].size = region_size;
 
         /*
-         * Check each device to see if they fit in the PA space,
-         * moving highest_gpa as we go.
+         * Check each device to see if it fits in the PA space,
+         * moving highest_gpa as we go. For compatibility, move
+         * highest_gpa for disabled fitting devices as well, if
+         * the compact layout has been disabled.
          *
          * For each device that doesn't fit, disable it.
          */
         fits = (region_base + region_size) <= BIT_ULL(pa_bits);
-        if (fits) {
-            vms->highest_gpa = region_base + region_size - 1;
+        *region_enabled &= fits;
+        if (vms->highmem_compact && !*region_enabled) {
+            continue;
         }
 
-        *region_enabled &= fits;
         base = region_base + region_size;
+        if (fits) {
+            vms->highest_gpa = base - 1;
+        }
     }
 }
 
-- 
2.25.1

From: Gavin Shan <gshan@redhat.com>

After the improvement to high memory region address assignment is
applied, the memory layout can change, introducing possible migration
breakage. For example, the VIRT_HIGH_PCIE_MMIO memory region is
disabled or enabled depending on whether the optimization is applied,
with the following configuration. The configuration is only achievable
by modifying the source code until more properties are added to allow
users to selectively disable those high memory regions.

  pa_bits              = 40;
  vms->highmem_redists = false;
  vms->highmem_ecam    = false;
  vms->highmem_mmio    = true;

  # qemu-system-aarch64 -accel kvm -cpu host \
      -machine virt-7.2,compact-highmem={on, off} \
      -m 4G,maxmem=511G -monitor stdio

  Region             compact-highmem=off         compact-highmem=on
  ----------------------------------------------------------------
  MEM                [1GB          512GB]        [1GB          512GB]
  HIGH_GIC_REDISTS2  [512GB        512GB+64MB]   [disabled]
  HIGH_PCIE_ECAM     [512GB+256MB  512GB+512MB]  [disabled]
  HIGH_PCIE_MMIO     [disabled]                  [512GB        1TB]

In order to keep backwards compatibility, we need to disable the
optimization on machine types that are virt-7.1 or earlier. It
means the optimization is enabled by default from virt-7.2. Besides,
the 'compact-highmem' property is added so that the optimization can be
explicitly enabled or disabled on all machine types by users.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
Message-id: 20221029224307.138822-7-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/virt.rst | 4 ++++
 include/hw/arm/virt.h | 1 +
 hw/arm/virt.c | 32 ++++++++++++++++++++++++++++++++
 3 files changed, 37 insertions(+)

diff --git a/docs/system/arm/virt.rst b/docs/system/arm/virt.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/virt.rst
+++ b/docs/system/arm/virt.rst
@@ -XXX,XX +XXX,XX @@ highmem
   address space above 32 bits. The default is ``on`` for machine types
   later than ``virt-2.12``.
 
+compact-highmem
+  Set ``on``/``off`` to enable/disable the compact layout for high memory regions.
+  The default is ``on`` for machine types later than ``virt-7.2``.
+
 gic-version
   Specify the version of the Generic Interrupt Controller (GIC) to provide.
   Valid values are:
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/virt.h
+++ b/include/hw/arm/virt.h
@@ -XXX,XX +XXX,XX @@ struct VirtMachineClass {
     bool no_pmu;
     bool claim_edge_triggered_timers;
     bool smbios_old_sys_ver;
+    bool no_highmem_compact;
     bool no_highmem_ecam;
     bool no_ged; /* Machines < 4.2 have no support for ACPI GED device */
     bool kvm_no_adjvtime;
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static const MemMapEntry base_memmap[] = {
  * Note the extended_memmap is sized so that it eventually also includes the
  * base_memmap entries (VIRT_HIGH_GIC_REDIST2 index is greater than the last
  * index of base_memmap).
+ *
+ * The memory map for these Highmem IO Regions can be in legacy or compact
+ * layout, depending on 'compact-highmem' property. With legacy layout, the
+ * PA space for one specific region is always reserved, even if the region
+ * has been disabled or doesn't fit into the PA space. However, the PA space
+ * for the region won't be reserved in these circumstances with compact layout.
  */
 static MemMapEntry extended_memmap[] = {
     /* Additional 64 MB redist region (can contain up to 512 redistributors) */
@@ -XXX,XX +XXX,XX @@ static void virt_set_highmem(Object *obj, bool value, Error **errp)
     vms->highmem = value;
 }
 
+static bool virt_get_compact_highmem(Object *obj, Error **errp)
+{
+    VirtMachineState *vms = VIRT_MACHINE(obj);
+
+    return vms->highmem_compact;
+}
+
+static void virt_set_compact_highmem(Object *obj, bool value, Error **errp)
+{
+    VirtMachineState *vms = VIRT_MACHINE(obj);
+
+    vms->highmem_compact = value;
+}
+
 static bool virt_get_its(Object *obj, Error **errp)
 {
     VirtMachineState *vms = VIRT_MACHINE(obj);
@@ -XXX,XX +XXX,XX @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
                                           "Set on/off to enable/disable using "
                                           "physical address space above 32 bits");
 
+    object_class_property_add_bool(oc, "compact-highmem",
+                                   virt_get_compact_highmem,
+                                   virt_set_compact_highmem);
+    object_class_property_set_description(oc, "compact-highmem",
+                                          "Set on/off to enable/disable compact "
+                                          "layout for high memory regions");
+
     object_class_property_add_str(oc, "gic-version", virt_get_gic_version,
                                   virt_set_gic_version);
     object_class_property_set_description(oc, "gic-version",
@@ -XXX,XX +XXX,XX @@ static void virt_instance_init(Object *obj)
 
     /* High memory is enabled by default */
     vms->highmem = true;
+    vms->highmem_compact = !vmc->no_highmem_compact;
     vms->gic_version = VIRT_GIC_VERSION_NOSEL;
 
     vms->highmem_ecam = !vmc->no_highmem_ecam;
@@ -XXX,XX +XXX,XX @@ DEFINE_VIRT_MACHINE_AS_LATEST(7, 2)
 
 static void virt_machine_7_1_options(MachineClass *mc)
 {
+    VirtMachineClass *vmc = VIRT_MACHINE_CLASS(OBJECT_CLASS(mc));
+
     virt_machine_7_2_options(mc);
     compat_props_add(mc->compat_props, hw_compat_7_1, hw_compat_7_1_len);
+    /* Compact layout for high memory regions was introduced with 7.2 */
+    vmc->no_highmem_compact = true;
 }
 DEFINE_VIRT_MACHINE(7, 1)
 
-- 
2.25.1

From: Gavin Shan <gshan@redhat.com>

The 3 high memory regions are usually enabled by default, but they may
not be used. For example, VIRT_HIGH_GIC_REDIST2 isn't needed by GICv2.
This leads to waste in the PA space.

Add properties ("highmem-redists", "highmem-ecam", "highmem-mmio") to
allow users to selectively disable them if needed. After that, the high
memory region for the GICv3 or GICv4 redistributor can be disabled by
the user, so the maximum number of supported CPUs needs to be calculated
based on 'vms->highmem_redists'. The follow-up error message is also
improved to indicate whether the high memory region for GICv3 and GICv4
has been enabled or not.

Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20221029224307.138822-8-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/virt.rst | 13 +++++++
 hw/arm/virt.c | 75 ++++++++++++++++++++++++++++++++++++++--
 2 files changed, 86 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/virt.rst b/docs/system/arm/virt.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/virt.rst
+++ b/docs/system/arm/virt.rst
@@ -XXX,XX +XXX,XX @@ compact-highmem
   Set ``on``/``off`` to enable/disable the compact layout for high memory regions.
   The default is ``on`` for machine types later than ``virt-7.2``.
 
+highmem-redists
+  Set ``on``/``off`` to enable/disable the high memory region for GICv3 or
+  GICv4 redistributor. The default is ``on``. Setting this to ``off`` will
+  limit the maximum number of CPUs when GICv3 or GICv4 is used.
+
+highmem-ecam
+  Set ``on``/``off`` to enable/disable the high memory region for PCI ECAM.
+  The default is ``on`` for machine types later than ``virt-3.0``.
+
+highmem-mmio
+  Set ``on``/``off`` to enable/disable the high memory region for PCI MMIO.
+  The default is ``on``.
+
 gic-version
   Specify the version of the Generic Interrupt Controller (GIC) to provide.
   Valid values are:
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
     if (vms->gic_version == VIRT_GIC_VERSION_2) {
         virt_max_cpus = GIC_NCPU;
     } else {
-        virt_max_cpus = virt_redist_capacity(vms, VIRT_GIC_REDIST) +
-            virt_redist_capacity(vms, VIRT_HIGH_GIC_REDIST2);
+        virt_max_cpus = virt_redist_capacity(vms, VIRT_GIC_REDIST);
+        if (vms->highmem_redists) {
+            virt_max_cpus += virt_redist_capacity(vms, VIRT_HIGH_GIC_REDIST2);
+        }
     }
 
     if (max_cpus > virt_max_cpus) {
         error_report("Number of SMP CPUs requested (%d) exceeds max CPUs "
                      "supported by machine 'mach-virt' (%d)",
                      max_cpus, virt_max_cpus);
+        if (vms->gic_version != VIRT_GIC_VERSION_2 && !vms->highmem_redists) {
+            error_printf("Try 'highmem-redists=on' for more CPUs\n");
+        }
+
         exit(1);
     }
 
@@ -XXX,XX +XXX,XX @@ static void virt_set_compact_highmem(Object *obj, bool value, Error **errp)
     vms->highmem_compact = value;
 }
 
+static bool virt_get_highmem_redists(Object *obj, Error **errp)
+{
+    VirtMachineState *vms = VIRT_MACHINE(obj);
900
-
901
- /* Switch to target security state -- must do this before writing SPSEL */
902
- switch_v7m_security_state(env, targets_secure);
903
- write_v7m_control_spsel(env, 0);
904
- arm_clear_exclusive(env);
905
- /* Clear SFPA and FPCA (has no effect if no FPU) */
906
- env->v7m.control[M_REG_S] &=
907
- ~(R_V7M_CONTROL_FPCA_MASK | R_V7M_CONTROL_SFPA_MASK);
908
- /* Clear IT bits */
909
- env->condexec_bits = 0;
910
- env->regs[14] = lr;
911
- env->regs[15] = addr & 0xfffffffe;
912
- env->thumb = addr & 1;
913
-}
914
-
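
The 'lr' value manipulated above is the EXC_RETURN magic value; the patch uses
the R_V7M_EXCRET_*_MASK constants for it, but as a rough guide the bits touched
here are (illustrative defines, not the QEMU definitions):

    /* EXC_RETURN bit assignments (v8M) as used by the code above */
    #define EXCRET_ES     (1u << 0)  /* exception targets Secure state */
    #define EXCRET_SPSEL  (1u << 2)  /* return stack is PSP */
    #define EXCRET_MODE   (1u << 3)  /* return to Thread mode */
    #define EXCRET_FTYPE  (1u << 4)  /* standard (non-FP) frame */
    #define EXCRET_DCRS   (1u << 5)  /* default callee-register stacking */
    #define EXCRET_S      (1u << 6)  /* background state was Secure */
    /* bits [31:24] read as 0xff (the prefix written by deposit32() above) */
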
915
-static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
916
- bool apply_splim)
917
-{
918
- /*
919
- * Like the pseudocode UpdateFPCCR: save state in FPCAR and FPCCR
920
- * that we will need later in order to do lazy FP reg stacking.
921
- */
922
- bool is_secure = env->v7m.secure;
923
- void *nvic = env->nvic;
924
- /*
925
- * Some bits are unbanked and live always in fpccr[M_REG_S]; some bits
926
- * are banked and we want to update the bit in the bank for the
927
- * current security state; and in one case we want to specifically
928
- * update the NS banked version of a bit even if we are secure.
929
- */
930
- uint32_t *fpccr_s = &env->v7m.fpccr[M_REG_S];
931
- uint32_t *fpccr_ns = &env->v7m.fpccr[M_REG_NS];
932
- uint32_t *fpccr = &env->v7m.fpccr[is_secure];
933
- bool hfrdy, bfrdy, mmrdy, ns_ufrdy, s_ufrdy, sfrdy, monrdy;
934
-
935
- env->v7m.fpcar[is_secure] = frameptr & ~0x7;
936
-
937
- if (apply_splim && arm_feature(env, ARM_FEATURE_V8)) {
938
- bool splimviol;
939
- uint32_t splim = v7m_sp_limit(env);
940
- bool ign = armv7m_nvic_neg_prio_requested(nvic, is_secure) &&
941
- (env->v7m.ccr[is_secure] & R_V7M_CCR_STKOFHFNMIGN_MASK);
942
-
943
- splimviol = !ign && frameptr < splim;
944
- *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, SPLIMVIOL, splimviol);
945
- }
946
-
947
- *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, LSPACT, 1);
948
-
949
- *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, S, is_secure);
950
-
951
- *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, USER, arm_current_el(env) == 0);
952
-
953
- *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, THREAD,
954
- !arm_v7m_is_handler_mode(env));
955
-
956
- hfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_HARD, false);
957
- *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, HFRDY, hfrdy);
958
-
959
- bfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_BUS, false);
960
- *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, BFRDY, bfrdy);
961
-
962
- mmrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_MEM, is_secure);
963
- *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, MMRDY, mmrdy);
964
-
965
- ns_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, false);
966
- *fpccr_ns = FIELD_DP32(*fpccr_ns, V7M_FPCCR, UFRDY, ns_ufrdy);
967
-
968
- monrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_DEBUG, false);
969
- *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, MONRDY, monrdy);
970
-
971
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
972
- s_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, true);
973
- *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, UFRDY, s_ufrdy);
974
-
975
- sfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_SECURE, false);
976
- *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, SFRDY, sfrdy);
977
- }
978
-}
979
-
980
-void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
981
-{
982
- /* fptr is the value of Rn, the frame pointer we store the FP regs to */
983
- bool s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
984
- bool lspact = env->v7m.fpccr[s] & R_V7M_FPCCR_LSPACT_MASK;
985
-
986
- assert(env->v7m.secure);
987
-
988
- if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
989
- return;
990
- }
991
-
992
- /* Check access to the coprocessor is permitted */
993
- if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
994
- raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
995
- }
996
-
997
- if (lspact) {
998
- /* LSPACT should not be active when there is active FP state */
999
- raise_exception_ra(env, EXCP_LSERR, 0, 1, GETPC());
1000
- }
1001
-
1002
- if (fptr & 7) {
1003
- raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
1004
- }
1005
-
1006
- /*
1007
- * Note that we do not use v7m_stack_write() here, because the
1008
- * accesses should not set the FSR bits for stacking errors if they
1009
- * fail. (In pseudocode terms, they are AccType_NORMAL, not AccType_STACK
1010
- * or AccType_LAZYFP). Faults in cpu_stl_data() will throw exceptions
1011
- * and longjmp out.
1012
- */
1013
- if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
1014
- bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
1015
- int i;
1016
-
1017
- for (i = 0; i < (ts ? 32 : 16); i += 2) {
1018
- uint64_t dn = *aa32_vfp_dreg(env, i / 2);
1019
- uint32_t faddr = fptr + 4 * i;
1020
- uint32_t slo = extract64(dn, 0, 32);
1021
- uint32_t shi = extract64(dn, 32, 32);
1022
-
1023
- if (i >= 16) {
1024
- faddr += 8; /* skip the slot for the FPSCR */
1025
- }
1026
- cpu_stl_data(env, faddr, slo);
1027
- cpu_stl_data(env, faddr + 4, shi);
1028
- }
1029
- cpu_stl_data(env, fptr + 0x40, vfp_get_fpscr(env));
1030
-
1031
- /*
1032
- * If TS is 0 then s0 to s15 and FPSCR are UNKNOWN; we choose to
1033
- * leave them unchanged, matching our choice in v7m_preserve_fp_state.
1034
- */
1035
- if (ts) {
1036
- for (i = 0; i < 32; i += 2) {
1037
- *aa32_vfp_dreg(env, i / 2) = 0;
1038
- }
1039
- vfp_set_fpscr(env, 0);
1040
- }
1041
- } else {
1042
- v7m_update_fpccr(env, fptr, false);
1043
- }
1044
-
1045
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
1046
-}
1047
-
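
The save area VLSTM writes (and VLLDM below reads back) follows the extended
frame layout: s0-s15, FPSCR at +0x40, and s16-s31 above that when FPCCR.TS is
set. An illustrative sketch of the offsets used above:

    #include <stdint.h>

    /* FP register image written at Rn by VLSTM (names illustrative) */
    struct v8m_fp_save_area {
        uint32_t s0_to_s15[16];   /* +0x00 .. +0x3c */
        uint32_t fpscr;           /* +0x40 */
        uint32_t reserved;        /* +0x44 */
        uint32_t s16_to_s31[16];  /* +0x48 .. +0x84, only if FPCCR.TS */
    };
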
1048
-void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
1049
-{
1050
- /* fptr is the value of Rn, the frame pointer we load the FP regs from */
1051
- assert(env->v7m.secure);
1052
-
1053
- if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
1054
- return;
1055
- }
1056
-
1057
- /* Check access to the coprocessor is permitted */
1058
- if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
1059
- raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
1060
- }
1061
-
1062
- if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
1063
- /* State in FP is still valid */
1064
- env->v7m.fpccr[M_REG_S] &= ~R_V7M_FPCCR_LSPACT_MASK;
1065
- } else {
1066
- bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
1067
- int i;
1068
- uint32_t fpscr;
1069
-
1070
- if (fptr & 7) {
1071
- raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
1072
- }
1073
-
1074
- for (i = 0; i < (ts ? 32 : 16); i += 2) {
1075
- uint32_t slo, shi;
1076
- uint64_t dn;
1077
- uint32_t faddr = fptr + 4 * i;
1078
-
1079
- if (i >= 16) {
1080
- faddr += 8; /* skip the slot for the FPSCR */
1081
- }
1082
-
1083
- slo = cpu_ldl_data(env, faddr);
1084
- shi = cpu_ldl_data(env, faddr + 4);
1085
-
1086
- dn = (uint64_t) shi << 32 | slo;
1087
- *aa32_vfp_dreg(env, i / 2) = dn;
1088
- }
1089
- fpscr = cpu_ldl_data(env, fptr + 0x40);
1090
- vfp_set_fpscr(env, fpscr);
1091
- }
1092
-
1093
- env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
1094
-}
1095
-
1096
-static bool v7m_push_stack(ARMCPU *cpu)
1097
-{
1098
- /*
1099
- * Do the "set up stack frame" part of exception entry,
1100
- * similar to pseudocode PushStack().
1101
- * Return true if we generate a derived exception (and so
1102
- * should ignore further stack faults trying to process
1103
- * that derived exception.)
1104
- */
1105
- bool stacked_ok = true, limitviol = false;
1106
- CPUARMState *env = &cpu->env;
1107
- uint32_t xpsr = xpsr_read(env);
1108
- uint32_t frameptr = env->regs[13];
1109
- ARMMMUIdx mmu_idx = arm_mmu_idx(env);
1110
- uint32_t framesize;
1111
- bool nsacr_cp10 = extract32(env->v7m.nsacr, 10, 1);
1112
-
1113
- if ((env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) &&
1114
- (env->v7m.secure || nsacr_cp10)) {
1115
- if (env->v7m.secure &&
1116
- env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK) {
1117
- framesize = 0xa8;
1118
- } else {
1119
- framesize = 0x68;
1120
- }
1121
- } else {
1122
- framesize = 0x20;
1123
- }
1124
-
1125
- /* Align stack pointer if the guest wants that */
1126
- if ((frameptr & 4) &&
1127
- (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKALIGN_MASK)) {
1128
- frameptr -= 4;
1129
- xpsr |= XPSR_SPREALIGN;
1130
- }
1131
-
1132
- xpsr &= ~XPSR_SFPA;
1133
- if (env->v7m.secure &&
1134
- (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
1135
- xpsr |= XPSR_SFPA;
1136
- }
1137
-
1138
- frameptr -= framesize;
1139
-
1140
- if (arm_feature(env, ARM_FEATURE_V8)) {
1141
- uint32_t limit = v7m_sp_limit(env);
1142
-
1143
- if (frameptr < limit) {
1144
- /*
1145
- * Stack limit failure: set SP to the limit value, and generate
1146
- * STKOF UsageFault. Stack pushes below the limit must not be
1147
- * performed. It is IMPDEF whether pushes above the limit are
1148
- * performed; we choose not to.
1149
- */
1150
- qemu_log_mask(CPU_LOG_INT,
1151
- "...STKOF during stacking\n");
1152
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
1153
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
1154
- env->v7m.secure);
1155
- env->regs[13] = limit;
1156
- /*
1157
- * We won't try to perform any further memory accesses but
1158
- * we must continue through the following code to check for
1159
- * permission faults during FPU state preservation, and we
1160
- * must update FPCCR if lazy stacking is enabled.
1161
- */
1162
- limitviol = true;
1163
- stacked_ok = false;
1164
- }
1165
- }
1166
-
1167
- /*
1168
- * Write as much of the stack frame as we can. If we fail a stack
1169
- * write this will result in a derived exception being pended
1170
- * (which may be taken in preference to the one we started with
1171
- * if it has higher priority).
1172
- */
1173
- stacked_ok = stacked_ok &&
1174
- v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, STACK_NORMAL) &&
1175
- v7m_stack_write(cpu, frameptr + 4, env->regs[1],
1176
- mmu_idx, STACK_NORMAL) &&
1177
- v7m_stack_write(cpu, frameptr + 8, env->regs[2],
1178
- mmu_idx, STACK_NORMAL) &&
1179
- v7m_stack_write(cpu, frameptr + 12, env->regs[3],
1180
- mmu_idx, STACK_NORMAL) &&
1181
- v7m_stack_write(cpu, frameptr + 16, env->regs[12],
1182
- mmu_idx, STACK_NORMAL) &&
1183
- v7m_stack_write(cpu, frameptr + 20, env->regs[14],
1184
- mmu_idx, STACK_NORMAL) &&
1185
- v7m_stack_write(cpu, frameptr + 24, env->regs[15],
1186
- mmu_idx, STACK_NORMAL) &&
1187
- v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, STACK_NORMAL);
1188
-
1189
- if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) {
1190
- /* FPU is active, try to save its registers */
1191
- bool fpccr_s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
1192
- bool lspact = env->v7m.fpccr[fpccr_s] & R_V7M_FPCCR_LSPACT_MASK;
1193
-
1194
- if (lspact && arm_feature(env, ARM_FEATURE_M_SECURITY)) {
1195
- qemu_log_mask(CPU_LOG_INT,
1196
- "...SecureFault because LSPACT and FPCA both set\n");
1197
- env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
1198
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1199
- } else if (!env->v7m.secure && !nsacr_cp10) {
1200
- qemu_log_mask(CPU_LOG_INT,
1201
- "...Secure UsageFault with CFSR.NOCP because "
1202
- "NSACR.CP10 prevents stacking FP regs\n");
1203
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
1204
- env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
1205
- } else {
1206
- if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
1207
- /* Lazy stacking disabled, save registers now */
1208
- int i;
1209
- bool cpacr_pass = v7m_cpacr_pass(env, env->v7m.secure,
1210
- arm_current_el(env) != 0);
1211
-
1212
- if (stacked_ok && !cpacr_pass) {
1213
- /*
1214
- * Take UsageFault if CPACR forbids access. The pseudocode
1215
- * here does a full CheckCPEnabled() but we know the NSACR
1216
- * check can never fail as we have already handled that.
1217
- */
1218
- qemu_log_mask(CPU_LOG_INT,
1219
- "...UsageFault with CFSR.NOCP because "
1220
- "CPACR.CP10 prevents stacking FP regs\n");
1221
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
1222
- env->v7m.secure);
1223
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
1224
- stacked_ok = false;
1225
- }
1226
-
1227
- for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
1228
- uint64_t dn = *aa32_vfp_dreg(env, i / 2);
1229
- uint32_t faddr = frameptr + 0x20 + 4 * i;
1230
- uint32_t slo = extract64(dn, 0, 32);
1231
- uint32_t shi = extract64(dn, 32, 32);
1232
-
1233
- if (i >= 16) {
1234
- faddr += 8; /* skip the slot for the FPSCR */
1235
- }
1236
- stacked_ok = stacked_ok &&
1237
- v7m_stack_write(cpu, faddr, slo,
1238
- mmu_idx, STACK_NORMAL) &&
1239
- v7m_stack_write(cpu, faddr + 4, shi,
1240
- mmu_idx, STACK_NORMAL);
1241
- }
1242
- stacked_ok = stacked_ok &&
1243
- v7m_stack_write(cpu, frameptr + 0x60,
1244
- vfp_get_fpscr(env), mmu_idx, STACK_NORMAL);
1245
- if (cpacr_pass) {
1246
- for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
1247
- *aa32_vfp_dreg(env, i / 2) = 0;
1248
- }
1249
- vfp_set_fpscr(env, 0);
1250
- }
1251
- } else {
1252
- /* Lazy stacking enabled, save necessary info to stack later */
1253
- v7m_update_fpccr(env, frameptr + 0x20, true);
1254
- }
1255
- }
1256
- }
1257
-
1258
- /*
1259
- * If we broke a stack limit then SP was already updated earlier;
1260
- * otherwise we update SP regardless of whether any of the stack
1261
- * accesses failed or we took some other kind of fault.
1262
- */
1263
- if (!limitviol) {
1264
- env->regs[13] = frameptr;
1265
- }
1266
-
1267
- return !stacked_ok;
1268
-}
1269
-
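
For reference, the exception frame v7m_push_stack() builds is 0x20 bytes in
the standard case, growing to 0x68 with FP state and 0xa8 when the Secure
s16-s31 block is included (the 'framesize' computation above). The standard
portion, sketched with illustrative names:

    #include <stdint.h>

    /* Standard (integer) part of the v7M/v8M exception frame */
    struct v7m_basic_frame {
        uint32_t r0, r1, r2, r3;
        uint32_t r12;
        uint32_t lr;       /* r14 */
        uint32_t ret_addr; /* r15 at the point of exception */
        uint32_t xpsr;     /* includes the SPREALIGN padding flag */
    };                     /* 0x20 bytes; FP state, if any, follows at +0x20 */
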
1270
-static void do_v7m_exception_exit(ARMCPU *cpu)
1271
-{
1272
- CPUARMState *env = &cpu->env;
1273
- uint32_t excret;
1274
- uint32_t xpsr, xpsr_mask;
1275
- bool ufault = false;
1276
- bool sfault = false;
1277
- bool return_to_sp_process;
1278
- bool return_to_handler;
1279
- bool rettobase = false;
1280
- bool exc_secure = false;
1281
- bool return_to_secure;
1282
- bool ftype;
1283
- bool restore_s16_s31;
1284
-
1285
- /*
1286
- * If we're not in Handler mode then jumps to magic exception-exit
1287
- * addresses don't have magic behaviour. However for the v8M
1288
- * security extensions the magic secure-function-return has to
1289
- * work in thread mode too, so to avoid doing an extra check in
1290
- * the generated code we allow exception-exit magic to also cause the
1291
- * internal exception and bring us here in thread mode. Correct code
1292
- * will never try to do this (the following insn fetch will always
1293
- * fault) so the overhead of having taken an unnecessary exception

1294
- * doesn't matter.
1295
- */
1296
- if (!arm_v7m_is_handler_mode(env)) {
1297
- return;
1298
- }
1299
-
1300
- /*
1301
- * In the spec pseudocode ExceptionReturn() is called directly
1302
- * from BXWritePC() and gets the full target PC value including
1303
- * bit zero. In QEMU's implementation we treat it as a normal
1304
- * jump-to-register (which is then caught later on), and so split
1305
- * the target value up between env->regs[15] and env->thumb in
1306
- * gen_bx(). Reconstitute it.
1307
- */
1308
- excret = env->regs[15];
1309
- if (env->thumb) {
1310
- excret |= 1;
1311
- }
1312
-
1313
- qemu_log_mask(CPU_LOG_INT, "Exception return: magic PC %" PRIx32
1314
- " previous exception %d\n",
1315
- excret, env->v7m.exception);
1316
-
1317
- if ((excret & R_V7M_EXCRET_RES1_MASK) != R_V7M_EXCRET_RES1_MASK) {
1318
- qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero high bits in exception "
1319
- "exit PC value 0x%" PRIx32 " are UNPREDICTABLE\n",
1320
- excret);
1321
- }
1322
-
1323
- ftype = excret & R_V7M_EXCRET_FTYPE_MASK;
1324
-
1325
- if (!arm_feature(env, ARM_FEATURE_VFP) && !ftype) {
1326
- qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero FTYPE in exception "
1327
- "exit PC value 0x%" PRIx32 " is UNPREDICTABLE "
1328
- "if FPU not present\n",
1329
- excret);
1330
- ftype = true;
1331
- }
1332
-
1333
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
1334
- /*
1335
- * EXC_RETURN.ES validation check (R_SMFL). We must do this before
1336
- * we pick which FAULTMASK to clear.
1337
- */
1338
- if (!env->v7m.secure &&
1339
- ((excret & R_V7M_EXCRET_ES_MASK) ||
1340
- !(excret & R_V7M_EXCRET_DCRS_MASK))) {
1341
- sfault = 1;
1342
- /* For all other purposes, treat ES as 0 (R_HXSR) */
1343
- excret &= ~R_V7M_EXCRET_ES_MASK;
1344
- }
1345
- exc_secure = excret & R_V7M_EXCRET_ES_MASK;
1346
- }
1347
-
1348
- if (env->v7m.exception != ARMV7M_EXCP_NMI) {
1349
- /*
1350
- * Auto-clear FAULTMASK on return from other than NMI.
1351
- * If the security extension is implemented then this only
1352
- * happens if the raw execution priority is >= 0; the
1353
- * value of the ES bit in the exception return value indicates
1354
- * which security state's faultmask to clear. (v8M ARM ARM R_KBNF.)
1355
- */
1356
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
1357
- if (armv7m_nvic_raw_execution_priority(env->nvic) >= 0) {
1358
- env->v7m.faultmask[exc_secure] = 0;
1359
- }
1360
- } else {
1361
- env->v7m.faultmask[M_REG_NS] = 0;
1362
- }
1363
- }
1364
-
1365
- switch (armv7m_nvic_complete_irq(env->nvic, env->v7m.exception,
1366
- exc_secure)) {
1367
- case -1:
1368
- /* attempt to exit an exception that isn't active */
1369
- ufault = true;
1370
- break;
1371
- case 0:
1372
- /* still an irq active now */
1373
- break;
1374
- case 1:
1375
- /*
1376
- * We returned to base exception level, no nesting.
1377
- * (In the pseudocode this is written using "NestedActivation != 1"
1378
- * where we have 'rettobase == false'.)
1379
- */
1380
- rettobase = true;
1381
- break;
1382
- default:
1383
- g_assert_not_reached();
1384
- }
1385
-
1386
- return_to_handler = !(excret & R_V7M_EXCRET_MODE_MASK);
1387
- return_to_sp_process = excret & R_V7M_EXCRET_SPSEL_MASK;
1388
- return_to_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
1389
- (excret & R_V7M_EXCRET_S_MASK);
1390
-
1391
- if (arm_feature(env, ARM_FEATURE_V8)) {
1392
- if (!arm_feature(env, ARM_FEATURE_M_SECURITY)) {
1393
- /*
1394
- * UNPREDICTABLE if S == 1 or DCRS == 0 or ES == 1 (R_XLCP);
1395
- * we choose to take the UsageFault.
1396
- */
1397
- if ((excret & R_V7M_EXCRET_S_MASK) ||
1398
- (excret & R_V7M_EXCRET_ES_MASK) ||
1399
- !(excret & R_V7M_EXCRET_DCRS_MASK)) {
1400
- ufault = true;
1401
- }
1402
- }
1403
- if (excret & R_V7M_EXCRET_RES0_MASK) {
1404
- ufault = true;
1405
- }
1406
- } else {
1407
- /* For v7M we only recognize certain combinations of the low bits */
1408
- switch (excret & 0xf) {
1409
- case 1: /* Return to Handler */
1410
- break;
1411
- case 13: /* Return to Thread using Process stack */
1412
- case 9: /* Return to Thread using Main stack */
1413
- /*
1414
- * We only need to check NONBASETHRDENA for v7M, because in
1415
- * v8M this bit does not exist (it is RES1).
1416
- */
1417
- if (!rettobase &&
1418
- !(env->v7m.ccr[env->v7m.secure] &
1419
- R_V7M_CCR_NONBASETHRDENA_MASK)) {
1420
- ufault = true;
1421
- }
1422
- break;
1423
- default:
1424
- ufault = true;
1425
- }
1426
- }
1427
-
1428
- /*
1429
- * Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
1430
- * Handler mode (and will be until we write the new XPSR.Interrupt
1431
- * field) this does not switch around the current stack pointer.
1432
- * We must do this before we do any kind of tailchaining, including
1433
- * for the derived exceptions on integrity check failures, or we will
1434
- * give the guest an incorrect EXCRET.SPSEL value on exception entry.
1435
- */
1436
- write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
1437
-
1438
- /*
1439
- * Clear scratch FP values left in caller saved registers; this
1440
- * must happen before any kind of tail chaining.
1441
- */
1442
- if ((env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_CLRONRET_MASK) &&
1443
- (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
1444
- if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
1445
- env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
1446
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1447
- qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
1448
- "stackframe: error during lazy state deactivation\n");
1449
- v7m_exception_taken(cpu, excret, true, false);
1450
- return;
1451
- } else {
1452
- /* Clear s0..s15 and FPSCR */
1453
- int i;
1454
-
1455
- for (i = 0; i < 16; i += 2) {
1456
- *aa32_vfp_dreg(env, i / 2) = 0;
1457
- }
1458
- vfp_set_fpscr(env, 0);
1459
- }
1460
- }
1461
-
1462
- if (sfault) {
1463
- env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
1464
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1465
- qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
1466
- "stackframe: failed EXC_RETURN.ES validity check\n");
1467
- v7m_exception_taken(cpu, excret, true, false);
1468
- return;
1469
- }
1470
-
1471
- if (ufault) {
1472
- /*
1473
- * Bad exception return: instead of popping the exception
1474
- * stack, directly take a usage fault on the current stack.
1475
- */
1476
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
1477
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
1478
- qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
1479
- "stackframe: failed exception return integrity check\n");
1480
- v7m_exception_taken(cpu, excret, true, false);
1481
- return;
1482
- }
1483
-
1484
- /*
1485
- * Tailchaining: if there is currently a pending exception that
1486
- * is high enough priority to preempt execution at the level we're
1487
- * about to return to, then just directly take that exception now,
1488
- * avoiding an unstack-and-then-stack. Note that now we have
1489
- * deactivated the previous exception by calling armv7m_nvic_complete_irq()
1490
- * our current execution priority is already the execution priority we are
1491
- * returning to -- none of the state we would unstack or set based on
1492
- * the EXCRET value affects it.
1493
- */
1494
- if (armv7m_nvic_can_take_pending_exception(env->nvic)) {
1495
- qemu_log_mask(CPU_LOG_INT, "...tailchaining to pending exception\n");
1496
- v7m_exception_taken(cpu, excret, true, false);
1497
- return;
1498
- }
1499
-
1500
- switch_v7m_security_state(env, return_to_secure);
1501
-
1502
- {
1503
- /*
1504
- * The stack pointer we should be reading the exception frame from
1505
- * depends on bits in the magic exception return type value (and
1506
- * for v8M isn't necessarily the stack pointer we will eventually
1507
- * end up resuming execution with). Get a pointer to the location
1508
- * in the CPU state struct where the SP we need is currently being
1509
- * stored; we will use and modify it in place.
1510
- * We use this limited C variable scope so we don't accidentally
1511
- * use 'frame_sp_p' after we do something that makes it invalid.
1512
- */
1513
- uint32_t *frame_sp_p = get_v7m_sp_ptr(env,
1514
- return_to_secure,
1515
- !return_to_handler,
1516
- return_to_sp_process);
1517
- uint32_t frameptr = *frame_sp_p;
1518
- bool pop_ok = true;
1519
- ARMMMUIdx mmu_idx;
1520
- bool return_to_priv = return_to_handler ||
1521
- !(env->v7m.control[return_to_secure] & R_V7M_CONTROL_NPRIV_MASK);
1522
-
1523
- mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, return_to_secure,
1524
- return_to_priv);
1525
-
1526
- if (!QEMU_IS_ALIGNED(frameptr, 8) &&
1527
- arm_feature(env, ARM_FEATURE_V8)) {
1528
- qemu_log_mask(LOG_GUEST_ERROR,
1529
- "M profile exception return with non-8-aligned SP "
1530
- "for destination state is UNPREDICTABLE\n");
1531
- }
1532
-
1533
- /* Do we need to pop callee-saved registers? */
1534
- if (return_to_secure &&
1535
- ((excret & R_V7M_EXCRET_ES_MASK) == 0 ||
1536
- (excret & R_V7M_EXCRET_DCRS_MASK) == 0)) {
1537
- uint32_t actual_sig;
1538
-
1539
- pop_ok = v7m_stack_read(cpu, &actual_sig, frameptr, mmu_idx);
1540
-
1541
- if (pop_ok && v7m_integrity_sig(env, excret) != actual_sig) {
1542
- /* Take a SecureFault on the current stack */
1543
- env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
1544
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1545
- qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
1546
- "stackframe: failed exception return integrity "
1547
- "signature check\n");
1548
- v7m_exception_taken(cpu, excret, true, false);
1549
- return;
1550
- }
1551
-
1552
- pop_ok = pop_ok &&
1553
- v7m_stack_read(cpu, &env->regs[4], frameptr + 0x8, mmu_idx) &&
1554
- v7m_stack_read(cpu, &env->regs[5], frameptr + 0xc, mmu_idx) &&
1555
- v7m_stack_read(cpu, &env->regs[6], frameptr + 0x10, mmu_idx) &&
1556
- v7m_stack_read(cpu, &env->regs[7], frameptr + 0x14, mmu_idx) &&
1557
- v7m_stack_read(cpu, &env->regs[8], frameptr + 0x18, mmu_idx) &&
1558
- v7m_stack_read(cpu, &env->regs[9], frameptr + 0x1c, mmu_idx) &&
1559
- v7m_stack_read(cpu, &env->regs[10], frameptr + 0x20, mmu_idx) &&
1560
- v7m_stack_read(cpu, &env->regs[11], frameptr + 0x24, mmu_idx);
1561
-
1562
- frameptr += 0x28;
1563
- }
1564
-
1565
- /* Pop registers */
1566
- pop_ok = pop_ok &&
1567
- v7m_stack_read(cpu, &env->regs[0], frameptr, mmu_idx) &&
1568
- v7m_stack_read(cpu, &env->regs[1], frameptr + 0x4, mmu_idx) &&
1569
- v7m_stack_read(cpu, &env->regs[2], frameptr + 0x8, mmu_idx) &&
1570
- v7m_stack_read(cpu, &env->regs[3], frameptr + 0xc, mmu_idx) &&
1571
- v7m_stack_read(cpu, &env->regs[12], frameptr + 0x10, mmu_idx) &&
1572
- v7m_stack_read(cpu, &env->regs[14], frameptr + 0x14, mmu_idx) &&
1573
- v7m_stack_read(cpu, &env->regs[15], frameptr + 0x18, mmu_idx) &&
1574
- v7m_stack_read(cpu, &xpsr, frameptr + 0x1c, mmu_idx);
1575
-
1576
- if (!pop_ok) {
1577
- /*
1578
- * v7m_stack_read() pended a fault, so take it (as a tail
1579
- * chained exception on the same stack frame)
1580
- */
1581
- qemu_log_mask(CPU_LOG_INT, "...derived exception on unstacking\n");
1582
- v7m_exception_taken(cpu, excret, true, false);
1583
- return;
1584
- }
1585
-
1586
- /*
1587
- * Returning from an exception with a PC with bit 0 set is defined
1588
- * behaviour on v8M (bit 0 is ignored), but for v7M it was specified
1589
- * to be UNPREDICTABLE. In practice actual v7M hardware seems to ignore
1590
- * the lsbit, and there are several RTOSes out there which incorrectly
1591
- * assume the r15 in the stack frame should be a Thumb-style "lsbit
1592
- * indicates ARM/Thumb" value, so ignore the bit on v7M as well, but
1593
- * complain about the badly behaved guest.
1594
- */
1595
- if (env->regs[15] & 1) {
1596
- env->regs[15] &= ~1U;
1597
- if (!arm_feature(env, ARM_FEATURE_V8)) {
1598
- qemu_log_mask(LOG_GUEST_ERROR,
1599
- "M profile return from interrupt with misaligned "
1600
- "PC is UNPREDICTABLE on v7M\n");
1601
- }
1602
- }
1603
-
1604
- if (arm_feature(env, ARM_FEATURE_V8)) {
1605
- /*
1606
- * For v8M we have to check whether the xPSR exception field
1607
- * matches the EXCRET value for return to handler/thread
1608
- * before we commit to changing the SP and xPSR.
1609
- */
1610
- bool will_be_handler = (xpsr & XPSR_EXCP) != 0;
1611
- if (return_to_handler != will_be_handler) {
1612
- /*
1613
- * Take an INVPC UsageFault on the current stack.
1614
- * By this point we will have switched to the security state
1615
- * for the background state, so this UsageFault will target
1616
- * that state.
1617
- */
1618
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
1619
- env->v7m.secure);
1620
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
1621
- qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
1622
- "stackframe: failed exception return integrity "
1623
- "check\n");
1624
- v7m_exception_taken(cpu, excret, true, false);
1625
- return;
1626
- }
1627
- }
1628
-
1629
- if (!ftype) {
1630
- /* FP present and we need to handle it */
1631
- if (!return_to_secure &&
1632
- (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK)) {
1633
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1634
- env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
1635
- qemu_log_mask(CPU_LOG_INT,
1636
- "...taking SecureFault on existing stackframe: "
1637
- "Secure LSPACT set but exception return is "
1638
- "not to secure state\n");
1639
- v7m_exception_taken(cpu, excret, true, false);
1640
- return;
1641
- }
1642
-
1643
- restore_s16_s31 = return_to_secure &&
1644
- (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
1645
-
1646
- if (env->v7m.fpccr[return_to_secure] & R_V7M_FPCCR_LSPACT_MASK) {
1647
- /* State in FPU is still valid, just clear LSPACT */
1648
- env->v7m.fpccr[return_to_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
1649
- } else {
1650
- int i;
1651
- uint32_t fpscr;
1652
- bool cpacr_pass, nsacr_pass;
1653
-
1654
- cpacr_pass = v7m_cpacr_pass(env, return_to_secure,
1655
- return_to_priv);
1656
- nsacr_pass = return_to_secure ||
1657
- extract32(env->v7m.nsacr, 10, 1);
1658
-
1659
- if (!cpacr_pass) {
1660
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
1661
- return_to_secure);
1662
- env->v7m.cfsr[return_to_secure] |= R_V7M_CFSR_NOCP_MASK;
1663
- qemu_log_mask(CPU_LOG_INT,
1664
- "...taking UsageFault on existing "
1665
- "stackframe: CPACR.CP10 prevents unstacking "
1666
- "FP regs\n");
1667
- v7m_exception_taken(cpu, excret, true, false);
1668
- return;
1669
- } else if (!nsacr_pass) {
1670
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
1671
- env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_INVPC_MASK;
1672
- qemu_log_mask(CPU_LOG_INT,
1673
- "...taking Secure UsageFault on existing "
1674
- "stackframe: NSACR.CP10 prevents unstacking "
1675
- "FP regs\n");
1676
- v7m_exception_taken(cpu, excret, true, false);
1677
- return;
1678
- }
1679
-
1680
- for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
1681
- uint32_t slo, shi;
1682
- uint64_t dn;
1683
- uint32_t faddr = frameptr + 0x20 + 4 * i;
1684
-
1685
- if (i >= 16) {
1686
- faddr += 8; /* Skip the slot for the FPSCR */
1687
- }
1688
-
1689
- pop_ok = pop_ok &&
1690
- v7m_stack_read(cpu, &slo, faddr, mmu_idx) &&
1691
- v7m_stack_read(cpu, &shi, faddr + 4, mmu_idx);
1692
-
1693
- if (!pop_ok) {
1694
- break;
1695
- }
1696
-
1697
- dn = (uint64_t)shi << 32 | slo;
1698
- *aa32_vfp_dreg(env, i / 2) = dn;
1699
- }
1700
- pop_ok = pop_ok &&
1701
- v7m_stack_read(cpu, &fpscr, frameptr + 0x60, mmu_idx);
1702
- if (pop_ok) {
1703
- vfp_set_fpscr(env, fpscr);
1704
- }
1705
- if (!pop_ok) {
1706
- /*
1707
- * These regs are 0 if security extension present;
1708
- * otherwise merely UNKNOWN. We zero always.
1709
- */
1710
- for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
1711
- *aa32_vfp_dreg(env, i / 2) = 0;
1712
- }
1713
- vfp_set_fpscr(env, 0);
1714
- }
1715
- }
1716
- }
1717
- env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
1718
- V7M_CONTROL, FPCA, !ftype);
1719
-
1720
- /* Commit to consuming the stack frame */
1721
- frameptr += 0x20;
1722
- if (!ftype) {
1723
- frameptr += 0x48;
1724
- if (restore_s16_s31) {
1725
- frameptr += 0x40;
1726
- }
1727
- }
1728
- /*
1729
- * Undo stack alignment (the SPREALIGN bit indicates that the original
1730
- * pre-exception SP was not 8-aligned and we added a padding word to
1731
- * align it, so we undo this by ORing in the bit that increases it
1732
- * from the current 8-aligned value to the 8-unaligned value. (Adding 4
1733
- * would work too but a logical OR is how the pseudocode specifies it.)
1734
- */
1735
- if (xpsr & XPSR_SPREALIGN) {
1736
- frameptr |= 4;
1737
- }
1738
- *frame_sp_p = frameptr;
1739
- }
1740
-
1741
- xpsr_mask = ~(XPSR_SPREALIGN | XPSR_SFPA);
1742
- if (!arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
1743
- xpsr_mask &= ~XPSR_GE;
1744
- }
1745
- /* This xpsr_write() will invalidate frame_sp_p as it may switch stack */
1746
- xpsr_write(env, xpsr, xpsr_mask);
1747
-
1748
- if (env->v7m.secure) {
1749
- bool sfpa = xpsr & XPSR_SFPA;
1750
-
1751
- env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
1752
- V7M_CONTROL, SFPA, sfpa);
1753
- }
1754
-
1755
- /*
1756
- * The restored xPSR exception field will be zero if we're
1757
- * resuming in Thread mode. If that doesn't match what the
1758
- * exception return excret specified then this is a UsageFault.
1759
- * v7M requires we make this check here; v8M did it earlier.
1760
- */
1761
- if (return_to_handler != arm_v7m_is_handler_mode(env)) {
1762
- /*
1763
- * Take an INVPC UsageFault by pushing the stack again;
1764
- * we know we're v7M so this is never a Secure UsageFault.
1765
- */
1766
- bool ignore_stackfaults;
1767
-
1768
- assert(!arm_feature(env, ARM_FEATURE_V8));
1769
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
1770
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
1771
- ignore_stackfaults = v7m_push_stack(cpu);
1772
- qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: "
1773
- "failed exception return integrity check\n");
1774
- v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
1775
- return;
1776
- }
1777
-
1778
- /* Otherwise, we have a successful exception exit. */
1779
- arm_clear_exclusive(env);
1780
- qemu_log_mask(CPU_LOG_INT, "...successful exception return\n");
1781
-}
1782
-
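
The callee-saves unstacking above compares 'actual_sig' against the integrity
signature that v7m_push_callee_stack() wrote; the signature value is
0xfefa125a, with bit 0 set when no FP state is included in the frame. A small
sketch of that check (constant names illustrative, not QEMU identifiers):

    #include <stdbool.h>
    #include <stdint.h>

    #define V8M_INTEGRITY_SIG_FP    0xfefa125au  /* frame includes FP state */
    #define V8M_INTEGRITY_SIG_NOFP  0xfefa125bu  /* bit 0 set: no FP state */

    static bool integrity_sig_ok(uint32_t stacked_sig, bool fp_saved)
    {
        return stacked_sig == (fp_saved ? V8M_INTEGRITY_SIG_FP
                                        : V8M_INTEGRITY_SIG_NOFP);
    }
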
1783
-static bool do_v7m_function_return(ARMCPU *cpu)
1784
-{
1785
- /*
1786
- * v8M security extensions magic function return.
1787
- * We may either:
1788
- * (1) throw an exception (longjump)
1789
- * (2) return true if we successfully handled the function return
1790
- * (3) return false if we failed a consistency check and have
1791
- * pended a UsageFault that needs to be taken now
1792
- *
1793
- * At this point the magic return value is split between env->regs[15]
1794
- * and env->thumb. We don't bother to reconstitute it because we don't
1795
- * need it (all values are handled the same way).
1796
- */
1797
- CPUARMState *env = &cpu->env;
1798
- uint32_t newpc, newpsr, newpsr_exc;
1799
-
1800
- qemu_log_mask(CPU_LOG_INT, "...really v7M secure function return\n");
1801
-
1802
- {
1803
- bool threadmode, spsel;
1804
- TCGMemOpIdx oi;
1805
- ARMMMUIdx mmu_idx;
1806
- uint32_t *frame_sp_p;
1807
- uint32_t frameptr;
1808
-
1809
- /* Pull the return address and IPSR from the Secure stack */
1810
- threadmode = !arm_v7m_is_handler_mode(env);
1811
- spsel = env->v7m.control[M_REG_S] & R_V7M_CONTROL_SPSEL_MASK;
1812
-
1813
- frame_sp_p = get_v7m_sp_ptr(env, true, threadmode, spsel);
1814
- frameptr = *frame_sp_p;
1815
-
1816
- /*
1817
- * These loads may throw an exception (for MPU faults). We want to
1818
- * do them as secure, so work out what MMU index that is.
1819
- */
1820
- mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
1821
- oi = make_memop_idx(MO_LE, arm_to_core_mmu_idx(mmu_idx));
1822
- newpc = helper_le_ldul_mmu(env, frameptr, oi, 0);
1823
- newpsr = helper_le_ldul_mmu(env, frameptr + 4, oi, 0);
1824
-
1825
- /* Consistency checks on new IPSR */
1826
- newpsr_exc = newpsr & XPSR_EXCP;
1827
- if (!((env->v7m.exception == 0 && newpsr_exc == 0) ||
1828
- (env->v7m.exception == 1 && newpsr_exc != 0))) {
1829
- /* Pend the fault and tell our caller to take it */
1830
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
1831
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
1832
- env->v7m.secure);
1833
- qemu_log_mask(CPU_LOG_INT,
1834
- "...taking INVPC UsageFault: "
1835
- "IPSR consistency check failed\n");
1836
- return false;
1837
- }
1838
-
1839
- *frame_sp_p = frameptr + 8;
1840
- }
1841
-
1842
- /* This invalidates frame_sp_p */
1843
- switch_v7m_security_state(env, true);
1844
- env->v7m.exception = newpsr_exc;
1845
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
1846
- if (newpsr & XPSR_SFPA) {
1847
- env->v7m.control[M_REG_S] |= R_V7M_CONTROL_SFPA_MASK;
1848
- }
1849
- xpsr_write(env, 0, XPSR_IT);
1850
- env->thumb = newpc & 1;
1851
- env->regs[15] = newpc & ~1;
1852
-
1853
- qemu_log_mask(CPU_LOG_INT, "...function return successful\n");
1854
- return true;
1855
-}
1856
-
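
The two words popped from the Secure stack here are the small frame pushed
when Secure code branched to Non-secure code with BLXNS: the true return
address and a partial xPSR used for the IPSR consistency check. A sketch:

    #include <stdint.h>

    /* Frame consumed by a v8M secure function return (FNC_RETURN) */
    struct v8m_fnc_return_frame {
        uint32_t return_address;  /* popped into PC; bit 0 is the Thumb bit */
        uint32_t partial_xpsr;    /* IPSR exception number plus SFPA */
    };                            /* SP advances by 8 on a successful return */
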
1857
-static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
1858
- uint32_t addr, uint16_t *insn)
1859
-{
1860
- /*
1861
- * Load a 16-bit portion of a v7M instruction, returning true on success,
1862
- * or false on failure (in which case we will have pended the appropriate
1863
- * exception).
1864
- * We need to do the instruction fetch's MPU and SAU checks
1865
- * like this because there is no MMU index that would allow
1866
- * doing the load with a single function call. Instead we must
1867
- * first check that the security attributes permit the load
1868
- * and that they don't mismatch on the two halves of the instruction,
1869
- * and then we do the load as a secure load (ie using the security
1870
- * attributes of the address, not the CPU, as architecturally required).
1871
- */
1872
- CPUState *cs = CPU(cpu);
1873
- CPUARMState *env = &cpu->env;
1874
- V8M_SAttributes sattrs = {};
1875
- MemTxAttrs attrs = {};
1876
- ARMMMUFaultInfo fi = {};
1877
- MemTxResult txres;
1878
- target_ulong page_size;
1879
- hwaddr physaddr;
1880
- int prot;
1881
-
1882
- v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, &sattrs);
1883
- if (!sattrs.nsc || sattrs.ns) {
1884
- /*
1885
- * This must be the second half of the insn, and it straddles a
1886
- * region boundary with the second half not being S&NSC.
1887
- */
1888
- env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
1889
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1890
- qemu_log_mask(CPU_LOG_INT,
1891
- "...really SecureFault with SFSR.INVEP\n");
1892
- return false;
1893
- }
1894
- if (get_phys_addr(env, addr, MMU_INST_FETCH, mmu_idx,
1895
- &physaddr, &attrs, &prot, &page_size, &fi, NULL)) {
1896
- /* the MPU lookup failed */
1897
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK;
1898
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, env->v7m.secure);
1899
- qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n");
1900
- return false;
1901
- }
1902
- *insn = address_space_lduw_le(arm_addressspace(cs, attrs), physaddr,
1903
- attrs, &txres);
1904
- if (txres != MEMTX_OK) {
1905
- env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
1906
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
1907
- qemu_log_mask(CPU_LOG_INT, "...really BusFault with CFSR.IBUSERR\n");
1908
- return false;
1909
- }
1910
- return true;
1911
-}
1912
-
1913
-static bool v7m_handle_execute_nsc(ARMCPU *cpu)
1914
-{
1915
- /*
1916
- * Check whether this attempt to execute code in a Secure & NS-Callable
1917
- * memory region is for an SG instruction; if so, then emulate the
1918
- * effect of the SG instruction and return true. Otherwise pend
1919
- * the correct kind of exception and return false.
1920
- */
1921
- CPUARMState *env = &cpu->env;
1922
- ARMMMUIdx mmu_idx;
1923
- uint16_t insn;
1924
-
1925
- /*
1926
- * We should never get here unless get_phys_addr_pmsav8() caused
1927
- * an exception for NS executing in S&NSC memory.
1928
- */
1929
- assert(!env->v7m.secure);
1930
- assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
1931
-
1932
- /* We want to do the MPU lookup as secure; work out what mmu_idx that is */
1933
- mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
1934
-
1935
- if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) {
1936
- return false;
1937
- }
1938
-
1939
- if (!env->thumb) {
1940
- goto gen_invep;
1941
- }
1942
-
1943
- if (insn != 0xe97f) {
1944
- /*
1945
- * Not an SG instruction first half (we choose the IMPDEF
1946
- * early-SG-check option).
1947
- */
1948
- goto gen_invep;
1949
- }
1950
-
1951
- if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) {
1952
- return false;
1953
- }
1954
-
1955
- if (insn != 0xe97f) {
1956
- /*
1957
- * Not an SG instruction second half (yes, both halves of the SG
1958
- * insn have the same hex value)
1959
- */
1960
- goto gen_invep;
1961
- }
1962
-
1963
- /*
1964
- * OK, we have confirmed that we really have an SG instruction.
1965
- * We know we're NS in S memory so don't need to repeat those checks.
1966
- */
1967
- qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
1968
- ", executing it\n", env->regs[15]);
1969
- env->regs[14] &= ~1;
1970
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
1971
- switch_v7m_security_state(env, true);
1972
- xpsr_write(env, 0, XPSR_IT);
1973
- env->regs[15] += 4;
1974
- return true;
1975
-
1976
-gen_invep:
1977
- env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
1978
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1979
- qemu_log_mask(CPU_LOG_INT,
1980
- "...really SecureFault with SFSR.INVEP\n");
1981
- return false;
1982
-}
1983
-
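
Both halfwords of the SG instruction encode as 0xe97f, which is what the two
v7m_read_half_insn() checks above rely on. A minimal standalone check in the
same spirit (hypothetical helper, not QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    /* SG is the 32-bit Thumb encoding 0xe97f_e97f; both halves are identical */
    static bool is_sg_insn(uint16_t first_half, uint16_t second_half)
    {
        return first_half == 0xe97f && second_half == 0xe97f;
    }
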
1984
-void arm_v7m_cpu_do_interrupt(CPUState *cs)
1985
-{
1986
- ARMCPU *cpu = ARM_CPU(cs);
1987
- CPUARMState *env = &cpu->env;
1988
- uint32_t lr;
1989
- bool ignore_stackfaults;
1990
-
1991
- arm_log_exception(cs->exception_index);
1992
-
1993
- /*
1994
- * For exceptions we just mark as pending on the NVIC, and let that
1995
- * handle it.
1996
- */
1997
- switch (cs->exception_index) {
1998
- case EXCP_UDEF:
1999
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
2000
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNDEFINSTR_MASK;
2001
- break;
2002
- case EXCP_NOCP:
2003
- {
2004
- /*
2005
- * NOCP might be directed to something other than the current
2006
- * security state if this fault is because of NSACR; we indicate
2007
- * the target security state using exception.target_el.
2008
- */
2009
- int target_secstate;
2010
-
2011
- if (env->exception.target_el == 3) {
2012
- target_secstate = M_REG_S;
2013
- } else {
2014
- target_secstate = env->v7m.secure;
2015
- }
2016
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, target_secstate);
2017
- env->v7m.cfsr[target_secstate] |= R_V7M_CFSR_NOCP_MASK;
2018
- break;
2019
- }
2020
- case EXCP_INVSTATE:
2021
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
2022
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVSTATE_MASK;
2023
- break;
2024
- case EXCP_STKOF:
2025
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
2026
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
2027
- break;
2028
- case EXCP_LSERR:
2029
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
2030
- env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
2031
- break;
2032
- case EXCP_UNALIGNED:
2033
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
2034
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
2035
- break;
2036
- case EXCP_SWI:
2037
- /* The PC already points to the next instruction. */
2038
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure);
2039
- break;
2040
- case EXCP_PREFETCH_ABORT:
2041
- case EXCP_DATA_ABORT:
2042
- /*
2043
- * Note that for M profile we don't have a guest facing FSR, but
2044
- * the env->exception.fsr will be populated by the code that
2045
- * raises the fault, in the A profile short-descriptor format.
2046
- */
2047
- switch (env->exception.fsr & 0xf) {
2048
- case M_FAKE_FSR_NSC_EXEC:
2049
- /*
2050
- * Exception generated when we try to execute code at an address
2051
- * which is marked as Secure & Non-Secure Callable and the CPU
2052
- * is in the Non-Secure state. The only instruction which can
2053
- * be executed like this is SG (and that only if both halves of
2054
- * the SG instruction have the same security attributes.)
2055
- * Everything else must generate an INVEP SecureFault, so we
2056
- * emulate the SG instruction here.
2057
- */
2058
- if (v7m_handle_execute_nsc(cpu)) {
2059
- return;
2060
- }
2061
- break;
2062
- case M_FAKE_FSR_SFAULT:
2063
- /*
2064
- * Various flavours of SecureFault for attempts to execute or
2065
- * access data in the wrong security state.
2066
- */
2067
- switch (cs->exception_index) {
2068
- case EXCP_PREFETCH_ABORT:
2069
- if (env->v7m.secure) {
2070
- env->v7m.sfsr |= R_V7M_SFSR_INVTRAN_MASK;
2071
- qemu_log_mask(CPU_LOG_INT,
2072
- "...really SecureFault with SFSR.INVTRAN\n");
2073
- } else {
2074
- env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
2075
- qemu_log_mask(CPU_LOG_INT,
2076
- "...really SecureFault with SFSR.INVEP\n");
2077
- }
2078
- break;
2079
- case EXCP_DATA_ABORT:
2080
- /* This must be an NS access to S memory */
2081
- env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
2082
- qemu_log_mask(CPU_LOG_INT,
2083
- "...really SecureFault with SFSR.AUVIOL\n");
2084
- break;
2085
- }
2086
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
2087
- break;
2088
- case 0x8: /* External Abort */
2089
- switch (cs->exception_index) {
2090
- case EXCP_PREFETCH_ABORT:
2091
- env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
2092
- qemu_log_mask(CPU_LOG_INT, "...with CFSR.IBUSERR\n");
2093
- break;
2094
- case EXCP_DATA_ABORT:
2095
- env->v7m.cfsr[M_REG_NS] |=
2096
- (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
2097
- env->v7m.bfar = env->exception.vaddress;
2098
- qemu_log_mask(CPU_LOG_INT,
2099
- "...with CFSR.PRECISERR and BFAR 0x%x\n",
2100
- env->v7m.bfar);
2101
- break;
2102
- }
2103
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
2104
- break;
2105
- default:
2106
- /*
2107
- * All other FSR values are either MPU faults or "can't happen
2108
- * for M profile" cases.
2109
- */
2110
- switch (cs->exception_index) {
2111
- case EXCP_PREFETCH_ABORT:
2112
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK;
2113
- qemu_log_mask(CPU_LOG_INT, "...with CFSR.IACCVIOL\n");
2114
- break;
2115
- case EXCP_DATA_ABORT:
2116
- env->v7m.cfsr[env->v7m.secure] |=
2117
- (R_V7M_CFSR_DACCVIOL_MASK | R_V7M_CFSR_MMARVALID_MASK);
2118
- env->v7m.mmfar[env->v7m.secure] = env->exception.vaddress;
2119
- qemu_log_mask(CPU_LOG_INT,
2120
- "...with CFSR.DACCVIOL and MMFAR 0x%x\n",
2121
- env->v7m.mmfar[env->v7m.secure]);
2122
- break;
2123
- }
2124
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM,
2125
- env->v7m.secure);
2126
- break;
2127
- }
2128
- break;
2129
- case EXCP_BKPT:
2130
- if (semihosting_enabled()) {
2131
- int nr;
2132
- nr = arm_lduw_code(env, env->regs[15], arm_sctlr_b(env)) & 0xff;
2133
- if (nr == 0xab) {
2134
- env->regs[15] += 2;
2135
- qemu_log_mask(CPU_LOG_INT,
2136
- "...handling as semihosting call 0x%x\n",
2137
- env->regs[0]);
2138
- env->regs[0] = do_arm_semihosting(env);
2139
- return;
2140
- }
2141
- }
2142
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_DEBUG, false);
2143
- break;
2144
- case EXCP_IRQ:
2145
- break;
2146
- case EXCP_EXCEPTION_EXIT:
2147
- if (env->regs[15] < EXC_RETURN_MIN_MAGIC) {
2148
- /* Must be v8M security extension function return */
2149
- assert(env->regs[15] >= FNC_RETURN_MIN_MAGIC);
2150
- assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
2151
- if (do_v7m_function_return(cpu)) {
2152
- return;
2153
- }
2154
- } else {
2155
- do_v7m_exception_exit(cpu);
2156
- return;
2157
- }
2158
- break;
2159
- case EXCP_LAZYFP:
2160
- /*
2161
- * We already pended the specific exception in the NVIC in the
2162
- * v7m_preserve_fp_state() helper function.
2163
- */
2164
- break;
2165
- default:
2166
- cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
2167
- return; /* Never happens. Keep compiler happy. */
2168
- }
2169
-
2170
- if (arm_feature(env, ARM_FEATURE_V8)) {
2171
- lr = R_V7M_EXCRET_RES1_MASK |
2172
- R_V7M_EXCRET_DCRS_MASK;
2173
- /*
2174
- * The S bit indicates whether we should return to Secure
2175
- * or NonSecure (ie our current state).
2176
- * The ES bit indicates whether we're taking this exception
2177
- * to Secure or NonSecure (ie our target state). We set it
2178
- * later, in v7m_exception_taken().
2179
- * The SPSEL bit is also set in v7m_exception_taken() for v8M.
2180
- * This corresponds to the ARM ARM pseudocode for v8M setting
2181
- * some LR bits in PushStack() and some in ExceptionTaken();
2182
- * the distinction matters for the tailchain cases where we
2183
- * can take an exception without pushing the stack.
2184
- */
2185
- if (env->v7m.secure) {
2186
- lr |= R_V7M_EXCRET_S_MASK;
2187
- }
2188
- if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
2189
- lr |= R_V7M_EXCRET_FTYPE_MASK;
2190
- }
2191
- } else {
2192
- lr = R_V7M_EXCRET_RES1_MASK |
2193
- R_V7M_EXCRET_S_MASK |
2194
- R_V7M_EXCRET_DCRS_MASK |
2195
- R_V7M_EXCRET_FTYPE_MASK |
2196
- R_V7M_EXCRET_ES_MASK;
2197
- if (env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK) {
2198
- lr |= R_V7M_EXCRET_SPSEL_MASK;
2199
- }
2200
- }
2201
- if (!arm_v7m_is_handler_mode(env)) {
2202
- lr |= R_V7M_EXCRET_MODE_MASK;
2203
- }
2204
-
2205
- ignore_stackfaults = v7m_push_stack(cpu);
2206
- v7m_exception_taken(cpu, lr, false, ignore_stackfaults);
2207
-}
2208
-
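
The EXCP_EXCEPTION_EXIT handling above distinguishes the two kinds of magic
return addresses by range: exception returns use EXC_RETURN values of the form
0xffxxxxxx, while v8M secure function returns use the FNC_RETURN values
0xfefffffe/0xfeffffff, which is what the EXC_RETURN_MIN_MAGIC /
FNC_RETURN_MIN_MAGIC comparison expresses. A sketch of that classification
(constants illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    static bool is_exc_return(uint32_t pc)
    {
        return pc >= 0xff000000u;                      /* EXC_RETURN range */
    }

    static bool is_fnc_return(uint32_t pc)
    {
        return pc >= 0xfefffffeu && pc < 0xff000000u;  /* FNC_RETURN range */
    }
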
2209
/*
2210
* Function used to synchronize QEMU's AArch64 register set with AArch32
2211
* register set. This is necessary when switching between AArch32 and AArch64
2212
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
2213
return phys_addr;
2214
}
2215
2216
-uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
2217
-{
2218
- uint32_t mask;
2219
- unsigned el = arm_current_el(env);
2220
-
2221
- /* First handle registers which unprivileged can read */
2222
-
2223
- switch (reg) {
2224
- case 0 ... 7: /* xPSR sub-fields */
2225
- mask = 0;
2226
- if ((reg & 1) && el) {
2227
- mask |= XPSR_EXCP; /* IPSR (unpriv. reads as zero) */
2228
- }
2229
- if (!(reg & 4)) {
2230
- mask |= XPSR_NZCV | XPSR_Q; /* APSR */
2231
- if (arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
2232
- mask |= XPSR_GE;
2233
- }
2234
- }
2235
- /* EPSR reads as zero */
2236
- return xpsr_read(env) & mask;
2237
- break;
2238
- case 20: /* CONTROL */
2239
- {
2240
- uint32_t value = env->v7m.control[env->v7m.secure];
2241
- if (!env->v7m.secure) {
2242
- /* SFPA is RAZ/WI from NS; FPCA is stored in the M_REG_S bank */
2243
- value |= env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK;
2244
- }
2245
- return value;
2246
- }
2247
- case 0x94: /* CONTROL_NS */
2248
- /*
2249
- * We have to handle this here because unprivileged Secure code
2250
- * can read the NS CONTROL register.
2251
- */
2252
- if (!env->v7m.secure) {
2253
- return 0;
2254
- }
2255
- return env->v7m.control[M_REG_NS] |
2256
- (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK);
2257
- }
2258
-
2259
- if (el == 0) {
2260
- return 0; /* unprivileged reads others as zero */
2261
- }
2262
-
2263
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
2264
- switch (reg) {
2265
- case 0x88: /* MSP_NS */
2266
- if (!env->v7m.secure) {
2267
- return 0;
2268
- }
2269
- return env->v7m.other_ss_msp;
2270
- case 0x89: /* PSP_NS */
2271
- if (!env->v7m.secure) {
2272
- return 0;
2273
- }
2274
- return env->v7m.other_ss_psp;
2275
- case 0x8a: /* MSPLIM_NS */
2276
- if (!env->v7m.secure) {
2277
- return 0;
2278
- }
2279
- return env->v7m.msplim[M_REG_NS];
2280
- case 0x8b: /* PSPLIM_NS */
2281
- if (!env->v7m.secure) {
2282
- return 0;
2283
- }
2284
- return env->v7m.psplim[M_REG_NS];
2285
- case 0x90: /* PRIMASK_NS */
2286
- if (!env->v7m.secure) {
2287
- return 0;
2288
- }
2289
- return env->v7m.primask[M_REG_NS];
2290
- case 0x91: /* BASEPRI_NS */
2291
- if (!env->v7m.secure) {
2292
- return 0;
2293
- }
2294
- return env->v7m.basepri[M_REG_NS];
2295
- case 0x93: /* FAULTMASK_NS */
2296
- if (!env->v7m.secure) {
2297
- return 0;
2298
- }
2299
- return env->v7m.faultmask[M_REG_NS];
2300
- case 0x98: /* SP_NS */
2301
- {
2302
- /*
2303
- * This gives the non-secure SP selected based on whether we're
2304
- * currently in handler mode or not, using the NS CONTROL.SPSEL.
2305
- */
2306
- bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
2307
-
2308
- if (!env->v7m.secure) {
2309
- return 0;
2310
- }
2311
- if (!arm_v7m_is_handler_mode(env) && spsel) {
2312
- return env->v7m.other_ss_psp;
2313
- } else {
2314
- return env->v7m.other_ss_msp;
2315
- }
2316
- }
2317
- default:
2318
- break;
2319
- }
2320
- }
2321
-
2322
- switch (reg) {
2323
- case 8: /* MSP */
2324
- return v7m_using_psp(env) ? env->v7m.other_sp : env->regs[13];
2325
- case 9: /* PSP */
2326
- return v7m_using_psp(env) ? env->regs[13] : env->v7m.other_sp;
2327
- case 10: /* MSPLIM */
2328
- if (!arm_feature(env, ARM_FEATURE_V8)) {
2329
- goto bad_reg;
2330
- }
2331
- return env->v7m.msplim[env->v7m.secure];
2332
- case 11: /* PSPLIM */
2333
- if (!arm_feature(env, ARM_FEATURE_V8)) {
2334
- goto bad_reg;
2335
- }
2336
- return env->v7m.psplim[env->v7m.secure];
2337
- case 16: /* PRIMASK */
2338
- return env->v7m.primask[env->v7m.secure];
2339
- case 17: /* BASEPRI */
2340
- case 18: /* BASEPRI_MAX */
2341
- return env->v7m.basepri[env->v7m.secure];
2342
- case 19: /* FAULTMASK */
2343
- return env->v7m.faultmask[env->v7m.secure];
2344
- default:
2345
- bad_reg:
2346
- qemu_log_mask(LOG_GUEST_ERROR, "Attempt to read unknown special"
2347
- " register %d\n", reg);
2348
- return 0;
2349
- }
2350
-}
2351
-
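
The 'reg' values decoded in v7m_mrs (and v7m_msr below) are the MRS/MSR SYSm
encodings; the ones handled above, for quick reference (an illustrative enum,
not QEMU identifiers):

    /* MRS/MSR SYSm encodings handled by the helpers above (illustrative) */
    enum v7m_sysm {
        SYSM_APSR_IPSR_EPSR = 0,   /* 0..7: xPSR sub-field views */
        SYSM_MSP            = 8,
        SYSM_PSP            = 9,
        SYSM_MSPLIM         = 10,  /* v8M only */
        SYSM_PSPLIM         = 11,  /* v8M only */
        SYSM_PRIMASK        = 16,
        SYSM_BASEPRI        = 17,
        SYSM_BASEPRI_MAX    = 18,
        SYSM_FAULTMASK      = 19,
        SYSM_CONTROL        = 20,
        SYSM_MSP_NS         = 0x88 /* 0x88..0x98: Secure-only NS aliases */
    };
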
2352
-void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
2353
-{
2354
- /*
2355
- * We're passed bits [11..0] of the instruction; extract
2356
- * SYSm and the mask bits.
2357
- * Invalid combinations of SYSm and mask are UNPREDICTABLE;
2358
- * we choose to treat them as if the mask bits were valid.
2359
- * NB that the pseudocode 'mask' variable is bits [11..10],
2360
- * whereas ours is [11..8].
2361
- */
2362
- uint32_t mask = extract32(maskreg, 8, 4);
2363
- uint32_t reg = extract32(maskreg, 0, 8);
2364
- int cur_el = arm_current_el(env);
2365
-
2366
- if (cur_el == 0 && reg > 7 && reg != 20) {
2367
- /*
2368
- * only xPSR sub-fields and CONTROL.SFPA may be written by
2369
- * unprivileged code
2370
- */
2371
- return;
2372
- }
2373
-
2374
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
2375
- switch (reg) {
2376
- case 0x88: /* MSP_NS */
2377
- if (!env->v7m.secure) {
2378
- return;
2379
- }
2380
- env->v7m.other_ss_msp = val;
2381
- return;
2382
- case 0x89: /* PSP_NS */
2383
- if (!env->v7m.secure) {
2384
- return;
2385
- }
2386
- env->v7m.other_ss_psp = val;
2387
- return;
2388
- case 0x8a: /* MSPLIM_NS */
2389
- if (!env->v7m.secure) {
2390
- return;
2391
- }
2392
- env->v7m.msplim[M_REG_NS] = val & ~7;
2393
- return;
2394
- case 0x8b: /* PSPLIM_NS */
2395
- if (!env->v7m.secure) {
2396
- return;
2397
- }
2398
- env->v7m.psplim[M_REG_NS] = val & ~7;
2399
- return;
2400
- case 0x90: /* PRIMASK_NS */
2401
- if (!env->v7m.secure) {
2402
- return;
2403
- }
2404
- env->v7m.primask[M_REG_NS] = val & 1;
2405
- return;
2406
- case 0x91: /* BASEPRI_NS */
2407
- if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
2408
- return;
2409
- }
2410
- env->v7m.basepri[M_REG_NS] = val & 0xff;
2411
- return;
2412
- case 0x93: /* FAULTMASK_NS */
2413
- if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
2414
- return;
2415
- }
2416
- env->v7m.faultmask[M_REG_NS] = val & 1;
2417
- return;
2418
- case 0x94: /* CONTROL_NS */
2419
- if (!env->v7m.secure) {
2420
- return;
2421
- }
2422
- write_v7m_control_spsel_for_secstate(env,
2423
- val & R_V7M_CONTROL_SPSEL_MASK,
2424
- M_REG_NS);
2425
- if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
2426
- env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
2427
- env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
2428
- }
2429
- /*
2430
- * SFPA is RAZ/WI from NS. FPCA is RO if NSACR.CP10 == 0,
2431
- * RES0 if the FPU is not present, and is stored in the S bank
2432
- */
2433
- if (arm_feature(env, ARM_FEATURE_VFP) &&
2434
- extract32(env->v7m.nsacr, 10, 1)) {
2435
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
2436
- env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
2437
- }
2438
- return;
2439
- case 0x98: /* SP_NS */
2440
- {
2441
- /*
2442
- * This gives the non-secure SP selected based on whether we're
2443
- * currently in handler mode or not, using the NS CONTROL.SPSEL.
2444
- */
2445
- bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
2446
- bool is_psp = !arm_v7m_is_handler_mode(env) && spsel;
2447
- uint32_t limit;
2448
-
2449
- if (!env->v7m.secure) {
2450
- return;
2451
- }
2452
-
2453
- limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false];
2454
-
2455
- if (val < limit) {
2456
- CPUState *cs = env_cpu(env);
2457
-
2458
- cpu_restore_state(cs, GETPC(), true);
2459
- raise_exception(env, EXCP_STKOF, 0, 1);
2460
- }
2461
-
2462
- if (is_psp) {
2463
- env->v7m.other_ss_psp = val;
2464
- } else {
2465
- env->v7m.other_ss_msp = val;
2466
- }
2467
- return;
2468
- }
2469
- default:
2470
- break;
2471
- }
2472
- }
2473
-
2474
- switch (reg) {
2475
- case 0 ... 7: /* xPSR sub-fields */
2476
- /* only APSR is actually writable */
2477
- if (!(reg & 4)) {
2478
- uint32_t apsrmask = 0;
2479
-
2480
- if (mask & 8) {
2481
- apsrmask |= XPSR_NZCV | XPSR_Q;
2482
- }
2483
- if ((mask & 4) && arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
2484
- apsrmask |= XPSR_GE;
2485
- }
2486
- xpsr_write(env, val, apsrmask);
2487
- }
2488
- break;
2489
- case 8: /* MSP */
2490
- if (v7m_using_psp(env)) {
2491
- env->v7m.other_sp = val;
2492
- } else {
2493
- env->regs[13] = val;
2494
- }
2495
- break;
2496
- case 9: /* PSP */
2497
- if (v7m_using_psp(env)) {
2498
- env->regs[13] = val;
2499
- } else {
2500
- env->v7m.other_sp = val;
2501
- }
2502
- break;
2503
- case 10: /* MSPLIM */
2504
- if (!arm_feature(env, ARM_FEATURE_V8)) {
2505
- goto bad_reg;
2506
- }
2507
- env->v7m.msplim[env->v7m.secure] = val & ~7;
2508
- break;
2509
- case 11: /* PSPLIM */
2510
- if (!arm_feature(env, ARM_FEATURE_V8)) {
2511
- goto bad_reg;
2512
- }
2513
- env->v7m.psplim[env->v7m.secure] = val & ~7;
2514
- break;
2515
- case 16: /* PRIMASK */
2516
- env->v7m.primask[env->v7m.secure] = val & 1;
2517
- break;
2518
- case 17: /* BASEPRI */
2519
- if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
2520
- goto bad_reg;
2521
- }
2522
- env->v7m.basepri[env->v7m.secure] = val & 0xff;
2523
- break;
2524
- case 18: /* BASEPRI_MAX */
2525
- if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
2526
- goto bad_reg;
2527
- }
2528
- val &= 0xff;
2529
- if (val != 0 && (val < env->v7m.basepri[env->v7m.secure]
2530
- || env->v7m.basepri[env->v7m.secure] == 0)) {
2531
- env->v7m.basepri[env->v7m.secure] = val;
2532
- }
2533
- break;
2534
- case 19: /* FAULTMASK */
2535
- if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
2536
- goto bad_reg;
2537
- }
2538
- env->v7m.faultmask[env->v7m.secure] = val & 1;
2539
- break;
2540
- case 20: /* CONTROL */
2541
- /*
2542
- * Writing to the SPSEL bit only has an effect if we are in
2543
- * thread mode; other bits can be updated by any privileged code.
2544
- * write_v7m_control_spsel() deals with updating the SPSEL bit in
2545
- * env->v7m.control, so we only need update the others.
2546
- * For v7M, we must just ignore explicit writes to SPSEL in handler
2547
- * mode; for v8M the write is permitted but will have no effect.
2548
- * All these bits are writes-ignored from non-privileged code,
2549
- * except for SFPA.
2550
- */
2551
- if (cur_el > 0 && (arm_feature(env, ARM_FEATURE_V8) ||
2552
- !arm_v7m_is_handler_mode(env))) {
2553
- write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
2554
- }
2555
- if (cur_el > 0 && arm_feature(env, ARM_FEATURE_M_MAIN)) {
2556
- env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
2557
- env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
2558
- }
2559
- if (arm_feature(env, ARM_FEATURE_VFP)) {
2560
- /*
2561
- * SFPA is RAZ/WI from NS or if no FPU.
2562
- * FPCA is RO if NSACR.CP10 == 0, RES0 if the FPU is not present.
2563
- * Both are stored in the S bank.
2564
- */
2565
- if (env->v7m.secure) {
2566
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
2567
- env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_SFPA_MASK;
2568
- }
2569
- if (cur_el > 0 &&
2570
- (env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_SECURITY) ||
2571
- extract32(env->v7m.nsacr, 10, 1))) {
2572
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
2573
- env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
2574
- }
2575
- }
2576
- break;
2577
- default:
2578
- bad_reg:
2579
- qemu_log_mask(LOG_GUEST_ERROR, "Attempt to write unknown special"
2580
- " register %d\n", reg);
2581
- return;
2582
- }
2583
-}
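Note that the BASEPRI_MAX case above is a conditional write: the new value is taken only if it is nonzero and either BASEPRI is currently 0 (disabled) or the new value is numerically smaller (i.e. a higher priority). The predicate in isolation, as a sketch with plain types:

    #include <stdbool.h>
    #include <stdint.h>

    /* BASEPRI_MAX only ever raises the effective priority. */
    static bool basepri_max_should_write(uint8_t cur_basepri, uint8_t new_val)
    {
        return new_val != 0 && (new_val < cur_basepri || cur_basepri == 0);
    }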
2584
-
2585
-uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
2586
-{
2587
- /* Implement the TT instruction. op is bits [7:6] of the insn. */
2588
- bool forceunpriv = op & 1;
2589
- bool alt = op & 2;
2590
- V8M_SAttributes sattrs = {};
2591
- uint32_t tt_resp;
2592
- bool r, rw, nsr, nsrw, mrvalid;
2593
- int prot;
2594
- ARMMMUFaultInfo fi = {};
2595
- MemTxAttrs attrs = {};
2596
- hwaddr phys_addr;
2597
- ARMMMUIdx mmu_idx;
2598
- uint32_t mregion;
2599
- bool targetpriv;
2600
- bool targetsec = env->v7m.secure;
2601
- bool is_subpage;
2602
-
2603
- /*
2604
- * Work out what the security state and privilege level we're
2605
- * interested in is...
2606
- */
2607
- if (alt) {
2608
- targetsec = !targetsec;
2609
- }
2610
-
2611
- if (forceunpriv) {
2612
- targetpriv = false;
2613
- } else {
2614
- targetpriv = arm_v7m_is_handler_mode(env) ||
2615
- !(env->v7m.control[targetsec] & R_V7M_CONTROL_NPRIV_MASK);
2616
- }
2617
-
2618
- /* ...and then figure out which MMU index this is */
2619
- mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targetsec, targetpriv);
2620
-
2621
- /*
2622
- * We know that the MPU and SAU don't care about the access type
2623
- * for our purposes beyond that we don't want to claim to be
2624
- * an insn fetch, so we arbitrarily call this a read.
2625
- */
2626
-
2627
- /*
2628
- * MPU region info only available for privileged or if
2629
- * inspecting the other MPU state.
2630
- */
2631
- if (arm_current_el(env) != 0 || alt) {
2632
- /* We can ignore the return value as prot is always set */
2633
- pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx,
2634
- &phys_addr, &attrs, &prot, &is_subpage,
2635
- &fi, &mregion);
2636
- if (mregion == -1) {
2637
- mrvalid = false;
2638
- mregion = 0;
2639
- } else {
2640
- mrvalid = true;
2641
- }
2642
- r = prot & PAGE_READ;
2643
- rw = prot & PAGE_WRITE;
2644
- } else {
2645
- r = false;
2646
- rw = false;
2647
- mrvalid = false;
2648
- mregion = 0;
2649
- }
2650
-
2651
- if (env->v7m.secure) {
2652
- v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs);
2653
- nsr = sattrs.ns && r;
2654
- nsrw = sattrs.ns && rw;
2655
- } else {
2656
- sattrs.ns = true;
2657
- nsr = false;
2658
- nsrw = false;
2659
- }
2660
-
2661
- tt_resp = (sattrs.iregion << 24) |
2662
- (sattrs.irvalid << 23) |
2663
- ((!sattrs.ns) << 22) |
2664
- (nsrw << 21) |
2665
- (nsr << 20) |
2666
- (rw << 19) |
2667
- (r << 18) |
2668
- (sattrs.srvalid << 17) |
2669
- (mrvalid << 16) |
2670
- (sattrs.sregion << 8) |
2671
- mregion;
2672
-
2673
- return tt_resp;
2674
-}
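The tt_resp word assembled above packs MREGION into bits [7:0], SREGION into [15:8], then the single-bit flags MRVALID(16), SRVALID(17), R(18), RW(19), NSR(20), NSRW(21), S(22) and IRVALID(23), with IREGION in [31:24]. A small decode sketch for the MPU-related fields, for illustration only:

    #include <stdbool.h>
    #include <stdint.h>

    /* Pull the MPU-related fields back out of a TT response word. */
    static void tt_resp_decode_mpu(uint32_t tt_resp, uint8_t *mregion,
                                   bool *mrvalid, bool *r, bool *rw)
    {
        *mregion = tt_resp & 0xff;
        *mrvalid = (tt_resp >> 16) & 1;
        *r = (tt_resp >> 18) & 1;
        *rw = (tt_resp >> 19) & 1;
    }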
2675
-
2676
#endif
2677
2678
/* Note that signed overflow is undefined in C. The following routines are
2679
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
2680
return 0;
2681
}
2682
2683
-ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
2684
- bool secstate, bool priv, bool negpri)
2685
-{
2686
- ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
2687
-
2688
- if (priv) {
2689
- mmu_idx |= ARM_MMU_IDX_M_PRIV;
2690
- }
2691
-
2692
- if (negpri) {
2693
- mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
2694
- }
2695
-
2696
- if (secstate) {
2697
- mmu_idx |= ARM_MMU_IDX_M_S;
2698
- }
2699
-
2700
- return mmu_idx;
2701
-}
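arm_v7m_mmu_idx_all() simply ORs independent flag bits (privileged, negative-priority, secure) into the M-profile index base, giving eight possible MMU indexes. A toy model of the composition -- the bit values here are made up; the real ARM_MMU_IDX_M_* constants live elsewhere in the tree:

    #include <stdbool.h>
    #include <stdint.h>

    /* Toy flag bits standing in for ARM_MMU_IDX_M_PRIV/_NEGPRI/_S. */
    enum { IDX_PRIV = 1 << 0, IDX_NEGPRI = 1 << 1, IDX_SECURE = 1 << 2 };

    static uint32_t toy_m_mmu_idx(bool priv, bool negpri, bool secure)
    {
        return (priv ? IDX_PRIV : 0) |
               (negpri ? IDX_NEGPRI : 0) |
               (secure ? IDX_SECURE : 0);
    }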
2702
-
2703
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
2704
- bool secstate, bool priv)
2705
-{
2706
- bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
2707
-
2708
- return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
2709
-}
2710
-
2711
-/* Return the MMU index for a v7M CPU in the specified security state */
2712
+#ifndef CONFIG_TCG
2713
ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
2714
{
2715
- bool priv = arm_current_el(env) != 0;
2716
-
2717
- return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
2718
+ g_assert_not_reached();
2719
}
2720
+#endif
2721
2722
ARMMMUIdx arm_mmu_idx(CPUARMState *env)
2723
{
2724
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
2725
new file mode 100644
2726
index XXXXXXX..XXXXXXX
2727
--- /dev/null
2728
+++ b/target/arm/m_helper.c
2729
@@ -XXX,XX +XXX,XX @@
2730
+/*
2731
+ * ARM generic helpers.
2732
+ *
2733
+ * This code is licensed under the GNU GPL v2 or later.
2734
+ *
2735
+ * SPDX-License-Identifier: GPL-2.0-or-later
2736
+ */
2737
+#include "qemu/osdep.h"
2738
+#include "qemu/units.h"
2739
+#include "target/arm/idau.h"
2740
+#include "trace.h"
2741
+#include "cpu.h"
2742
+#include "internals.h"
2743
+#include "exec/gdbstub.h"
2744
+#include "exec/helper-proto.h"
2745
+#include "qemu/host-utils.h"
2746
+#include "sysemu/sysemu.h"
2747
+#include "qemu/bitops.h"
2748
+#include "qemu/crc32c.h"
2749
+#include "qemu/qemu-print.h"
2750
+#include "exec/exec-all.h"
2751
+#include <zlib.h> /* For crc32 */
2752
+#include "hw/semihosting/semihost.h"
2753
+#include "sysemu/cpus.h"
2754
+#include "sysemu/kvm.h"
2755
+#include "qemu/range.h"
2756
+#include "qapi/qapi-commands-target.h"
2757
+#include "qapi/error.h"
2758
+#include "qemu/guest-random.h"
2759
+#ifdef CONFIG_TCG
2760
+#include "arm_ldst.h"
2761
+#include "exec/cpu_ldst.h"
2762
+#endif
2763
+
2764
+#ifdef CONFIG_USER_ONLY
2765
+
2766
+/* These should probably raise undefined insn exceptions. */
2767
+void HELPER(v7m_msr)(CPUARMState *env, uint32_t reg, uint32_t val)
2768
+{
2769
+ ARMCPU *cpu = env_archcpu(env);
2770
+
2771
+ cpu_abort(CPU(cpu), "v7m_msr %d\n", reg);
2772
+}
2773
+
2774
+uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
2775
+{
2776
+ ARMCPU *cpu = env_archcpu(env);
2777
+
2778
+ cpu_abort(CPU(cpu), "v7m_mrs %d\n", reg);
2779
+ return 0;
2780
+}
2781
+
2782
+void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
2783
+{
2784
+ /* translate.c should never generate calls here in user-only mode */
2785
+ g_assert_not_reached();
2786
+}
2787
+
2788
+void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
2789
+{
2790
+ /* translate.c should never generate calls here in user-only mode */
2791
+ g_assert_not_reached();
2792
+}
2793
+
2794
+void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
2795
+{
2796
+ /* translate.c should never generate calls here in user-only mode */
2797
+ g_assert_not_reached();
2798
+}
2799
+
2800
+void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
2801
+{
2802
+ /* translate.c should never generate calls here in user-only mode */
2803
+ g_assert_not_reached();
2804
+}
2805
+
2806
+void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
2807
+{
2808
+ /* translate.c should never generate calls here in user-only mode */
2809
+ g_assert_not_reached();
2810
+}
2811
+
2812
+uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
2813
+{
2814
+ /*
2815
+ * The TT instructions can be used by unprivileged code, but in
2816
+ * user-only emulation we don't have the MPU.
2817
+ * Luckily since we know we are NonSecure unprivileged (and that in
2818
+ * turn means that the A flag wasn't specified), all the bits in the
2819
+ * register must be zero:
2820
+ * IREGION: 0 because IRVALID is 0
2821
+ * IRVALID: 0 because NS
2822
+ * S: 0 because NS
2823
+ * NSRW: 0 because NS
2824
+ * NSR: 0 because NS
2825
+ * RW: 0 because unpriv and A flag not set
2826
+ * R: 0 because unpriv and A flag not set
2827
+ * SRVALID: 0 because NS
2828
+ * MRVALID: 0 because unpriv and A flag not set
2829
+ * SREGION: 0 because SRVALID is 0
2830
+ * MREGION: 0 because MRVALID is 0
2831
+ */
2832
+ return 0;
2833
+}
2834
+
2835
+#else
2836
+
2837
+/*
2838
+ * What kind of stack write are we doing? This affects how exceptions
2839
+ * generated during the stacking are treated.
2840
+ */
2841
+typedef enum StackingMode {
2842
+ STACK_NORMAL,
2843
+ STACK_IGNFAULTS,
2844
+ STACK_LAZYFP,
2845
+} StackingMode;
2846
+
2847
+static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
2848
+ ARMMMUIdx mmu_idx, StackingMode mode)
2849
+{
2850
+ CPUState *cs = CPU(cpu);
2851
+ CPUARMState *env = &cpu->env;
2852
+ MemTxAttrs attrs = {};
2853
+ MemTxResult txres;
2854
+ target_ulong page_size;
2855
+ hwaddr physaddr;
2856
+ int prot;
2857
+ ARMMMUFaultInfo fi = {};
2858
+ bool secure = mmu_idx & ARM_MMU_IDX_M_S;
2859
+ int exc;
2860
+ bool exc_secure;
2861
+
2862
+ if (get_phys_addr(env, addr, MMU_DATA_STORE, mmu_idx, &physaddr,
2863
+ &attrs, &prot, &page_size, &fi, NULL)) {
2864
+ /* MPU/SAU lookup failed */
2865
+ if (fi.type == ARMFault_QEMU_SFault) {
2866
+ if (mode == STACK_LAZYFP) {
2867
+ qemu_log_mask(CPU_LOG_INT,
2868
+ "...SecureFault with SFSR.LSPERR "
2869
+ "during lazy stacking\n");
2870
+ env->v7m.sfsr |= R_V7M_SFSR_LSPERR_MASK;
2871
+ } else {
2872
+ qemu_log_mask(CPU_LOG_INT,
2873
+ "...SecureFault with SFSR.AUVIOL "
2874
+ "during stacking\n");
2875
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
2876
+ }
2877
+ env->v7m.sfsr |= R_V7M_SFSR_SFARVALID_MASK;
2878
+ env->v7m.sfar = addr;
2879
+ exc = ARMV7M_EXCP_SECURE;
2880
+ exc_secure = false;
2881
+ } else {
2882
+ if (mode == STACK_LAZYFP) {
2883
+ qemu_log_mask(CPU_LOG_INT,
2884
+ "...MemManageFault with CFSR.MLSPERR\n");
2885
+ env->v7m.cfsr[secure] |= R_V7M_CFSR_MLSPERR_MASK;
2886
+ } else {
2887
+ qemu_log_mask(CPU_LOG_INT,
2888
+ "...MemManageFault with CFSR.MSTKERR\n");
2889
+ env->v7m.cfsr[secure] |= R_V7M_CFSR_MSTKERR_MASK;
2890
+ }
2891
+ exc = ARMV7M_EXCP_MEM;
2892
+ exc_secure = secure;
2893
+ }
2894
+ goto pend_fault;
2895
+ }
2896
+ address_space_stl_le(arm_addressspace(cs, attrs), physaddr, value,
2897
+ attrs, &txres);
2898
+ if (txres != MEMTX_OK) {
2899
+ /* BusFault trying to write the data */
2900
+ if (mode == STACK_LAZYFP) {
2901
+ qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.LSPERR\n");
2902
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_LSPERR_MASK;
2903
+ } else {
2904
+ qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.STKERR\n");
2905
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_STKERR_MASK;
2906
+ }
2907
+ exc = ARMV7M_EXCP_BUS;
2908
+ exc_secure = false;
2909
+ goto pend_fault;
2910
+ }
2911
+ return true;
2912
+
2913
+pend_fault:
2914
+ /*
2915
+ * By pending the exception at this point we are making
2916
+ * the IMPDEF choice "overridden exceptions pended" (see the
2917
+ * MergeExcInfo() pseudocode). The other choice would be to not
2918
+ * pend them now and then make a choice about which to throw away
2919
+ * later if we have two derived exceptions.
2920
+ * The only case when we must not pend the exception but instead
2921
+ * throw it away is if we are doing the push of the callee registers
2922
+ * and we've already generated a derived exception (this is indicated
2923
+ * by the caller passing STACK_IGNFAULTS). Even in this case we will
2924
+ * still update the fault status registers.
2925
+ */
2926
+ switch (mode) {
2927
+ case STACK_NORMAL:
2928
+ armv7m_nvic_set_pending_derived(env->nvic, exc, exc_secure);
2929
+ break;
2930
+ case STACK_LAZYFP:
2931
+ armv7m_nvic_set_pending_lazyfp(env->nvic, exc, exc_secure);
2932
+ break;
2933
+ case STACK_IGNFAULTS:
2934
+ break;
2935
+ }
2936
+ return false;
2937
+}
2938
+
2939
+static bool v7m_stack_read(ARMCPU *cpu, uint32_t *dest, uint32_t addr,
2940
+ ARMMMUIdx mmu_idx)
2941
+{
2942
+ CPUState *cs = CPU(cpu);
2943
+ CPUARMState *env = &cpu->env;
2944
+ MemTxAttrs attrs = {};
2945
+ MemTxResult txres;
2946
+ target_ulong page_size;
2947
+ hwaddr physaddr;
2948
+ int prot;
2949
+ ARMMMUFaultInfo fi = {};
2950
+ bool secure = mmu_idx & ARM_MMU_IDX_M_S;
2951
+ int exc;
2952
+ bool exc_secure;
2953
+ uint32_t value;
2954
+
2955
+ if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr,
2956
+ &attrs, &prot, &page_size, &fi, NULL)) {
2957
+ /* MPU/SAU lookup failed */
2958
+ if (fi.type == ARMFault_QEMU_SFault) {
2959
+ qemu_log_mask(CPU_LOG_INT,
2960
+ "...SecureFault with SFSR.AUVIOL during unstack\n");
2961
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
2962
+ env->v7m.sfar = addr;
2963
+ exc = ARMV7M_EXCP_SECURE;
2964
+ exc_secure = false;
2965
+ } else {
2966
+ qemu_log_mask(CPU_LOG_INT,
2967
+ "...MemManageFault with CFSR.MUNSTKERR\n");
2968
+ env->v7m.cfsr[secure] |= R_V7M_CFSR_MUNSTKERR_MASK;
2969
+ exc = ARMV7M_EXCP_MEM;
2970
+ exc_secure = secure;
2971
+ }
2972
+ goto pend_fault;
2973
+ }
2974
+
2975
+ value = address_space_ldl(arm_addressspace(cs, attrs), physaddr,
2976
+ attrs, &txres);
2977
+ if (txres != MEMTX_OK) {
2978
+ /* BusFault trying to read the data */
2979
+ qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.UNSTKERR\n");
2980
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_UNSTKERR_MASK;
2981
+ exc = ARMV7M_EXCP_BUS;
2982
+ exc_secure = false;
2983
+ goto pend_fault;
2984
+ }
2985
+
2986
+ *dest = value;
2987
+ return true;
2988
+
2989
+pend_fault:
2990
+ /*
2991
+ * By pending the exception at this point we are making
2992
+ * the IMPDEF choice "overridden exceptions pended" (see the
2993
+ * MergeExcInfo() pseudocode). The other choice would be to not
2994
+ * pend them now and then make a choice about which to throw away
2995
+ * later if we have two derived exceptions.
2996
+ */
2997
+ armv7m_nvic_set_pending(env->nvic, exc, exc_secure);
2998
+ return false;
2999
+}
3000
+
3001
+void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
3002
+{
3003
+ /*
3004
+ * Preserve FP state (because LSPACT was set and we are about
3005
+ * to execute an FP instruction). This corresponds to the
3006
+ * PreserveFPState() pseudocode.
3007
+ * We may throw an exception if the stacking fails.
3008
+ */
3009
+ ARMCPU *cpu = env_archcpu(env);
3010
+ bool is_secure = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
3011
+ bool negpri = !(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_HFRDY_MASK);
3012
+ bool is_priv = !(env->v7m.fpccr[is_secure] & R_V7M_FPCCR_USER_MASK);
3013
+ bool splimviol = env->v7m.fpccr[is_secure] & R_V7M_FPCCR_SPLIMVIOL_MASK;
3014
+ uint32_t fpcar = env->v7m.fpcar[is_secure];
3015
+ bool stacked_ok = true;
3016
+ bool ts = is_secure && (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
3017
+ bool take_exception;
3018
+
3019
+ /* Take the iothread lock as we are going to touch the NVIC */
3020
+ qemu_mutex_lock_iothread();
3021
+
3022
+ /* Check the background context had access to the FPU */
3023
+ if (!v7m_cpacr_pass(env, is_secure, is_priv)) {
3024
+ armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, is_secure);
3025
+ env->v7m.cfsr[is_secure] |= R_V7M_CFSR_NOCP_MASK;
3026
+ stacked_ok = false;
3027
+ } else if (!is_secure && !extract32(env->v7m.nsacr, 10, 1)) {
3028
+ armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
3029
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
3030
+ stacked_ok = false;
3031
+ }
3032
+
3033
+ if (!splimviol && stacked_ok) {
3034
+ /* We only stack if the stack limit wasn't violated */
3035
+ int i;
3036
+ ARMMMUIdx mmu_idx;
3037
+
3038
+ mmu_idx = arm_v7m_mmu_idx_all(env, is_secure, is_priv, negpri);
3039
+ for (i = 0; i < (ts ? 32 : 16); i += 2) {
3040
+ uint64_t dn = *aa32_vfp_dreg(env, i / 2);
3041
+ uint32_t faddr = fpcar + 4 * i;
3042
+ uint32_t slo = extract64(dn, 0, 32);
3043
+ uint32_t shi = extract64(dn, 32, 32);
3044
+
3045
+ if (i >= 16) {
3046
+ faddr += 8; /* skip the slot for the FPSCR */
3047
+ }
3048
+ stacked_ok = stacked_ok &&
3049
+ v7m_stack_write(cpu, faddr, slo, mmu_idx, STACK_LAZYFP) &&
3050
+ v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, STACK_LAZYFP);
3051
+ }
3052
+
3053
+ stacked_ok = stacked_ok &&
3054
+ v7m_stack_write(cpu, fpcar + 0x40,
3055
+ vfp_get_fpscr(env), mmu_idx, STACK_LAZYFP);
3056
+ }
3057
+
3058
+ /*
3059
+ * We definitely pended an exception, but it's possible that it
3060
+ * might not be able to be taken now. If its priority permits us
3061
+ * to take it now, then we must not update the LSPACT or FP regs,
3062
+ * but instead jump out to take the exception immediately.
3063
+ * If it's just pending and won't be taken until the current
3064
+ * handler exits, then we do update LSPACT and the FP regs.
3065
+ */
3066
+ take_exception = !stacked_ok &&
3067
+ armv7m_nvic_can_take_pending_exception(env->nvic);
3068
+
3069
+ qemu_mutex_unlock_iothread();
3070
+
3071
+ if (take_exception) {
3072
+ raise_exception_ra(env, EXCP_LAZYFP, 0, 1, GETPC());
3073
+ }
3074
+
3075
+ env->v7m.fpccr[is_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
3076
+
3077
+ if (ts) {
3078
+ /* Clear s0 to s31 and the FPSCR */
3079
+ int i;
3080
+
3081
+ for (i = 0; i < 32; i += 2) {
3082
+ *aa32_vfp_dreg(env, i / 2) = 0;
3083
+ }
3084
+ vfp_set_fpscr(env, 0);
3085
+ }
3086
+ /*
3087
+ * Otherwise s0 to s15 and FPSCR are UNKNOWN; we choose to leave them
3088
+ * unchanged.
3089
+ */
3090
+}
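The lazy-stacking loop above writes each D register as two 32-bit words at FPCAR + 4*i, leaves an 8-byte gap once i reaches 16 so that the FPSCR slot stays at FPCAR + 0x40, and finally stores the FPSCR there. The offset computation on its own, as a sketch:

    #include <stdint.h>

    /* Offset of word i of the FP save area relative to FPCAR, matching the
     * faddr computation used during lazy state preservation above.
     */
    static uint32_t fp_frame_word_offset(int i)
    {
        uint32_t off = 4 * i;
        if (i >= 16) {
            off += 8; /* skip past the slot reserved for the FPSCR */
        }
        return off;
    }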
3091
+
3092
+/*
3093
+ * Write to v7M CONTROL.SPSEL bit for the specified security bank.
3094
+ * This may change the current stack pointer between Main and Process
3095
+ * stack pointers if it is done for the CONTROL register for the current
3096
+ * security state.
3097
+ */
3098
+static void write_v7m_control_spsel_for_secstate(CPUARMState *env,
3099
+ bool new_spsel,
3100
+ bool secstate)
3101
+{
3102
+ bool old_is_psp = v7m_using_psp(env);
3103
+
3104
+ env->v7m.control[secstate] =
3105
+ deposit32(env->v7m.control[secstate],
3106
+ R_V7M_CONTROL_SPSEL_SHIFT,
3107
+ R_V7M_CONTROL_SPSEL_LENGTH, new_spsel);
3108
+
3109
+ if (secstate == env->v7m.secure) {
3110
+ bool new_is_psp = v7m_using_psp(env);
3111
+ uint32_t tmp;
3112
+
3113
+ if (old_is_psp != new_is_psp) {
3114
+ tmp = env->v7m.other_sp;
3115
+ env->v7m.other_sp = env->regs[13];
3116
+ env->regs[13] = tmp;
3117
+ }
3118
+ }
3119
+}
3120
+
3121
+/*
3122
+ * Write to v7M CONTROL.SPSEL bit. This may change the current
3123
+ * stack pointer between Main and Process stack pointers.
3124
+ */
3125
+static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel)
3126
+{
3127
+ write_v7m_control_spsel_for_secstate(env, new_spsel, env->v7m.secure);
3128
+}
3129
+
3130
+void write_v7m_exception(CPUARMState *env, uint32_t new_exc)
3131
+{
3132
+ /*
3133
+ * Write a new value to v7m.exception, thus transitioning into or out
3134
+ * of Handler mode; this may result in a change of active stack pointer.
3135
+ */
3136
+ bool new_is_psp, old_is_psp = v7m_using_psp(env);
3137
+ uint32_t tmp;
3138
+
3139
+ env->v7m.exception = new_exc;
3140
+
3141
+ new_is_psp = v7m_using_psp(env);
3142
+
3143
+ if (old_is_psp != new_is_psp) {
3144
+ tmp = env->v7m.other_sp;
3145
+ env->v7m.other_sp = env->regs[13];
3146
+ env->regs[13] = tmp;
3147
+ }
3148
+}
3149
+
3150
+/* Switch M profile security state between NS and S */
3151
+static void switch_v7m_security_state(CPUARMState *env, bool new_secstate)
3152
+{
3153
+ uint32_t new_ss_msp, new_ss_psp;
3154
+
3155
+ if (env->v7m.secure == new_secstate) {
3156
+ return;
3157
+ }
3158
+
3159
+ /*
3160
+ * All the banked state is accessed by looking at env->v7m.secure
3161
+ * except for the stack pointer; rearrange the SP appropriately.
3162
+ */
3163
+ new_ss_msp = env->v7m.other_ss_msp;
3164
+ new_ss_psp = env->v7m.other_ss_psp;
3165
+
3166
+ if (v7m_using_psp(env)) {
3167
+ env->v7m.other_ss_psp = env->regs[13];
3168
+ env->v7m.other_ss_msp = env->v7m.other_sp;
3169
+ } else {
3170
+ env->v7m.other_ss_msp = env->regs[13];
3171
+ env->v7m.other_ss_psp = env->v7m.other_sp;
3172
+ }
3173
+
3174
+ env->v7m.secure = new_secstate;
3175
+
3176
+ if (v7m_using_psp(env)) {
3177
+ env->regs[13] = new_ss_psp;
3178
+ env->v7m.other_sp = new_ss_msp;
3179
+ } else {
3180
+ env->regs[13] = new_ss_msp;
3181
+ env->v7m.other_sp = new_ss_psp;
3182
+ }
3183
+}
3184
+
3185
+void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
3186
+{
3187
+ /*
3188
+ * Handle v7M BXNS:
3189
+ * - if the return value is a magic value, do exception return (like BX)
3190
+ * - otherwise bit 0 of the return value is the target security state
3191
+ */
3192
+ uint32_t min_magic;
3193
+
3194
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
3195
+ /* Covers FNC_RETURN and EXC_RETURN magic */
3196
+ min_magic = FNC_RETURN_MIN_MAGIC;
3197
+ } else {
3198
+ /* EXC_RETURN magic only */
3199
+ min_magic = EXC_RETURN_MIN_MAGIC;
3200
+ }
3201
+
3202
+ if (dest >= min_magic) {
3203
+ /*
3204
+ * This is an exception return magic value; put it where
3205
+ * do_v7m_exception_exit() expects and raise EXCEPTION_EXIT.
3206
+ * Note that if we ever add gen_ss_advance() singlestep support to
3207
+ * M profile this should count as an "instruction execution complete"
3208
+ * event (compare gen_bx_excret_final_code()).
3209
+ */
3210
+ env->regs[15] = dest & ~1;
3211
+ env->thumb = dest & 1;
3212
+ HELPER(exception_internal)(env, EXCP_EXCEPTION_EXIT);
3213
+ /* notreached */
3214
+ }
3215
+
3216
+ /* translate.c should have made BXNS UNDEF unless we're secure */
3217
+ assert(env->v7m.secure);
3218
+
3219
+ if (!(dest & 1)) {
3220
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
3221
+ }
3222
+ switch_v7m_security_state(env, dest & 1);
3223
+ env->thumb = 1;
3224
+ env->regs[15] = dest & ~1;
3225
+}
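In the BXNS handler above, anything at or above the minimum magic value is treated as a return (FNC_RETURN and EXC_RETURN both live at the top of the address space); otherwise bit 0 of the destination selects the target security state. A sketch of that classification -- the two constants below are placeholders, not the real FNC_RETURN_MIN_MAGIC/EXC_RETURN_MIN_MAGIC values:

    #include <stdbool.h>
    #include <stdint.h>

    #define ILLUST_FNC_RETURN_MIN 0xfe000000u /* placeholder value */
    #define ILLUST_EXC_RETURN_MIN 0xff000000u /* placeholder value */

    static bool dest_is_magic_return(uint32_t dest, bool have_security_ext)
    {
        uint32_t min_magic = have_security_ext ? ILLUST_FNC_RETURN_MIN
                                               : ILLUST_EXC_RETURN_MIN;
        return dest >= min_magic;
    }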
3226
+
3227
+void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
3228
+{
3229
+ /*
3230
+ * Handle v7M BLXNS:
3231
+ * - bit 0 of the destination address is the target security state
3232
+ */
3233
+
3234
+ /* At this point regs[15] is the address just after the BLXNS */
3235
+ uint32_t nextinst = env->regs[15] | 1;
3236
+ uint32_t sp = env->regs[13] - 8;
3237
+ uint32_t saved_psr;
3238
+
3239
+ /* translate.c will have made BLXNS UNDEF unless we're secure */
3240
+ assert(env->v7m.secure);
3241
+
3242
+ if (dest & 1) {
3243
+ /*
3244
+ * Target is Secure, so this is just a normal BLX,
3245
+ * except that the low bit doesn't indicate Thumb/not.
3246
+ */
3247
+ env->regs[14] = nextinst;
3248
+ env->thumb = 1;
3249
+ env->regs[15] = dest & ~1;
3250
+ return;
3251
+ }
3252
+
3253
+ /* Target is non-secure: first push a stack frame */
3254
+ if (!QEMU_IS_ALIGNED(sp, 8)) {
3255
+ qemu_log_mask(LOG_GUEST_ERROR,
3256
+ "BLXNS with misaligned SP is UNPREDICTABLE\n");
3257
+ }
3258
+
3259
+ if (sp < v7m_sp_limit(env)) {
3260
+ raise_exception(env, EXCP_STKOF, 0, 1);
3261
+ }
3262
+
3263
+ saved_psr = env->v7m.exception;
3264
+ if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK) {
3265
+ saved_psr |= XPSR_SFPA;
3266
+ }
3267
+
3268
+ /* Note that these stores can throw exceptions on MPU faults */
3269
+ cpu_stl_data(env, sp, nextinst);
3270
+ cpu_stl_data(env, sp + 4, saved_psr);
3271
+
3272
+ env->regs[13] = sp;
3273
+ env->regs[14] = 0xfeffffff;
3274
+ if (arm_v7m_is_handler_mode(env)) {
3275
+ /*
3276
+ * Write a dummy value to IPSR, to avoid leaking the current secure
3277
+ * exception number to non-secure code. This is guaranteed not
3278
+ * to cause write_v7m_exception() to actually change stacks.
3279
+ */
3280
+ write_v7m_exception(env, 1);
3281
+ }
3282
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
3283
+ switch_v7m_security_state(env, 0);
3284
+ env->thumb = 1;
3285
+ env->regs[15] = dest;
3286
+}
3287
+
3288
+static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode,
3289
+ bool spsel)
3290
+{
3291
+ /*
3292
+ * Return a pointer to the location where we currently store the
3293
+ * stack pointer for the requested security state and thread mode.
3294
+ * This pointer will become invalid if the CPU state is updated
3295
+ * such that the stack pointers are switched around (eg changing
3296
+ * the SPSEL control bit).
3297
+ * Compare the v8M ARM ARM pseudocode LookUpSP_with_security_mode().
3298
+ * Unlike that pseudocode, we require the caller to pass us in the
3299
+ * SPSEL control bit value; this is because we also use this
3300
+ * function in handling of pushing of the callee-saves registers
3301
+ * part of the v8M stack frame (pseudocode PushCalleeStack()),
3302
+ * and in the tailchain codepath the SPSEL bit comes from the exception
3303
+ * return magic LR value from the previous exception. The pseudocode
3304
+ * opencodes the stack-selection in PushCalleeStack(), but we prefer
3305
+ * to make this utility function generic enough to do the job.
3306
+ */
3307
+ bool want_psp = threadmode && spsel;
3308
+
3309
+ if (secure == env->v7m.secure) {
3310
+ if (want_psp == v7m_using_psp(env)) {
3311
+ return &env->regs[13];
3312
+ } else {
3313
+ return &env->v7m.other_sp;
3314
+ }
3315
+ } else {
3316
+ if (want_psp) {
3317
+ return &env->v7m.other_ss_psp;
3318
+ } else {
3319
+ return &env->v7m.other_ss_msp;
3320
+ }
3321
+ }
3322
+}
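The selection in get_v7m_sp_ptr() reduces to: PSP is wanted only in Thread mode with SPSEL set; within the current security state the answer is the live regs[13] or other_sp depending on whether that matches what is in use, and for the opposite state it is one of the two banked other_ss_* slots. The same decision as a standalone sketch:

    #include <stdbool.h>

    typedef enum {
        SLOT_REGS13,       /* the live SP, env->regs[13] */
        SLOT_OTHER_SP,     /* env->v7m.other_sp */
        SLOT_OTHER_SS_MSP, /* env->v7m.other_ss_msp */
        SLOT_OTHER_SS_PSP, /* env->v7m.other_ss_psp */
    } SpSlot;

    static SpSlot pick_sp_slot(bool secure, bool threadmode, bool spsel,
                               bool cur_secure, bool cur_using_psp)
    {
        bool want_psp = threadmode && spsel;

        if (secure == cur_secure) {
            return want_psp == cur_using_psp ? SLOT_REGS13 : SLOT_OTHER_SP;
        }
        return want_psp ? SLOT_OTHER_SS_PSP : SLOT_OTHER_SS_MSP;
    }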
3323
+
3324
+static bool arm_v7m_load_vector(ARMCPU *cpu, int exc, bool targets_secure,
3325
+ uint32_t *pvec)
3326
+{
3327
+ CPUState *cs = CPU(cpu);
3328
+ CPUARMState *env = &cpu->env;
3329
+ MemTxResult result;
3330
+ uint32_t addr = env->v7m.vecbase[targets_secure] + exc * 4;
3331
+ uint32_t vector_entry;
3332
+ MemTxAttrs attrs = {};
3333
+ ARMMMUIdx mmu_idx;
3334
+ bool exc_secure;
3335
+
3336
+ mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targets_secure, true);
3337
+
3338
+ /*
3339
+ * We don't do a get_phys_addr() here because the rules for vector
3340
+ * loads are special: they always use the default memory map, and
3341
+ * the default memory map permits reads from all addresses.
3342
+ * Since there's no easy way to pass through to pmsav8_mpu_lookup()
3343
+ * that we want this special case which would always say "yes",
3344
+ * we just do the SAU lookup here followed by a direct physical load.
3345
+ */
3346
+ attrs.secure = targets_secure;
3347
+ attrs.user = false;
3348
+
3349
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
3350
+ V8M_SAttributes sattrs = {};
3351
+
3352
+ v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs);
3353
+ if (sattrs.ns) {
3354
+ attrs.secure = false;
3355
+ } else if (!targets_secure) {
3356
+ /* NS access to S memory */
3357
+ goto load_fail;
3358
+ }
3359
+ }
3360
+
3361
+ vector_entry = address_space_ldl(arm_addressspace(cs, attrs), addr,
3362
+ attrs, &result);
3363
+ if (result != MEMTX_OK) {
3364
+ goto load_fail;
3365
+ }
3366
+ *pvec = vector_entry;
3367
+ return true;
3368
+
3369
+load_fail:
3370
+ /*
3371
+ * All vector table fetch fails are reported as HardFault, with
3372
+ * HFSR.VECTTBL and .FORCED set. (FORCED is set because
3373
+ * technically the underlying exception is a MemManage or BusFault
3374
+ * that is escalated to HardFault.) This is a terminal exception,
3375
+ * so we will either take the HardFault immediately or else enter
3376
+ * lockup (the latter case is handled in armv7m_nvic_set_pending_derived()).
3377
+ */
3378
+ exc_secure = targets_secure ||
3379
+ !(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK);
3380
+ env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK | R_V7M_HFSR_FORCED_MASK;
3381
+ armv7m_nvic_set_pending_derived(env->nvic, ARMV7M_EXCP_HARD, exc_secure);
3382
+ return false;
3383
+}
3384
+
3385
+static uint32_t v7m_integrity_sig(CPUARMState *env, uint32_t lr)
3386
+{
3387
+ /*
3388
+ * Return the integrity signature value for the callee-saves
3389
+ * stack frame section. @lr is the exception return payload/LR value
3390
+ * whose FType bit forms bit 0 of the signature if FP is present.
3391
+ */
3392
+ uint32_t sig = 0xfefa125a;
3393
+
3394
+ if (!arm_feature(env, ARM_FEATURE_VFP) || (lr & R_V7M_EXCRET_FTYPE_MASK)) {
3395
+ sig |= 1;
3396
+ }
3397
+ return sig;
3398
+}
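For reference, the signature computed above is the fixed pattern 0xFEFA125A with bit 0 set when no FP context is stacked in the callee frame (no VFP, or FType set in the EXC_RETURN value). The same computation with plain types:

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t integrity_sig(bool have_vfp, bool excret_ftype)
    {
        uint32_t sig = 0xfefa125a; /* bit 0 = "no FP state in the frame" */

        if (!have_vfp || excret_ftype) {
            sig |= 1;
        }
        return sig;
    }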
3399
+
3400
+static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
3401
+ bool ignore_faults)
3402
+{
3403
+ /*
3404
+ * For v8M, push the callee-saves register part of the stack frame.
3405
+ * Compare the v8M pseudocode PushCalleeStack().
3406
+ * In the tailchaining case this may not be the current stack.
3407
+ */
3408
+ CPUARMState *env = &cpu->env;
3409
+ uint32_t *frame_sp_p;
3410
+ uint32_t frameptr;
3411
+ ARMMMUIdx mmu_idx;
3412
+ bool stacked_ok;
3413
+ uint32_t limit;
3414
+ bool want_psp;
3415
+ uint32_t sig;
3416
+ StackingMode smode = ignore_faults ? STACK_IGNFAULTS : STACK_NORMAL;
3417
+
3418
+ if (dotailchain) {
3419
+ bool mode = lr & R_V7M_EXCRET_MODE_MASK;
3420
+ bool priv = !(env->v7m.control[M_REG_S] & R_V7M_CONTROL_NPRIV_MASK) ||
3421
+ !mode;
3422
+
3423
+ mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, M_REG_S, priv);
3424
+ frame_sp_p = get_v7m_sp_ptr(env, M_REG_S, mode,
3425
+ lr & R_V7M_EXCRET_SPSEL_MASK);
3426
+ want_psp = mode && (lr & R_V7M_EXCRET_SPSEL_MASK);
3427
+ if (want_psp) {
3428
+ limit = env->v7m.psplim[M_REG_S];
3429
+ } else {
3430
+ limit = env->v7m.msplim[M_REG_S];
3431
+ }
3432
+ } else {
3433
+ mmu_idx = arm_mmu_idx(env);
3434
+ frame_sp_p = &env->regs[13];
3435
+ limit = v7m_sp_limit(env);
3436
+ }
3437
+
3438
+ frameptr = *frame_sp_p - 0x28;
3439
+ if (frameptr < limit) {
3440
+ /*
3441
+ * Stack limit failure: set SP to the limit value, and generate
3442
+ * STKOF UsageFault. Stack pushes below the limit must not be
3443
+ * performed. It is IMPDEF whether pushes above the limit are
3444
+ * performed; we choose not to.
3445
+ */
3446
+ qemu_log_mask(CPU_LOG_INT,
3447
+ "...STKOF during callee-saves register stacking\n");
3448
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
3449
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
3450
+ env->v7m.secure);
3451
+ *frame_sp_p = limit;
3452
+ return true;
3453
+ }
3454
+
3455
+ /*
3456
+ * Write as much of the stack frame as we can. A write failure may
3457
+ * cause us to pend a derived exception.
3458
+ */
3459
+ sig = v7m_integrity_sig(env, lr);
3460
+ stacked_ok =
3461
+ v7m_stack_write(cpu, frameptr, sig, mmu_idx, smode) &&
3462
+ v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx, smode) &&
3463
+ v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx, smode) &&
3464
+ v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx, smode) &&
3465
+ v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx, smode) &&
3466
+ v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx, smode) &&
3467
+ v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx, smode) &&
3468
+ v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx, smode) &&
3469
+ v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx, smode);
3470
+
3471
+ /* Update SP regardless of whether any of the stack accesses failed. */
3472
+ *frame_sp_p = frameptr;
3473
+
3474
+ return !stacked_ok;
3475
+}
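The 0x28-byte callee-saves frame pushed above holds the integrity signature at offset 0, a word at offset 4 that this code leaves unwritten, and r4-r11 at offsets 0x8 through 0x24. A helper mapping a register number to its frame offset, for illustration only:

    #include <stdint.h>

    /* Offset of rN (N in 4..11) within the callee-saves frame:
     * 0x0 signature, 0x4 unwritten, 0x8..0x24 hold r4..r11.
     */
    static uint32_t callee_frame_offset(int regno)
    {
        return 0x8 + 4 * (regno - 4);
    }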
3476
+
3477
+static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
3478
+ bool ignore_stackfaults)
3479
+{
3480
+ /*
3481
+ * Do the "take the exception" parts of exception entry,
3482
+ * but not the pushing of state to the stack. This is
3483
+ * similar to the pseudocode ExceptionTaken() function.
3484
+ */
3485
+ CPUARMState *env = &cpu->env;
3486
+ uint32_t addr;
3487
+ bool targets_secure;
3488
+ int exc;
3489
+ bool push_failed = false;
3490
+
3491
+ armv7m_nvic_get_pending_irq_info(env->nvic, &exc, &targets_secure);
3492
+ qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n",
3493
+ targets_secure ? "secure" : "nonsecure", exc);
3494
+
3495
+ if (dotailchain) {
3496
+ /* Sanitize LR FType and PREFIX bits */
3497
+ if (!arm_feature(env, ARM_FEATURE_VFP)) {
3498
+ lr |= R_V7M_EXCRET_FTYPE_MASK;
3499
+ }
3500
+ lr = deposit32(lr, 24, 8, 0xff);
3501
+ }
3502
+
3503
+ if (arm_feature(env, ARM_FEATURE_V8)) {
3504
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
3505
+ (lr & R_V7M_EXCRET_S_MASK)) {
3506
+ /*
3507
+ * The background code (the owner of the registers in the
3508
+ * exception frame) is Secure. This means it may either already
3509
+ * have or now needs to push callee-saves registers.
3510
+ */
3511
+ if (targets_secure) {
3512
+ if (dotailchain && !(lr & R_V7M_EXCRET_ES_MASK)) {
3513
+ /*
3514
+ * We took an exception from Secure to NonSecure
3515
+ * (which means the callee-saved registers got stacked)
3516
+ * and are now tailchaining to a Secure exception.
3517
+ * Clear DCRS so eventual return from this Secure
3518
+ * exception unstacks the callee-saved registers.
3519
+ */
3520
+ lr &= ~R_V7M_EXCRET_DCRS_MASK;
3521
+ }
3522
+ } else {
3523
+ /*
3524
+ * We're going to a non-secure exception; push the
3525
+ * callee-saves registers to the stack now, if they're
3526
+ * not already saved.
3527
+ */
3528
+ if (lr & R_V7M_EXCRET_DCRS_MASK &&
3529
+ !(dotailchain && !(lr & R_V7M_EXCRET_ES_MASK))) {
3530
+ push_failed = v7m_push_callee_stack(cpu, lr, dotailchain,
3531
+ ignore_stackfaults);
3532
+ }
3533
+ lr |= R_V7M_EXCRET_DCRS_MASK;
3534
+ }
3535
+ }
3536
+
3537
+ lr &= ~R_V7M_EXCRET_ES_MASK;
3538
+ if (targets_secure || !arm_feature(env, ARM_FEATURE_M_SECURITY)) {
3539
+ lr |= R_V7M_EXCRET_ES_MASK;
3540
+ }
3541
+ lr &= ~R_V7M_EXCRET_SPSEL_MASK;
3542
+ if (env->v7m.control[targets_secure] & R_V7M_CONTROL_SPSEL_MASK) {
3543
+ lr |= R_V7M_EXCRET_SPSEL_MASK;
3544
+ }
3545
+
3546
+ /*
3547
+ * Clear registers if necessary to prevent non-secure exception
3548
+ * code being able to see register values from secure code.
3549
+ * Where register values become architecturally UNKNOWN we leave
3550
+ * them with their previous values.
3551
+ */
3552
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
3553
+ if (!targets_secure) {
3554
+ /*
3555
+ * Always clear the caller-saved registers (they have been
3556
+ * pushed to the stack earlier in v7m_push_stack()).
3557
+ * Clear callee-saved registers if the background code is
3558
+ * Secure (in which case these regs were saved in
3559
+ * v7m_push_callee_stack()).
3560
+ */
3561
+ int i;
3562
+
3563
+ for (i = 0; i < 13; i++) {
3564
+ /* r4..r11 are callee-saves, zero only if EXCRET.S == 1 */
3565
+ if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) {
3566
+ env->regs[i] = 0;
3567
+ }
3568
+ }
3569
+ /* Clear EAPSR */
3570
+ xpsr_write(env, 0, XPSR_NZCV | XPSR_Q | XPSR_GE | XPSR_IT);
3571
+ }
3572
+ }
3573
+ }
3574
+
3575
+ if (push_failed && !ignore_stackfaults) {
3576
+ /*
3577
+ * Derived exception on callee-saves register stacking:
3578
+ * we might now want to take a different exception which
3579
+ * targets a different security state, so try again from the top.
3580
+ */
3581
+ qemu_log_mask(CPU_LOG_INT,
3582
+ "...derived exception on callee-saves register stacking");
3583
+ v7m_exception_taken(cpu, lr, true, true);
3584
+ return;
3585
+ }
3586
+
3587
+ if (!arm_v7m_load_vector(cpu, exc, targets_secure, &addr)) {
3588
+ /* Vector load failed: derived exception */
3589
+ qemu_log_mask(CPU_LOG_INT, "...derived exception on vector table load");
3590
+ v7m_exception_taken(cpu, lr, true, true);
3591
+ return;
3592
+ }
3593
+
3594
+ /*
3595
+ * Now we've done everything that might cause a derived exception
3596
+ * we can go ahead and activate whichever exception we're going to
3597
+ * take (which might now be the derived exception).
3598
+ */
3599
+ armv7m_nvic_acknowledge_irq(env->nvic);
3600
+
3601
+ /* Switch to target security state -- must do this before writing SPSEL */
3602
+ switch_v7m_security_state(env, targets_secure);
3603
+ write_v7m_control_spsel(env, 0);
3604
+ arm_clear_exclusive(env);
3605
+ /* Clear SFPA and FPCA (has no effect if no FPU) */
3606
+ env->v7m.control[M_REG_S] &=
3607
+ ~(R_V7M_CONTROL_FPCA_MASK | R_V7M_CONTROL_SFPA_MASK);
3608
+ /* Clear IT bits */
3609
+ env->condexec_bits = 0;
3610
+ env->regs[14] = lr;
3611
+ env->regs[15] = addr & 0xfffffffe;
3612
+ env->thumb = addr & 1;
3613
+}
3614
+
3615
+static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
3616
+ bool apply_splim)
3617
+{
3618
+ /*
3619
+ * Like the pseudocode UpdateFPCCR: save state in FPCAR and FPCCR
3620
+ * that we will need later in order to do lazy FP reg stacking.
3621
+ */
3622
+ bool is_secure = env->v7m.secure;
3623
+ void *nvic = env->nvic;
3624
+ /*
3625
+ * Some bits are unbanked and live always in fpccr[M_REG_S]; some bits
3626
+ * are banked and we want to update the bit in the bank for the
3627
+ * current security state; and in one case we want to specifically
3628
+ * update the NS banked version of a bit even if we are secure.
3629
+ */
3630
+ uint32_t *fpccr_s = &env->v7m.fpccr[M_REG_S];
3631
+ uint32_t *fpccr_ns = &env->v7m.fpccr[M_REG_NS];
3632
+ uint32_t *fpccr = &env->v7m.fpccr[is_secure];
3633
+ bool hfrdy, bfrdy, mmrdy, ns_ufrdy, s_ufrdy, sfrdy, monrdy;
3634
+
3635
+ env->v7m.fpcar[is_secure] = frameptr & ~0x7;
3636
+
3637
+ if (apply_splim && arm_feature(env, ARM_FEATURE_V8)) {
3638
+ bool splimviol;
3639
+ uint32_t splim = v7m_sp_limit(env);
3640
+ bool ign = armv7m_nvic_neg_prio_requested(nvic, is_secure) &&
3641
+ (env->v7m.ccr[is_secure] & R_V7M_CCR_STKOFHFNMIGN_MASK);
3642
+
3643
+ splimviol = !ign && frameptr < splim;
3644
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, SPLIMVIOL, splimviol);
3645
+ }
3646
+
3647
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, LSPACT, 1);
3648
+
3649
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, S, is_secure);
3650
+
3651
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, USER, arm_current_el(env) == 0);
3652
+
3653
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, THREAD,
3654
+ !arm_v7m_is_handler_mode(env));
3655
+
3656
+ hfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_HARD, false);
3657
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, HFRDY, hfrdy);
3658
+
3659
+ bfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_BUS, false);
3660
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, BFRDY, bfrdy);
3661
+
3662
+ mmrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_MEM, is_secure);
3663
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, MMRDY, mmrdy);
3664
+
3665
+ ns_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, false);
3666
+ *fpccr_ns = FIELD_DP32(*fpccr_ns, V7M_FPCCR, UFRDY, ns_ufrdy);
3667
+
3668
+ monrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_DEBUG, false);
3669
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, MONRDY, monrdy);
3670
+
3671
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
3672
+ s_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, true);
3673
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, UFRDY, s_ufrdy);
3674
+
3675
+ sfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_SECURE, false);
3676
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, SFRDY, sfrdy);
3677
+ }
3678
+}
3679
+
3680
+void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
3681
+{
3682
+ /* fptr is the value of Rn, the frame pointer we store the FP regs to */
3683
+ bool s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
3684
+ bool lspact = env->v7m.fpccr[s] & R_V7M_FPCCR_LSPACT_MASK;
3685
+
3686
+ assert(env->v7m.secure);
3687
+
3688
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
3689
+ return;
3690
+ }
3691
+
3692
+ /* Check access to the coprocessor is permitted */
3693
+ if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
3694
+ raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
3695
+ }
3696
+
3697
+ if (lspact) {
3698
+ /* LSPACT should not be active when there is active FP state */
3699
+ raise_exception_ra(env, EXCP_LSERR, 0, 1, GETPC());
3700
+ }
3701
+
3702
+ if (fptr & 7) {
3703
+ raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
3704
+ }
3705
+
3706
+ /*
3707
+ * Note that we do not use v7m_stack_write() here, because the
3708
+ * accesses should not set the FSR bits for stacking errors if they
3709
+ * fail. (In pseudocode terms, they are AccType_NORMAL, not AccType_STACK
3710
+ * or AccType_LAZYFP). Faults in cpu_stl_data() will throw exceptions
3711
+ * and longjmp out.
3712
+ */
3713
+ if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
3714
+ bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
3715
+ int i;
3716
+
3717
+ for (i = 0; i < (ts ? 32 : 16); i += 2) {
3718
+ uint64_t dn = *aa32_vfp_dreg(env, i / 2);
3719
+ uint32_t faddr = fptr + 4 * i;
3720
+ uint32_t slo = extract64(dn, 0, 32);
3721
+ uint32_t shi = extract64(dn, 32, 32);
3722
+
3723
+ if (i >= 16) {
3724
+ faddr += 8; /* skip the slot for the FPSCR */
3725
+ }
3726
+ cpu_stl_data(env, faddr, slo);
3727
+ cpu_stl_data(env, faddr + 4, shi);
3728
+ }
3729
+ cpu_stl_data(env, fptr + 0x40, vfp_get_fpscr(env));
3730
+
3731
+ /*
3732
+ * If TS is 0 then s0 to s15 and FPSCR are UNKNOWN; we choose to
3733
+ * leave them unchanged, matching our choice in v7m_preserve_fp_state.
3734
+ */
3735
+ if (ts) {
3736
+ for (i = 0; i < 32; i += 2) {
3737
+ *aa32_vfp_dreg(env, i / 2) = 0;
3738
+ }
3739
+ vfp_set_fpscr(env, 0);
3740
+ }
3741
+ } else {
3742
+ v7m_update_fpccr(env, fptr, false);
3743
+ }
3744
+
3745
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
3746
+}
3747
+
3748
+void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
3749
+{
3750
+ /* fptr is the value of Rn, the frame pointer we load the FP regs from */
3751
+ assert(env->v7m.secure);
3752
+
3753
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
3754
+ return;
3755
+ }
3756
+
3757
+ /* Check access to the coprocessor is permitted */
3758
+ if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
3759
+ raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
3760
+ }
3761
+
3762
+ if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
3763
+ /* State in FP is still valid */
3764
+ env->v7m.fpccr[M_REG_S] &= ~R_V7M_FPCCR_LSPACT_MASK;
3765
+ } else {
3766
+ bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
3767
+ int i;
3768
+ uint32_t fpscr;
3769
+
3770
+ if (fptr & 7) {
3771
+ raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
3772
+ }
3773
+
3774
+ for (i = 0; i < (ts ? 32 : 16); i += 2) {
3775
+ uint32_t slo, shi;
3776
+ uint64_t dn;
3777
+ uint32_t faddr = fptr + 4 * i;
3778
+
3779
+ if (i >= 16) {
3780
+ faddr += 8; /* skip the slot for the FPSCR */
3781
+ }
3782
+
3783
+ slo = cpu_ldl_data(env, faddr);
3784
+ shi = cpu_ldl_data(env, faddr + 4);
3785
+
3786
+ dn = (uint64_t) shi << 32 | slo;
3787
+ *aa32_vfp_dreg(env, i / 2) = dn;
3788
+ }
3789
+ fpscr = cpu_ldl_data(env, fptr + 0x40);
3790
+ vfp_set_fpscr(env, fpscr);
3791
+ }
3792
+
3793
+ env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
3794
+}
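VLLDM above rebuilds each D register from two consecutive 32-bit loads, low word first, using the same skip-the-FPSCR-slot offset adjustment as the save path. The word-pair recombination in isolation:

    #include <stdint.h>

    /* Combine the low/high words loaded from the FP frame into one D register. */
    static uint64_t dreg_from_words(uint32_t slo, uint32_t shi)
    {
        return ((uint64_t)shi << 32) | slo;
    }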
3795
+
3796
+static bool v7m_push_stack(ARMCPU *cpu)
3797
+{
3798
+ /*
3799
+ * Do the "set up stack frame" part of exception entry,
3800
+ * similar to pseudocode PushStack().
3801
+ * Return true if we generate a derived exception (and so
3802
+ * should ignore further stack faults trying to process
3803
+ * that derived exception.)
3804
+ */
3805
+ bool stacked_ok = true, limitviol = false;
3806
+ CPUARMState *env = &cpu->env;
3807
+ uint32_t xpsr = xpsr_read(env);
3808
+ uint32_t frameptr = env->regs[13];
3809
+ ARMMMUIdx mmu_idx = arm_mmu_idx(env);
3810
+ uint32_t framesize;
3811
+ bool nsacr_cp10 = extract32(env->v7m.nsacr, 10, 1);
3812
+
3813
+ if ((env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) &&
3814
+ (env->v7m.secure || nsacr_cp10)) {
3815
+ if (env->v7m.secure &&
3816
+ env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK) {
3817
+ framesize = 0xa8;
3818
+ } else {
3819
+ framesize = 0x68;
3820
+ }
3821
+ } else {
3822
+ framesize = 0x20;
3823
+ }
3824
+
3825
+ /* Align stack pointer if the guest wants that */
3826
+ if ((frameptr & 4) &&
3827
+ (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKALIGN_MASK)) {
3828
+ frameptr -= 4;
3829
+ xpsr |= XPSR_SPREALIGN;
3830
+ }
3831
+
3832
+ xpsr &= ~XPSR_SFPA;
3833
+ if (env->v7m.secure &&
3834
+ (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
3835
+ xpsr |= XPSR_SFPA;
3836
+ }
3837
+
3838
+ frameptr -= framesize;
3839
+
3840
+ if (arm_feature(env, ARM_FEATURE_V8)) {
3841
+ uint32_t limit = v7m_sp_limit(env);
3842
+
3843
+ if (frameptr < limit) {
3844
+ /*
3845
+ * Stack limit failure: set SP to the limit value, and generate
3846
+ * STKOF UsageFault. Stack pushes below the limit must not be
3847
+ * performed. It is IMPDEF whether pushes above the limit are
3848
+ * performed; we choose not to.
3849
+ */
3850
+ qemu_log_mask(CPU_LOG_INT,
3851
+ "...STKOF during stacking\n");
3852
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
3853
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
3854
+ env->v7m.secure);
3855
+ env->regs[13] = limit;
3856
+ /*
3857
+ * We won't try to perform any further memory accesses but
3858
+ * we must continue through the following code to check for
3859
+ * permission faults during FPU state preservation, and we
3860
+ * must update FPCCR if lazy stacking is enabled.
3861
+ */
3862
+ limitviol = true;
3863
+ stacked_ok = false;
3864
+ }
3865
+ }
3866
+
3867
+ /*
3868
+ * Write as much of the stack frame as we can. If we fail a stack
3869
+ * write this will result in a derived exception being pended
3870
+ * (which may be taken in preference to the one we started with
3871
+ * if it has higher priority).
3872
+ */
3873
+ stacked_ok = stacked_ok &&
3874
+ v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, STACK_NORMAL) &&
3875
+ v7m_stack_write(cpu, frameptr + 4, env->regs[1],
3876
+ mmu_idx, STACK_NORMAL) &&
3877
+ v7m_stack_write(cpu, frameptr + 8, env->regs[2],
3878
+ mmu_idx, STACK_NORMAL) &&
3879
+ v7m_stack_write(cpu, frameptr + 12, env->regs[3],
3880
+ mmu_idx, STACK_NORMAL) &&
3881
+ v7m_stack_write(cpu, frameptr + 16, env->regs[12],
3882
+ mmu_idx, STACK_NORMAL) &&
3883
+ v7m_stack_write(cpu, frameptr + 20, env->regs[14],
3884
+ mmu_idx, STACK_NORMAL) &&
3885
+ v7m_stack_write(cpu, frameptr + 24, env->regs[15],
3886
+ mmu_idx, STACK_NORMAL) &&
3887
+ v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, STACK_NORMAL);
3888
+
3889
+ if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) {
3890
+ /* FPU is active, try to save its registers */
3891
+ bool fpccr_s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
3892
+ bool lspact = env->v7m.fpccr[fpccr_s] & R_V7M_FPCCR_LSPACT_MASK;
3893
+
3894
+ if (lspact && arm_feature(env, ARM_FEATURE_M_SECURITY)) {
3895
+ qemu_log_mask(CPU_LOG_INT,
3896
+ "...SecureFault because LSPACT and FPCA both set\n");
3897
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
3898
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
3899
+ } else if (!env->v7m.secure && !nsacr_cp10) {
3900
+ qemu_log_mask(CPU_LOG_INT,
3901
+ "...Secure UsageFault with CFSR.NOCP because "
3902
+ "NSACR.CP10 prevents stacking FP regs\n");
3903
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
3904
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
3905
+ } else {
3906
+ if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
3907
+ /* Lazy stacking disabled, save registers now */
3908
+ int i;
3909
+ bool cpacr_pass = v7m_cpacr_pass(env, env->v7m.secure,
3910
+ arm_current_el(env) != 0);
3911
+
3912
+ if (stacked_ok && !cpacr_pass) {
3913
+ /*
3914
+ * Take UsageFault if CPACR forbids access. The pseudocode
3915
+ * here does a full CheckCPEnabled() but we know the NSACR
3916
+ * check can never fail as we have already handled that.
3917
+ */
3918
+ qemu_log_mask(CPU_LOG_INT,
3919
+ "...UsageFault with CFSR.NOCP because "
3920
+ "CPACR.CP10 prevents stacking FP regs\n");
3921
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
3922
+ env->v7m.secure);
3923
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
3924
+ stacked_ok = false;
3925
+ }
3926
+
3927
+ for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
3928
+ uint64_t dn = *aa32_vfp_dreg(env, i / 2);
3929
+ uint32_t faddr = frameptr + 0x20 + 4 * i;
3930
+ uint32_t slo = extract64(dn, 0, 32);
3931
+ uint32_t shi = extract64(dn, 32, 32);
3932
+
3933
+ if (i >= 16) {
3934
+ faddr += 8; /* skip the slot for the FPSCR */
3935
+ }
3936
+ stacked_ok = stacked_ok &&
3937
+ v7m_stack_write(cpu, faddr, slo,
3938
+ mmu_idx, STACK_NORMAL) &&
3939
+ v7m_stack_write(cpu, faddr + 4, shi,
3940
+ mmu_idx, STACK_NORMAL);
3941
+ }
3942
+ stacked_ok = stacked_ok &&
3943
+ v7m_stack_write(cpu, frameptr + 0x60,
3944
+ vfp_get_fpscr(env), mmu_idx, STACK_NORMAL);
3945
+ if (cpacr_pass) {
3946
+ for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
3947
+ *aa32_vfp_dreg(env, i / 2) = 0;
3948
+ }
3949
+ vfp_set_fpscr(env, 0);
3950
+ }
3951
+ } else {
3952
+ /* Lazy stacking enabled, save necessary info to stack later */
3953
+ v7m_update_fpccr(env, frameptr + 0x20, true);
3954
+ }
3955
+ }
3956
+ }
3957
+
3958
+ /*
3959
+ * If we broke a stack limit then SP was already updated earlier;
3960
+ * otherwise we update SP regardless of whether any of the stack
3961
+ * accesses failed or we took some other kind of fault.
3962
+ */
3963
+ if (!limitviol) {
3964
+ env->regs[13] = frameptr;
3965
+ }
3966
+
3967
+ return !stacked_ok;
3968
+}
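The frame size chosen at the top of v7m_push_stack() is 0x20 for a basic frame, 0x68 when an FP context must be (or may lazily be) saved, and 0xa8 when the Secure TS bit additionally reserves space for s16-s31. That selection, pulled out on its own as a sketch:

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t exception_frame_size(bool fpca, bool secure,
                                         bool nsacr_cp10, bool fpccr_ts)
    {
        if (fpca && (secure || nsacr_cp10)) {
            return (secure && fpccr_ts) ? 0xa8 : 0x68;
        }
        return 0x20;
    }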
3969
+
3970
+static void do_v7m_exception_exit(ARMCPU *cpu)
3971
+{
3972
+ CPUARMState *env = &cpu->env;
3973
+ uint32_t excret;
3974
+ uint32_t xpsr, xpsr_mask;
3975
+ bool ufault = false;
3976
+ bool sfault = false;
3977
+ bool return_to_sp_process;
3978
+ bool return_to_handler;
3979
+ bool rettobase = false;
3980
+ bool exc_secure = false;
3981
+ bool return_to_secure;
3982
+ bool ftype;
3983
+ bool restore_s16_s31;
3984
+
3985
+ /*
3986
+ * If we're not in Handler mode then jumps to magic exception-exit
3987
+ * addresses don't have magic behaviour. However for the v8M
3988
+ * security extensions the magic secure-function-return has to
3989
+ * work in thread mode too, so to avoid doing an extra check in
3990
+ * the generated code we allow exception-exit magic to also cause the
3991
+ * internal exception and bring us here in thread mode. Correct code
3992
+ * will never try to do this (the following insn fetch will always
3993
+ * fault) so the overhead of having taken an unnecessary exception
3994
+ * doesn't matter.
3995
+ */
3996
+ if (!arm_v7m_is_handler_mode(env)) {
3997
+ return;
3998
+ }
3999
+
4000
+ /*
4001
+ * In the spec pseudocode ExceptionReturn() is called directly
4002
+ * from BXWritePC() and gets the full target PC value including
4003
+ * bit zero. In QEMU's implementation we treat it as a normal
4004
+ * jump-to-register (which is then caught later on), and so split
4005
+ * the target value up between env->regs[15] and env->thumb in
4006
+ * gen_bx(). Reconstitute it.
4007
+ */
4008
+ excret = env->regs[15];
4009
+ if (env->thumb) {
4010
+ excret |= 1;
4011
+ }
4012
+
4013
+ qemu_log_mask(CPU_LOG_INT, "Exception return: magic PC %" PRIx32
4014
+ " previous exception %d\n",
4015
+ excret, env->v7m.exception);
4016
+
4017
+ if ((excret & R_V7M_EXCRET_RES1_MASK) != R_V7M_EXCRET_RES1_MASK) {
4018
+ qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero high bits in exception "
4019
+ "exit PC value 0x%" PRIx32 " are UNPREDICTABLE\n",
4020
+ excret);
4021
+ }
4022
+
4023
+ ftype = excret & R_V7M_EXCRET_FTYPE_MASK;
4024
+
4025
+ if (!arm_feature(env, ARM_FEATURE_VFP) && !ftype) {
4026
+ qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero FTYPE in exception "
4027
+ "exit PC value 0x%" PRIx32 " is UNPREDICTABLE "
4028
+ "if FPU not present\n",
4029
+ excret);
4030
+ ftype = true;
4031
+ }
4032
+
4033
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
4034
+ /*
4035
+ * EXC_RETURN.ES validation check (R_SMFL). We must do this before
4036
+ * we pick which FAULTMASK to clear.
4037
+ */
4038
+ if (!env->v7m.secure &&
4039
+ ((excret & R_V7M_EXCRET_ES_MASK) ||
4040
+ !(excret & R_V7M_EXCRET_DCRS_MASK))) {
4041
+ sfault = 1;
4042
+ /* For all other purposes, treat ES as 0 (R_HXSR) */
4043
+ excret &= ~R_V7M_EXCRET_ES_MASK;
4044
+ }
4045
+ exc_secure = excret & R_V7M_EXCRET_ES_MASK;
4046
+ }
4047
+
4048
+ if (env->v7m.exception != ARMV7M_EXCP_NMI) {
4049
+ /*
4050
+ * Auto-clear FAULTMASK on return from other than NMI.
4051
+ * If the security extension is implemented then this only
4052
+ * happens if the raw execution priority is >= 0; the
4053
+ * value of the ES bit in the exception return value indicates
4054
+ * which security state's faultmask to clear. (v8M ARM ARM R_KBNF.)
4055
+ */
4056
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
4057
+ if (armv7m_nvic_raw_execution_priority(env->nvic) >= 0) {
4058
+ env->v7m.faultmask[exc_secure] = 0;
4059
+ }
4060
+ } else {
4061
+ env->v7m.faultmask[M_REG_NS] = 0;
4062
+ }
4063
+ }
4064
+
4065
+ switch (armv7m_nvic_complete_irq(env->nvic, env->v7m.exception,
4066
+ exc_secure)) {
4067
+ case -1:
4068
+ /* attempt to exit an exception that isn't active */
4069
+ ufault = true;
4070
+ break;
4071
+ case 0:
4072
+ /* still an irq active now */
4073
+ break;
4074
+ case 1:
4075
+ /*
4076
+ * We returned to base exception level, no nesting.
4077
+ * (In the pseudocode this is written using "NestedActivation != 1"
4078
+ * where we have 'rettobase == false'.)
4079
+ */
4080
+ rettobase = true;
4081
+ break;
4082
+ default:
4083
+ g_assert_not_reached();
4084
+ }
4085
+
4086
+ return_to_handler = !(excret & R_V7M_EXCRET_MODE_MASK);
4087
+ return_to_sp_process = excret & R_V7M_EXCRET_SPSEL_MASK;
4088
+ return_to_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
4089
+ (excret & R_V7M_EXCRET_S_MASK);
4090
+
4091
+ if (arm_feature(env, ARM_FEATURE_V8)) {
4092
+ if (!arm_feature(env, ARM_FEATURE_M_SECURITY)) {
4093
+ /*
4094
+ * UNPREDICTABLE if S == 1 or DCRS == 0 or ES == 1 (R_XLCP);
4095
+ * we choose to take the UsageFault.
4096
+ */
4097
+ if ((excret & R_V7M_EXCRET_S_MASK) ||
4098
+ (excret & R_V7M_EXCRET_ES_MASK) ||
4099
+ !(excret & R_V7M_EXCRET_DCRS_MASK)) {
4100
+ ufault = true;
4101
+ }
4102
+ }
4103
+ if (excret & R_V7M_EXCRET_RES0_MASK) {
4104
+ ufault = true;
4105
+ }
4106
+ } else {
4107
+ /* For v7M we only recognize certain combinations of the low bits */
4108
+ switch (excret & 0xf) {
4109
+ case 1: /* Return to Handler */
4110
+ break;
4111
+ case 13: /* Return to Thread using Process stack */
4112
+ case 9: /* Return to Thread using Main stack */
4113
+ /*
4114
+ * We only need to check NONBASETHRDENA for v7M, because in
4115
+ * v8M this bit does not exist (it is RES1).
4116
+ */
4117
+ if (!rettobase &&
4118
+ !(env->v7m.ccr[env->v7m.secure] &
4119
+ R_V7M_CCR_NONBASETHRDENA_MASK)) {
4120
+ ufault = true;
4121
+ }
4122
+ break;
4123
+ default:
4124
+ ufault = true;
4125
+ }
4126
+ }
4127
+
4128
+ /*
4129
+ * Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
4130
+ * Handler mode (and will be until we write the new XPSR.Interrupt
4131
+ * field) this does not switch around the current stack pointer.
4132
+ * We must do this before we do any kind of tailchaining, including
4133
+ * for the derived exceptions on integrity check failures, or we will
4134
+ * give the guest an incorrect EXCRET.SPSEL value on exception entry.
4135
+ */
4136
+ write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
4137
+
4138
+ /*
4139
+ * Clear scratch FP values left in caller saved registers; this
4140
+ * must happen before any kind of tail chaining.
4141
+ */
4142
+ if ((env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_CLRONRET_MASK) &&
4143
+ (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
4144
+ if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
4145
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
4146
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
4147
+ qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
4148
+ "stackframe: error during lazy state deactivation\n");
4149
+ v7m_exception_taken(cpu, excret, true, false);
4150
+ return;
4151
+ } else {
4152
+ /* Clear s0..s15 and FPSCR */
4153
+ int i;
4154
+
4155
+ for (i = 0; i < 16; i += 2) {
4156
+ *aa32_vfp_dreg(env, i / 2) = 0;
4157
+ }
4158
+ vfp_set_fpscr(env, 0);
4159
+ }
4160
+ }
4161
+
4162
+ if (sfault) {
4163
+ env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
4164
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
4165
+ qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
4166
+ "stackframe: failed EXC_RETURN.ES validity check\n");
4167
+ v7m_exception_taken(cpu, excret, true, false);
4168
+ return;
4169
+ }
4170
+
4171
+ if (ufault) {
4172
+ /*
4173
+ * Bad exception return: instead of popping the exception
4174
+ * stack, directly take a usage fault on the current stack.
4175
+ */
4176
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
4177
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
4178
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
4179
+ "stackframe: failed exception return integrity check\n");
4180
+ v7m_exception_taken(cpu, excret, true, false);
4181
+ return;
4182
+ }
4183
+
4184
+ /*
4185
+ * Tailchaining: if there is currently a pending exception that
4186
+ * is high enough priority to preempt execution at the level we're
4187
+ * about to return to, then just directly take that exception now,
4188
+ * avoiding an unstack-and-then-stack. Note that now we have
4189
+ * deactivated the previous exception by calling armv7m_nvic_complete_irq()
4190
+ * our current execution priority is already the execution priority we are
4191
+ * returning to -- none of the state we would unstack or set based on
4192
+ * the EXCRET value affects it.
4193
+ */
4194
+ if (armv7m_nvic_can_take_pending_exception(env->nvic)) {
4195
+ qemu_log_mask(CPU_LOG_INT, "...tailchaining to pending exception\n");
4196
+ v7m_exception_taken(cpu, excret, true, false);
4197
+ return;
4198
+ }
4199
+
4200
+ switch_v7m_security_state(env, return_to_secure);
4201
+
4202
+ {
4203
+ /*
4204
+ * The stack pointer we should be reading the exception frame from
4205
+ * depends on bits in the magic exception return type value (and
4206
+ * for v8M isn't necessarily the stack pointer we will eventually
4207
+ * end up resuming execution with). Get a pointer to the location
4208
+ * in the CPU state struct where the SP we need is currently being
4209
+ * stored; we will use and modify it in place.
4210
+ * We use this limited C variable scope so we don't accidentally
4211
+ * use 'frame_sp_p' after we do something that makes it invalid.
4212
+ */
4213
+ uint32_t *frame_sp_p = get_v7m_sp_ptr(env,
4214
+ return_to_secure,
4215
+ !return_to_handler,
4216
+ return_to_sp_process);
4217
+ uint32_t frameptr = *frame_sp_p;
4218
+ bool pop_ok = true;
4219
+ ARMMMUIdx mmu_idx;
4220
+ bool return_to_priv = return_to_handler ||
4221
+ !(env->v7m.control[return_to_secure] & R_V7M_CONTROL_NPRIV_MASK);
4222
+
4223
+ mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, return_to_secure,
4224
+ return_to_priv);
4225
+
4226
+ if (!QEMU_IS_ALIGNED(frameptr, 8) &&
4227
+ arm_feature(env, ARM_FEATURE_V8)) {
4228
+ qemu_log_mask(LOG_GUEST_ERROR,
4229
+ "M profile exception return with non-8-aligned SP "
4230
+ "for destination state is UNPREDICTABLE\n");
4231
+ }
4232
+
4233
+ /* Do we need to pop callee-saved registers? */
4234
+ if (return_to_secure &&
4235
+ ((excret & R_V7M_EXCRET_ES_MASK) == 0 ||
4236
+ (excret & R_V7M_EXCRET_DCRS_MASK) == 0)) {
4237
+ uint32_t actual_sig;
4238
+
4239
+ pop_ok = v7m_stack_read(cpu, &actual_sig, frameptr, mmu_idx);
4240
+
4241
+ if (pop_ok && v7m_integrity_sig(env, excret) != actual_sig) {
4242
+ /* Take a SecureFault on the current stack */
4243
+ env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
4244
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
4245
+ qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
4246
+ "stackframe: failed exception return integrity "
4247
+ "signature check\n");
4248
+ v7m_exception_taken(cpu, excret, true, false);
4249
+ return;
4250
+ }
4251
+
4252
+ pop_ok = pop_ok &&
4253
+ v7m_stack_read(cpu, &env->regs[4], frameptr + 0x8, mmu_idx) &&
4254
+ v7m_stack_read(cpu, &env->regs[5], frameptr + 0xc, mmu_idx) &&
4255
+ v7m_stack_read(cpu, &env->regs[6], frameptr + 0x10, mmu_idx) &&
4256
+ v7m_stack_read(cpu, &env->regs[7], frameptr + 0x14, mmu_idx) &&
4257
+ v7m_stack_read(cpu, &env->regs[8], frameptr + 0x18, mmu_idx) &&
4258
+ v7m_stack_read(cpu, &env->regs[9], frameptr + 0x1c, mmu_idx) &&
4259
+ v7m_stack_read(cpu, &env->regs[10], frameptr + 0x20, mmu_idx) &&
4260
+ v7m_stack_read(cpu, &env->regs[11], frameptr + 0x24, mmu_idx);
4261
+
4262
+ frameptr += 0x28;
4263
+ }
4264
+
4265
+ /* Pop registers */
4266
+ pop_ok = pop_ok &&
4267
+ v7m_stack_read(cpu, &env->regs[0], frameptr, mmu_idx) &&
4268
+ v7m_stack_read(cpu, &env->regs[1], frameptr + 0x4, mmu_idx) &&
4269
+ v7m_stack_read(cpu, &env->regs[2], frameptr + 0x8, mmu_idx) &&
4270
+ v7m_stack_read(cpu, &env->regs[3], frameptr + 0xc, mmu_idx) &&
4271
+ v7m_stack_read(cpu, &env->regs[12], frameptr + 0x10, mmu_idx) &&
4272
+ v7m_stack_read(cpu, &env->regs[14], frameptr + 0x14, mmu_idx) &&
4273
+ v7m_stack_read(cpu, &env->regs[15], frameptr + 0x18, mmu_idx) &&
4274
+ v7m_stack_read(cpu, &xpsr, frameptr + 0x1c, mmu_idx);
4275
+
4276
+ if (!pop_ok) {
4277
+ /*
4278
+ * v7m_stack_read() pended a fault, so take it (as a tail
4279
+ * chained exception on the same stack frame)
4280
+ */
4281
+ qemu_log_mask(CPU_LOG_INT, "...derived exception on unstacking\n");
4282
+ v7m_exception_taken(cpu, excret, true, false);
4283
+ return;
4284
+ }
4285
+
4286
+ /*
4287
+ * Returning from an exception with a PC with bit 0 set is defined
4288
+ * behaviour on v8M (bit 0 is ignored), but for v7M it was specified
4289
+ * to be UNPREDICTABLE. In practice actual v7M hardware seems to ignore
4290
+ * the lsbit, and there are several RTOSes out there which incorrectly
4291
+ * assume the r15 in the stack frame should be a Thumb-style "lsbit
4292
+ * indicates ARM/Thumb" value, so ignore the bit on v7M as well, but
4293
+ * complain about the badly behaved guest.
4294
+ */
4295
+ if (env->regs[15] & 1) {
4296
+ env->regs[15] &= ~1U;
4297
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
4298
+ qemu_log_mask(LOG_GUEST_ERROR,
4299
+ "M profile return from interrupt with misaligned "
4300
+ "PC is UNPREDICTABLE on v7M\n");
4301
+ }
4302
+ }
4303
+
4304
+ if (arm_feature(env, ARM_FEATURE_V8)) {
4305
+ /*
4306
+ * For v8M we have to check whether the xPSR exception field
4307
+ * matches the EXCRET value for return to handler/thread
4308
+ * before we commit to changing the SP and xPSR.
4309
+ */
4310
+ bool will_be_handler = (xpsr & XPSR_EXCP) != 0;
4311
+ if (return_to_handler != will_be_handler) {
4312
+ /*
4313
+ * Take an INVPC UsageFault on the current stack.
4314
+ * By this point we will have switched to the security state
4315
+ * for the background state, so this UsageFault will target
4316
+ * that state.
4317
+ */
4318
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
4319
+ env->v7m.secure);
4320
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
4321
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
4322
+ "stackframe: failed exception return integrity "
4323
+ "check\n");
4324
+ v7m_exception_taken(cpu, excret, true, false);
4325
+ return;
4326
+ }
4327
+ }
4328
+
4329
+ if (!ftype) {
4330
+ /* FP present and we need to handle it */
4331
+ if (!return_to_secure &&
4332
+ (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK)) {
4333
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
4334
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
4335
+ qemu_log_mask(CPU_LOG_INT,
4336
+ "...taking SecureFault on existing stackframe: "
4337
+ "Secure LSPACT set but exception return is "
4338
+ "not to secure state\n");
4339
+ v7m_exception_taken(cpu, excret, true, false);
4340
+ return;
4341
+ }
4342
+
4343
+ restore_s16_s31 = return_to_secure &&
4344
+ (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
4345
+
4346
+ if (env->v7m.fpccr[return_to_secure] & R_V7M_FPCCR_LSPACT_MASK) {
4347
+ /* State in FPU is still valid, just clear LSPACT */
4348
+ env->v7m.fpccr[return_to_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
4349
+ } else {
4350
+ int i;
4351
+ uint32_t fpscr;
4352
+ bool cpacr_pass, nsacr_pass;
4353
+
4354
+ cpacr_pass = v7m_cpacr_pass(env, return_to_secure,
4355
+ return_to_priv);
4356
+ nsacr_pass = return_to_secure ||
4357
+ extract32(env->v7m.nsacr, 10, 1);
4358
+
4359
+ if (!cpacr_pass) {
4360
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
4361
+ return_to_secure);
4362
+ env->v7m.cfsr[return_to_secure] |= R_V7M_CFSR_NOCP_MASK;
4363
+ qemu_log_mask(CPU_LOG_INT,
4364
+ "...taking UsageFault on existing "
4365
+ "stackframe: CPACR.CP10 prevents unstacking "
4366
+ "FP regs\n");
4367
+ v7m_exception_taken(cpu, excret, true, false);
4368
+ return;
4369
+ } else if (!nsacr_pass) {
4370
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
4371
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_INVPC_MASK;
4372
+ qemu_log_mask(CPU_LOG_INT,
4373
+ "...taking Secure UsageFault on existing "
4374
+ "stackframe: NSACR.CP10 prevents unstacking "
4375
+ "FP regs\n");
4376
+ v7m_exception_taken(cpu, excret, true, false);
4377
+ return;
4378
+ }
4379
+
4380
+ for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
4381
+ uint32_t slo, shi;
4382
+ uint64_t dn;
4383
+ uint32_t faddr = frameptr + 0x20 + 4 * i;
4384
+
4385
+ if (i >= 16) {
4386
+ faddr += 8; /* Skip the slot for the FPSCR */
4387
+ }
4388
+
4389
+ pop_ok = pop_ok &&
4390
+ v7m_stack_read(cpu, &slo, faddr, mmu_idx) &&
4391
+ v7m_stack_read(cpu, &shi, faddr + 4, mmu_idx);
4392
+
4393
+ if (!pop_ok) {
4394
+ break;
4395
+ }
4396
+
4397
+ dn = (uint64_t)shi << 32 | slo;
4398
+ *aa32_vfp_dreg(env, i / 2) = dn;
4399
+ }
4400
+ pop_ok = pop_ok &&
4401
+ v7m_stack_read(cpu, &fpscr, frameptr + 0x60, mmu_idx);
4402
+ if (pop_ok) {
4403
+ vfp_set_fpscr(env, fpscr);
4404
+ }
4405
+ if (!pop_ok) {
4406
+ /*
4407
+ * These regs are 0 if security extension present;
4408
+ * otherwise merely UNKNOWN. We zero always.
4409
+ */
4410
+ for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
4411
+ *aa32_vfp_dreg(env, i / 2) = 0;
4412
+ }
4413
+ vfp_set_fpscr(env, 0);
4414
+ }
4415
+ }
4416
+ }
4417
+ env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
4418
+ V7M_CONTROL, FPCA, !ftype);
4419
+
4420
+ /* Commit to consuming the stack frame */
4421
+ frameptr += 0x20;
4422
+ if (!ftype) {
4423
+ frameptr += 0x48;
4424
+ if (restore_s16_s31) {
4425
+ frameptr += 0x40;
4426
+ }
4427
+ }
4428
+ /*
4429
+ * Undo stack alignment (the SPREALIGN bit indicates that the original
4430
+ * pre-exception SP was not 8-aligned and we added a padding word to
4431
+ * align it, so we undo this by ORing in the bit that increases it
4432
+ * from the current 8-aligned value to the 8-unaligned value. (Adding 4
4433
+ * would work too but a logical OR is how the pseudocode specifies it.)
4434
+ */
4435
+ if (xpsr & XPSR_SPREALIGN) {
4436
+ frameptr |= 4;
4437
+ }
4438
+ *frame_sp_p = frameptr;
4439
+ }
4440
+
4441
+ xpsr_mask = ~(XPSR_SPREALIGN | XPSR_SFPA);
4442
+ if (!arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
4443
+ xpsr_mask &= ~XPSR_GE;
4444
+ }
4445
+ /* This xpsr_write() will invalidate frame_sp_p as it may switch stack */
4446
+ xpsr_write(env, xpsr, xpsr_mask);
4447
+
4448
+ if (env->v7m.secure) {
4449
+ bool sfpa = xpsr & XPSR_SFPA;
4450
+
4451
+ env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
4452
+ V7M_CONTROL, SFPA, sfpa);
4453
+ }
4454
+
4455
+ /*
4456
+ * The restored xPSR exception field will be zero if we're
4457
+ * resuming in Thread mode. If that doesn't match what the
4458
+ * exception return excret specified then this is a UsageFault.
4459
+ * v7M requires we make this check here; v8M did it earlier.
4460
+ */
4461
+ if (return_to_handler != arm_v7m_is_handler_mode(env)) {
4462
+ /*
4463
+ * Take an INVPC UsageFault by pushing the stack again;
4464
+ * we know we're v7M so this is never a Secure UsageFault.
4465
+ */
4466
+ bool ignore_stackfaults;
4467
+
4468
+ assert(!arm_feature(env, ARM_FEATURE_V8));
4469
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
4470
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
4471
+ ignore_stackfaults = v7m_push_stack(cpu);
4472
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: "
4473
+ "failed exception return integrity check\n");
4474
+ v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
4475
+ return;
4476
+ }
4477
+
4478
+ /* Otherwise, we have a successful exception exit. */
4479
+ arm_clear_exclusive(env);
4480
+ qemu_log_mask(CPU_LOG_INT, "...successful exception return\n");
4481
+}
4482
+
4483
+static bool do_v7m_function_return(ARMCPU *cpu)
4484
+{
4485
+ /*
4486
+ * v8M security extensions magic function return.
4487
+ * We may either:
4488
+ * (1) throw an exception (longjump)
4489
+ * (2) return true if we successfully handled the function return
4490
+ * (3) return false if we failed a consistency check and have
4491
+ * pended a UsageFault that needs to be taken now
4492
+ *
4493
+ * At this point the magic return value is split between env->regs[15]
4494
+ * and env->thumb. We don't bother to reconstitute it because we don't
4495
+ * need it (all values are handled the same way).
4496
+ */
4497
+ CPUARMState *env = &cpu->env;
4498
+ uint32_t newpc, newpsr, newpsr_exc;
4499
+
4500
+ qemu_log_mask(CPU_LOG_INT, "...really v7M secure function return\n");
4501
+
4502
+ {
4503
+ bool threadmode, spsel;
4504
+ TCGMemOpIdx oi;
4505
+ ARMMMUIdx mmu_idx;
4506
+ uint32_t *frame_sp_p;
4507
+ uint32_t frameptr;
4508
+
4509
+ /* Pull the return address and IPSR from the Secure stack */
4510
+ threadmode = !arm_v7m_is_handler_mode(env);
4511
+ spsel = env->v7m.control[M_REG_S] & R_V7M_CONTROL_SPSEL_MASK;
4512
+
4513
+ frame_sp_p = get_v7m_sp_ptr(env, true, threadmode, spsel);
4514
+ frameptr = *frame_sp_p;
4515
+
4516
+ /*
4517
+ * These loads may throw an exception (for MPU faults). We want to
4518
+ * do them as secure, so work out what MMU index that is.
4519
+ */
4520
+ mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
4521
+ oi = make_memop_idx(MO_LE, arm_to_core_mmu_idx(mmu_idx));
4522
+ newpc = helper_le_ldul_mmu(env, frameptr, oi, 0);
4523
+ newpsr = helper_le_ldul_mmu(env, frameptr + 4, oi, 0);
4524
+
4525
+ /* Consistency checks on new IPSR */
4526
+ newpsr_exc = newpsr & XPSR_EXCP;
4527
+ if (!((env->v7m.exception == 0 && newpsr_exc == 0) ||
4528
+ (env->v7m.exception == 1 && newpsr_exc != 0))) {
4529
+ /* Pend the fault and tell our caller to take it */
4530
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
4531
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
4532
+ env->v7m.secure);
4533
+ qemu_log_mask(CPU_LOG_INT,
4534
+ "...taking INVPC UsageFault: "
4535
+ "IPSR consistency check failed\n");
4536
+ return false;
4537
+ }
4538
+
4539
+ *frame_sp_p = frameptr + 8;
4540
+ }
4541
+
4542
+ /* This invalidates frame_sp_p */
4543
+ switch_v7m_security_state(env, true);
4544
+ env->v7m.exception = newpsr_exc;
4545
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
4546
+ if (newpsr & XPSR_SFPA) {
4547
+ env->v7m.control[M_REG_S] |= R_V7M_CONTROL_SFPA_MASK;
4548
+ }
4549
+ xpsr_write(env, 0, XPSR_IT);
4550
+ env->thumb = newpc & 1;
4551
+ env->regs[15] = newpc & ~1;
4552
+
4553
+ qemu_log_mask(CPU_LOG_INT, "...function return successful\n");
4554
+ return true;
4555
+}
4556
+
4557
+static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
4558
+ uint32_t addr, uint16_t *insn)
4559
+{
4560
+ /*
4561
+ * Load a 16-bit portion of a v7M instruction, returning true on success,
4562
+ * or false on failure (in which case we will have pended the appropriate
4563
+ * exception).
4564
+ * We need to do the instruction fetch's MPU and SAU checks
4565
+ * like this because there is no MMU index that would allow
4566
+ * doing the load with a single function call. Instead we must
4567
+ * first check that the security attributes permit the load
4568
+ * and that they don't mismatch on the two halves of the instruction,
4569
+ * and then we do the load as a secure load (ie using the security
4570
+ * attributes of the address, not the CPU, as architecturally required).
4571
+ */
4572
+ CPUState *cs = CPU(cpu);
4573
+ CPUARMState *env = &cpu->env;
4574
+ V8M_SAttributes sattrs = {};
4575
+ MemTxAttrs attrs = {};
4576
+ ARMMMUFaultInfo fi = {};
4577
+ MemTxResult txres;
4578
+ target_ulong page_size;
4579
+ hwaddr physaddr;
4580
+ int prot;
4581
+
4582
+ v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, &sattrs);
4583
+ if (!sattrs.nsc || sattrs.ns) {
4584
+ /*
4585
+ * This must be the second half of the insn, and it straddles a
4586
+ * region boundary with the second half not being S&NSC.
4587
+ */
4588
+ env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
4589
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
4590
+ qemu_log_mask(CPU_LOG_INT,
4591
+ "...really SecureFault with SFSR.INVEP\n");
4592
+ return false;
4593
+ }
4594
+ if (get_phys_addr(env, addr, MMU_INST_FETCH, mmu_idx,
4595
+ &physaddr, &attrs, &prot, &page_size, &fi, NULL)) {
4596
+ /* the MPU lookup failed */
4597
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK;
4598
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, env->v7m.secure);
4599
+ qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n");
4600
+ return false;
4601
+ }
4602
+ *insn = address_space_lduw_le(arm_addressspace(cs, attrs), physaddr,
4603
+ attrs, &txres);
4604
+ if (txres != MEMTX_OK) {
4605
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
4606
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
4607
+ qemu_log_mask(CPU_LOG_INT, "...really BusFault with CFSR.IBUSERR\n");
4608
+ return false;
4609
+ }
4610
+ return true;
4611
+}
4612
+
4613
+static bool v7m_handle_execute_nsc(ARMCPU *cpu)
4614
+{
4615
+ /*
4616
+ * Check whether this attempt to execute code in a Secure & NS-Callable
4617
+ * memory region is for an SG instruction; if so, then emulate the
4618
+ * effect of the SG instruction and return true. Otherwise pend
4619
+ * the correct kind of exception and return false.
4620
+ */
4621
+ CPUARMState *env = &cpu->env;
4622
+ ARMMMUIdx mmu_idx;
4623
+ uint16_t insn;
4624
+
4625
+ /*
4626
+ * We should never get here unless get_phys_addr_pmsav8() caused
4627
+ * an exception for NS executing in S&NSC memory.
4628
+ */
4629
+ assert(!env->v7m.secure);
4630
+ assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
4631
+
4632
+ /* We want to do the MPU lookup as secure; work out what mmu_idx that is */
4633
+ mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
4634
+
4635
+ if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) {
4636
+ return false;
4637
+ }
4638
+
4639
+ if (!env->thumb) {
4640
+ goto gen_invep;
4641
+ }
4642
+
4643
+ if (insn != 0xe97f) {
4644
+ /*
4645
+ * Not an SG instruction first half (we choose the IMPDEF
4646
+ * early-SG-check option).
4647
+ */
4648
+ goto gen_invep;
4649
+ }
4650
+
4651
+ if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) {
4652
+ return false;
4653
+ }
4654
+
4655
+ if (insn != 0xe97f) {
4656
+ /*
4657
+ * Not an SG instruction second half (yes, both halves of the SG
4658
+ * insn have the same hex value)
4659
+ */
4660
+ goto gen_invep;
4661
+ }
4662
+
4663
+ /*
4664
+ * OK, we have confirmed that we really have an SG instruction.
4665
+ * We know we're NS in S memory so don't need to repeat those checks.
4666
+ */
4667
+ qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
4668
+ ", executing it\n", env->regs[15]);
4669
+ env->regs[14] &= ~1;
4670
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
4671
+ switch_v7m_security_state(env, true);
4672
+ xpsr_write(env, 0, XPSR_IT);
4673
+ env->regs[15] += 4;
4674
+ return true;
4675
+
4676
+gen_invep:
4677
+ env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
4678
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
4679
+ qemu_log_mask(CPU_LOG_INT,
4680
+ "...really SecureFault with SFSR.INVEP\n");
4681
+ return false;
4682
+}
4683
+
4684
+void arm_v7m_cpu_do_interrupt(CPUState *cs)
4685
+{
4686
+ ARMCPU *cpu = ARM_CPU(cs);
4687
+ CPUARMState *env = &cpu->env;
4688
+ uint32_t lr;
4689
+ bool ignore_stackfaults;
4690
+
4691
+ arm_log_exception(cs->exception_index);
4692
+
4693
+ /*
4694
+ * For exceptions we just mark as pending on the NVIC, and let that
4695
+ * handle it.
4696
+ */
4697
+ switch (cs->exception_index) {
4698
+ case EXCP_UDEF:
4699
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
4700
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNDEFINSTR_MASK;
4701
+ break;
4702
+ case EXCP_NOCP:
4703
+ {
4704
+ /*
4705
+ * NOCP might be directed to something other than the current
4706
+ * security state if this fault is because of NSACR; we indicate
4707
+ * the target security state using exception.target_el.
4708
+ */
4709
+ int target_secstate;
4710
+
4711
+ if (env->exception.target_el == 3) {
4712
+ target_secstate = M_REG_S;
4713
+ } else {
4714
+ target_secstate = env->v7m.secure;
4715
+ }
4716
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, target_secstate);
4717
+ env->v7m.cfsr[target_secstate] |= R_V7M_CFSR_NOCP_MASK;
4718
+ break;
4719
+ }
4720
+ case EXCP_INVSTATE:
4721
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
4722
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVSTATE_MASK;
4723
+ break;
4724
+ case EXCP_STKOF:
4725
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
4726
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
4727
+ break;
4728
+ case EXCP_LSERR:
4729
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
4730
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
4731
+ break;
4732
+ case EXCP_UNALIGNED:
4733
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
4734
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
4735
+ break;
4736
+ case EXCP_SWI:
4737
+ /* The PC already points to the next instruction. */
4738
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure);
4739
+ break;
4740
+ case EXCP_PREFETCH_ABORT:
4741
+ case EXCP_DATA_ABORT:
4742
+ /*
4743
+ * Note that for M profile we don't have a guest facing FSR, but
4744
+ * the env->exception.fsr will be populated by the code that
4745
+ * raises the fault, in the A profile short-descriptor format.
4746
+ */
4747
+ switch (env->exception.fsr & 0xf) {
4748
+ case M_FAKE_FSR_NSC_EXEC:
4749
+ /*
4750
+ * Exception generated when we try to execute code at an address
4751
+ * which is marked as Secure & Non-Secure Callable and the CPU
4752
+ * is in the Non-Secure state. The only instruction which can
4753
+ * be executed like this is SG (and that only if both halves of
4754
+ * the SG instruction have the same security attributes.)
4755
+ * Everything else must generate an INVEP SecureFault, so we
4756
+ * emulate the SG instruction here.
4757
+ */
4758
+ if (v7m_handle_execute_nsc(cpu)) {
4759
+ return;
4760
+ }
4761
+ break;
4762
+ case M_FAKE_FSR_SFAULT:
4763
+ /*
4764
+ * Various flavours of SecureFault for attempts to execute or
4765
+ * access data in the wrong security state.
4766
+ */
4767
+ switch (cs->exception_index) {
4768
+ case EXCP_PREFETCH_ABORT:
4769
+ if (env->v7m.secure) {
4770
+ env->v7m.sfsr |= R_V7M_SFSR_INVTRAN_MASK;
4771
+ qemu_log_mask(CPU_LOG_INT,
4772
+ "...really SecureFault with SFSR.INVTRAN\n");
4773
+ } else {
4774
+ env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
4775
+ qemu_log_mask(CPU_LOG_INT,
4776
+ "...really SecureFault with SFSR.INVEP\n");
4777
+ }
4778
+ break;
4779
+ case EXCP_DATA_ABORT:
4780
+ /* This must be an NS access to S memory */
4781
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
4782
+ qemu_log_mask(CPU_LOG_INT,
4783
+ "...really SecureFault with SFSR.AUVIOL\n");
4784
+ break;
4785
+ }
4786
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
4787
+ break;
4788
+ case 0x8: /* External Abort */
4789
+ switch (cs->exception_index) {
4790
+ case EXCP_PREFETCH_ABORT:
4791
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
4792
+ qemu_log_mask(CPU_LOG_INT, "...with CFSR.IBUSERR\n");
4793
+ break;
4794
+ case EXCP_DATA_ABORT:
4795
+ env->v7m.cfsr[M_REG_NS] |=
4796
+ (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
4797
+ env->v7m.bfar = env->exception.vaddress;
4798
+ qemu_log_mask(CPU_LOG_INT,
4799
+ "...with CFSR.PRECISERR and BFAR 0x%x\n",
4800
+ env->v7m.bfar);
4801
+ break;
4802
+ }
4803
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
4804
+ break;
4805
+ default:
4806
+ /*
4807
+ * All other FSR values are either MPU faults or "can't happen
4808
+ * for M profile" cases.
4809
+ */
4810
+ switch (cs->exception_index) {
4811
+ case EXCP_PREFETCH_ABORT:
4812
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK;
4813
+ qemu_log_mask(CPU_LOG_INT, "...with CFSR.IACCVIOL\n");
4814
+ break;
4815
+ case EXCP_DATA_ABORT:
4816
+ env->v7m.cfsr[env->v7m.secure] |=
4817
+ (R_V7M_CFSR_DACCVIOL_MASK | R_V7M_CFSR_MMARVALID_MASK);
4818
+ env->v7m.mmfar[env->v7m.secure] = env->exception.vaddress;
4819
+ qemu_log_mask(CPU_LOG_INT,
4820
+ "...with CFSR.DACCVIOL and MMFAR 0x%x\n",
4821
+ env->v7m.mmfar[env->v7m.secure]);
4822
+ break;
4823
+ }
4824
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM,
4825
+ env->v7m.secure);
4826
+ break;
4827
+ }
4828
+ break;
4829
+ case EXCP_BKPT:
4830
+ if (semihosting_enabled()) {
4831
+ int nr;
4832
+ nr = arm_lduw_code(env, env->regs[15], arm_sctlr_b(env)) & 0xff;
4833
+ if (nr == 0xab) {
4834
+ env->regs[15] += 2;
4835
+ qemu_log_mask(CPU_LOG_INT,
4836
+ "...handling as semihosting call 0x%x\n",
4837
+ env->regs[0]);
4838
+ env->regs[0] = do_arm_semihosting(env);
4839
+ return;
4840
+ }
4841
+ }
4842
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_DEBUG, false);
4843
+ break;
4844
+ case EXCP_IRQ:
4845
+ break;
4846
+ case EXCP_EXCEPTION_EXIT:
4847
+ if (env->regs[15] < EXC_RETURN_MIN_MAGIC) {
4848
+ /* Must be v8M security extension function return */
4849
+ assert(env->regs[15] >= FNC_RETURN_MIN_MAGIC);
4850
+ assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
4851
+ if (do_v7m_function_return(cpu)) {
4852
+ return;
4853
+ }
4854
+ } else {
4855
+ do_v7m_exception_exit(cpu);
4856
+ return;
4857
+ }
4858
+ break;
4859
+ case EXCP_LAZYFP:
4860
+ /*
4861
+ * We already pended the specific exception in the NVIC in the
4862
+ * v7m_preserve_fp_state() helper function.
4863
+ */
4864
+ break;
4865
+ default:
4866
+ cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
4867
+ return; /* Never happens. Keep compiler happy. */
4868
+ }
4869
+
4870
+ if (arm_feature(env, ARM_FEATURE_V8)) {
4871
+ lr = R_V7M_EXCRET_RES1_MASK |
4872
+ R_V7M_EXCRET_DCRS_MASK;
4873
+ /*
4874
+ * The S bit indicates whether we should return to Secure
4875
+ * or NonSecure (ie our current state).
4876
+ * The ES bit indicates whether we're taking this exception
4877
+ * to Secure or NonSecure (ie our target state). We set it
4878
+ * later, in v7m_exception_taken().
4879
+ * The SPSEL bit is also set in v7m_exception_taken() for v8M.
4880
+ * This corresponds to the ARM ARM pseudocode for v8M setting
4881
+ * some LR bits in PushStack() and some in ExceptionTaken();
4882
+ * the distinction matters for the tailchain cases where we
4883
+ * can take an exception without pushing the stack.
4884
+ */
4885
+ if (env->v7m.secure) {
4886
+ lr |= R_V7M_EXCRET_S_MASK;
4887
+ }
4888
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
4889
+ lr |= R_V7M_EXCRET_FTYPE_MASK;
4890
+ }
4891
+ } else {
4892
+ lr = R_V7M_EXCRET_RES1_MASK |
4893
+ R_V7M_EXCRET_S_MASK |
4894
+ R_V7M_EXCRET_DCRS_MASK |
4895
+ R_V7M_EXCRET_FTYPE_MASK |
4896
+ R_V7M_EXCRET_ES_MASK;
4897
+ if (env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK) {
4898
+ lr |= R_V7M_EXCRET_SPSEL_MASK;
4899
+ }
4900
+ }
4901
+ if (!arm_v7m_is_handler_mode(env)) {
4902
+ lr |= R_V7M_EXCRET_MODE_MASK;
4903
+ }
4904
+
4905
+ ignore_stackfaults = v7m_push_stack(cpu);
4906
+ v7m_exception_taken(cpu, lr, false, ignore_stackfaults);
4907
+}
4908
+
4909
+uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
4910
+{
4911
+ uint32_t mask;
4912
+ unsigned el = arm_current_el(env);
4913
+
4914
+ /* First handle registers which unprivileged can read */
4915
+
4916
+ switch (reg) {
4917
+ case 0 ... 7: /* xPSR sub-fields */
4918
+ mask = 0;
4919
+ if ((reg & 1) && el) {
4920
+ mask |= XPSR_EXCP; /* IPSR (unpriv. reads as zero) */
4921
+ }
4922
+ if (!(reg & 4)) {
4923
+ mask |= XPSR_NZCV | XPSR_Q; /* APSR */
4924
+ if (arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
4925
+ mask |= XPSR_GE;
4926
+ }
4927
+ }
4928
+ /* EPSR reads as zero */
4929
+ return xpsr_read(env) & mask;
4930
+ break;
4931
+ case 20: /* CONTROL */
4932
+ {
4933
+ uint32_t value = env->v7m.control[env->v7m.secure];
4934
+ if (!env->v7m.secure) {
4935
+ /* SFPA is RAZ/WI from NS; FPCA is stored in the M_REG_S bank */
4936
+ value |= env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK;
4937
+ }
4938
+ return value;
4939
+ }
4940
+ case 0x94: /* CONTROL_NS */
4941
+ /*
4942
+ * We have to handle this here because unprivileged Secure code
4943
+ * can read the NS CONTROL register.
4944
+ */
4945
+ if (!env->v7m.secure) {
4946
+ return 0;
4947
+ }
4948
+ return env->v7m.control[M_REG_NS] |
4949
+ (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK);
4950
+ }
4951
+
4952
+ if (el == 0) {
4953
+ return 0; /* unprivileged reads others as zero */
4954
+ }
4955
+
4956
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
4957
+ switch (reg) {
4958
+ case 0x88: /* MSP_NS */
4959
+ if (!env->v7m.secure) {
4960
+ return 0;
4961
+ }
4962
+ return env->v7m.other_ss_msp;
4963
+ case 0x89: /* PSP_NS */
4964
+ if (!env->v7m.secure) {
4965
+ return 0;
4966
+ }
4967
+ return env->v7m.other_ss_psp;
4968
+ case 0x8a: /* MSPLIM_NS */
4969
+ if (!env->v7m.secure) {
4970
+ return 0;
4971
+ }
4972
+ return env->v7m.msplim[M_REG_NS];
4973
+ case 0x8b: /* PSPLIM_NS */
4974
+ if (!env->v7m.secure) {
4975
+ return 0;
4976
+ }
4977
+ return env->v7m.psplim[M_REG_NS];
4978
+ case 0x90: /* PRIMASK_NS */
4979
+ if (!env->v7m.secure) {
4980
+ return 0;
4981
+ }
4982
+ return env->v7m.primask[M_REG_NS];
4983
+ case 0x91: /* BASEPRI_NS */
4984
+ if (!env->v7m.secure) {
4985
+ return 0;
4986
+ }
4987
+ return env->v7m.basepri[M_REG_NS];
4988
+ case 0x93: /* FAULTMASK_NS */
4989
+ if (!env->v7m.secure) {
4990
+ return 0;
4991
+ }
4992
+ return env->v7m.faultmask[M_REG_NS];
4993
+ case 0x98: /* SP_NS */
4994
+ {
4995
+ /*
4996
+ * This gives the non-secure SP selected based on whether we're
4997
+ * currently in handler mode or not, using the NS CONTROL.SPSEL.
4998
+ */
4999
+ bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
5000
+
5001
+ if (!env->v7m.secure) {
5002
+ return 0;
5003
+ }
5004
+ if (!arm_v7m_is_handler_mode(env) && spsel) {
5005
+ return env->v7m.other_ss_psp;
5006
+ } else {
5007
+ return env->v7m.other_ss_msp;
5008
+ }
5009
+ }
5010
+ default:
5011
+ break;
5012
+ }
5013
+ }
5014
+
5015
+ switch (reg) {
5016
+ case 8: /* MSP */
5017
+ return v7m_using_psp(env) ? env->v7m.other_sp : env->regs[13];
5018
+ case 9: /* PSP */
5019
+ return v7m_using_psp(env) ? env->regs[13] : env->v7m.other_sp;
5020
+ case 10: /* MSPLIM */
5021
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
5022
+ goto bad_reg;
5023
+ }
5024
+ return env->v7m.msplim[env->v7m.secure];
5025
+ case 11: /* PSPLIM */
5026
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
5027
+ goto bad_reg;
5028
+ }
5029
+ return env->v7m.psplim[env->v7m.secure];
5030
+ case 16: /* PRIMASK */
5031
+ return env->v7m.primask[env->v7m.secure];
5032
+ case 17: /* BASEPRI */
5033
+ case 18: /* BASEPRI_MAX */
5034
+ return env->v7m.basepri[env->v7m.secure];
5035
+ case 19: /* FAULTMASK */
5036
+ return env->v7m.faultmask[env->v7m.secure];
5037
+ default:
5038
+ bad_reg:
5039
+ qemu_log_mask(LOG_GUEST_ERROR, "Attempt to read unknown special"
5040
+ " register %d\n", reg);
5041
+ return 0;
5042
+ }
5043
+}
5044
+
5045
+void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
5046
+{
5047
+ /*
5048
+ * We're passed bits [11..0] of the instruction; extract
5049
+ * SYSm and the mask bits.
5050
+ * Invalid combinations of SYSm and mask are UNPREDICTABLE;
5051
+ * we choose to treat them as if the mask bits were valid.
5052
+ * NB that the pseudocode 'mask' variable is bits [11..10],
5053
+ * whereas ours is [11..8].
5054
+ */
5055
+ uint32_t mask = extract32(maskreg, 8, 4);
5056
+ uint32_t reg = extract32(maskreg, 0, 8);
5057
+ int cur_el = arm_current_el(env);
5058
+
5059
+ if (cur_el == 0 && reg > 7 && reg != 20) {
5060
+ /*
5061
+ * only xPSR sub-fields and CONTROL.SFPA may be written by
5062
+ * unprivileged code
5063
+ */
5064
+ return;
5065
+ }
5066
+
5067
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
5068
+ switch (reg) {
5069
+ case 0x88: /* MSP_NS */
5070
+ if (!env->v7m.secure) {
5071
+ return;
5072
+ }
5073
+ env->v7m.other_ss_msp = val;
5074
+ return;
5075
+ case 0x89: /* PSP_NS */
5076
+ if (!env->v7m.secure) {
5077
+ return;
5078
+ }
5079
+ env->v7m.other_ss_psp = val;
5080
+ return;
5081
+ case 0x8a: /* MSPLIM_NS */
5082
+ if (!env->v7m.secure) {
5083
+ return;
5084
+ }
5085
+ env->v7m.msplim[M_REG_NS] = val & ~7;
5086
+ return;
5087
+ case 0x8b: /* PSPLIM_NS */
5088
+ if (!env->v7m.secure) {
5089
+ return;
5090
+ }
5091
+ env->v7m.psplim[M_REG_NS] = val & ~7;
5092
+ return;
5093
+ case 0x90: /* PRIMASK_NS */
5094
+ if (!env->v7m.secure) {
5095
+ return;
5096
+ }
5097
+ env->v7m.primask[M_REG_NS] = val & 1;
5098
+ return;
5099
+ case 0x91: /* BASEPRI_NS */
5100
+ if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
5101
+ return;
5102
+ }
5103
+ env->v7m.basepri[M_REG_NS] = val & 0xff;
5104
+ return;
5105
+ case 0x93: /* FAULTMASK_NS */
5106
+ if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
5107
+ return;
5108
+ }
5109
+ env->v7m.faultmask[M_REG_NS] = val & 1;
5110
+ return;
5111
+ case 0x94: /* CONTROL_NS */
5112
+ if (!env->v7m.secure) {
5113
+ return;
5114
+ }
5115
+ write_v7m_control_spsel_for_secstate(env,
5116
+ val & R_V7M_CONTROL_SPSEL_MASK,
5117
+ M_REG_NS);
5118
+ if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
5119
+ env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
5120
+ env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
5121
+ }
5122
+ /*
5123
+ * SFPA is RAZ/WI from NS. FPCA is RO if NSACR.CP10 == 0,
5124
+ * RES0 if the FPU is not present, and is stored in the S bank
5125
+ */
5126
+ if (arm_feature(env, ARM_FEATURE_VFP) &&
5127
+ extract32(env->v7m.nsacr, 10, 1)) {
5128
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
5129
+ env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
5130
+ }
5131
+ return;
5132
+ case 0x98: /* SP_NS */
5133
+ {
5134
+ /*
5135
+ * This gives the non-secure SP selected based on whether we're
5136
+ * currently in handler mode or not, using the NS CONTROL.SPSEL.
5137
+ */
5138
+ bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
5139
+ bool is_psp = !arm_v7m_is_handler_mode(env) && spsel;
5140
+ uint32_t limit;
5141
+
5142
+ if (!env->v7m.secure) {
5143
+ return;
5144
+ }
5145
+
5146
+ limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false];
5147
+
5148
+ if (val < limit) {
5149
+ CPUState *cs = env_cpu(env);
5150
+
5151
+ cpu_restore_state(cs, GETPC(), true);
5152
+ raise_exception(env, EXCP_STKOF, 0, 1);
5153
+ }
5154
+
5155
+ if (is_psp) {
5156
+ env->v7m.other_ss_psp = val;
5157
+ } else {
5158
+ env->v7m.other_ss_msp = val;
5159
+ }
5160
+ return;
5161
+ }
5162
+ default:
5163
+ break;
5164
+ }
5165
+ }
5166
+
5167
+ switch (reg) {
5168
+ case 0 ... 7: /* xPSR sub-fields */
5169
+ /* only APSR is actually writable */
5170
+ if (!(reg & 4)) {
5171
+ uint32_t apsrmask = 0;
5172
+
5173
+ if (mask & 8) {
5174
+ apsrmask |= XPSR_NZCV | XPSR_Q;
5175
+ }
5176
+ if ((mask & 4) && arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
5177
+ apsrmask |= XPSR_GE;
5178
+ }
5179
+ xpsr_write(env, val, apsrmask);
5180
+ }
5181
+ break;
5182
+ case 8: /* MSP */
5183
+ if (v7m_using_psp(env)) {
5184
+ env->v7m.other_sp = val;
5185
+ } else {
5186
+ env->regs[13] = val;
5187
+ }
5188
+ break;
5189
+ case 9: /* PSP */
5190
+ if (v7m_using_psp(env)) {
5191
+ env->regs[13] = val;
5192
+ } else {
5193
+ env->v7m.other_sp = val;
5194
+ }
5195
+ break;
5196
+ case 10: /* MSPLIM */
5197
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
5198
+ goto bad_reg;
5199
+ }
5200
+ env->v7m.msplim[env->v7m.secure] = val & ~7;
5201
+ break;
5202
+ case 11: /* PSPLIM */
5203
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
5204
+ goto bad_reg;
5205
+ }
5206
+ env->v7m.psplim[env->v7m.secure] = val & ~7;
5207
+ break;
5208
+ case 16: /* PRIMASK */
5209
+ env->v7m.primask[env->v7m.secure] = val & 1;
5210
+ break;
5211
+ case 17: /* BASEPRI */
5212
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
5213
+ goto bad_reg;
5214
+ }
5215
+ env->v7m.basepri[env->v7m.secure] = val & 0xff;
5216
+ break;
5217
+ case 18: /* BASEPRI_MAX */
5218
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
5219
+ goto bad_reg;
5220
+ }
5221
+ val &= 0xff;
5222
+ if (val != 0 && (val < env->v7m.basepri[env->v7m.secure]
5223
+ || env->v7m.basepri[env->v7m.secure] == 0)) {
5224
+ env->v7m.basepri[env->v7m.secure] = val;
5225
+ }
5226
+ break;
5227
+ case 19: /* FAULTMASK */
5228
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
5229
+ goto bad_reg;
5230
+ }
5231
+ env->v7m.faultmask[env->v7m.secure] = val & 1;
5232
+ break;
5233
+ case 20: /* CONTROL */
5234
+ /*
5235
+ * Writing to the SPSEL bit only has an effect if we are in
5236
+ * thread mode; other bits can be updated by any privileged code.
5237
+ * write_v7m_control_spsel() deals with updating the SPSEL bit in
5238
+ * env->v7m.control, so we only need update the others.
5239
+ * For v7M, we must just ignore explicit writes to SPSEL in handler
5240
+ * mode; for v8M the write is permitted but will have no effect.
5241
+ * All these bits are writes-ignored from non-privileged code,
5242
+ * except for SFPA.
5243
+ */
5244
+ if (cur_el > 0 && (arm_feature(env, ARM_FEATURE_V8) ||
5245
+ !arm_v7m_is_handler_mode(env))) {
5246
+ write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
5247
+ }
5248
+ if (cur_el > 0 && arm_feature(env, ARM_FEATURE_M_MAIN)) {
5249
+ env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
5250
+ env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
5251
+ }
5252
+ if (arm_feature(env, ARM_FEATURE_VFP)) {
5253
+ /*
5254
+ * SFPA is RAZ/WI from NS or if no FPU.
5255
+ * FPCA is RO if NSACR.CP10 == 0, RES0 if the FPU is not present.
5256
+ * Both are stored in the S bank.
5257
+ */
5258
+ if (env->v7m.secure) {
5259
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
5260
+ env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_SFPA_MASK;
5261
+ }
5262
+ if (cur_el > 0 &&
5263
+ (env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_SECURITY) ||
5264
+ extract32(env->v7m.nsacr, 10, 1))) {
5265
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
5266
+ env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
5267
+ }
5268
+ }
5269
+ break;
5270
+ default:
5271
+ bad_reg:
5272
+ qemu_log_mask(LOG_GUEST_ERROR, "Attempt to write unknown special"
5273
+ " register %d\n", reg);
5274
+ return;
5275
+ }
5276
+}
5277
+
5278
+uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
5279
+{
5280
+ /* Implement the TT instruction. op is bits [7:6] of the insn. */
5281
+ bool forceunpriv = op & 1;
5282
+ bool alt = op & 2;
5283
+ V8M_SAttributes sattrs = {};
5284
+ uint32_t tt_resp;
5285
+ bool r, rw, nsr, nsrw, mrvalid;
5286
+ int prot;
5287
+ ARMMMUFaultInfo fi = {};
5288
+ MemTxAttrs attrs = {};
5289
+ hwaddr phys_addr;
5290
+ ARMMMUIdx mmu_idx;
5291
+ uint32_t mregion;
5292
+ bool targetpriv;
5293
+ bool targetsec = env->v7m.secure;
5294
+ bool is_subpage;
5295
+
5296
+ /*
5297
+ * Work out what the security state and privilege level we're
5298
+ * interested in is...
5299
+ */
5300
+ if (alt) {
5301
+ targetsec = !targetsec;
5302
+ }
5303
+
5304
+ if (forceunpriv) {
5305
+ targetpriv = false;
5306
+ } else {
5307
+ targetpriv = arm_v7m_is_handler_mode(env) ||
5308
+ !(env->v7m.control[targetsec] & R_V7M_CONTROL_NPRIV_MASK);
5309
+ }
5310
+
5311
+ /* ...and then figure out which MMU index this is */
5312
+ mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targetsec, targetpriv);
5313
+
5314
+ /*
5315
+ * We know that the MPU and SAU don't care about the access type
5316
+ * for our purposes beyond that we don't want to claim to be
5317
+ * an insn fetch, so we arbitrarily call this a read.
5318
+ */
5319
+
5320
+ /*
5321
+ * MPU region info only available for privileged or if
5322
+ * inspecting the other MPU state.
5323
+ */
5324
+ if (arm_current_el(env) != 0 || alt) {
5325
+ /* We can ignore the return value as prot is always set */
5326
+ pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx,
5327
+ &phys_addr, &attrs, &prot, &is_subpage,
5328
+ &fi, &mregion);
5329
+ if (mregion == -1) {
5330
+ mrvalid = false;
5331
+ mregion = 0;
5332
+ } else {
5333
+ mrvalid = true;
5334
+ }
5335
+ r = prot & PAGE_READ;
5336
+ rw = prot & PAGE_WRITE;
5337
+ } else {
5338
+ r = false;
5339
+ rw = false;
5340
+ mrvalid = false;
5341
+ mregion = 0;
5342
+ }
5343
+
5344
+ if (env->v7m.secure) {
5345
+ v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs);
5346
+ nsr = sattrs.ns && r;
5347
+ nsrw = sattrs.ns && rw;
5348
+ } else {
5349
+ sattrs.ns = true;
5350
+ nsr = false;
5351
+ nsrw = false;
5352
+ }
5353
+
5354
+ tt_resp = (sattrs.iregion << 24) |
5355
+ (sattrs.irvalid << 23) |
5356
+ ((!sattrs.ns) << 22) |
5357
+ (nsrw << 21) |
5358
+ (nsr << 20) |
5359
+ (rw << 19) |
5360
+ (r << 18) |
5361
+ (sattrs.srvalid << 17) |
5362
+ (mrvalid << 16) |
5363
+ (sattrs.sregion << 8) |
5364
+ mregion;
5365
+
5366
+ return tt_resp;
5367
+}
5368
+
5369
+#endif /* !CONFIG_USER_ONLY */
5370
+
5371
+ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
5372
+ bool secstate, bool priv, bool negpri)
5373
+{
5374
+ ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
5375
+
5376
+ if (priv) {
5377
+ mmu_idx |= ARM_MMU_IDX_M_PRIV;
5378
+ }
5379
+
5380
+ if (negpri) {
5381
+ mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
5382
+ }
5383
+
5384
+ if (secstate) {
5385
+ mmu_idx |= ARM_MMU_IDX_M_S;
5386
+ }
5387
+
5388
+ return mmu_idx;
5389
+}
5390
+
5391
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
5392
+ bool secstate, bool priv)
5393
+{
5394
+ bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
5395
+
5396
+ return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
5397
+}
5398
+
5399
+/* Return the MMU index for a v7M CPU in the specified security state */
5400
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
5401
+{
5402
+ bool priv = arm_current_el(env) != 0;
5403
+
5404
+ return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
5405
+}
5406
--
2.25.1
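
Since several of the hunks above turn on the details of the EXC_RETURN magic
value, a small self-contained decoder may help when reading them. This is
purely illustrative (bit assignments per the v8-M EXC_RETURN layout, not code
taken from the patch):

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative decode of a v8-M EXC_RETURN value (not QEMU code). */
    static void decode_excret(uint32_t excret)
    {
        bool es    = excret & (1u << 0); /* security state the exception targeted */
        bool spsel = excret & (1u << 2); /* return stack: process (PSP) if set */
        bool mode  = excret & (1u << 3); /* return to Thread mode if set */
        bool ftype = excret & (1u << 4); /* no FP state on the frame if set */
        bool dcrs  = excret & (1u << 5); /* default callee-saved register stacking */
        bool s     = excret & (1u << 6); /* security state being returned to */

        printf("0x%08" PRIx32 ": %s mode, %s stack, FP frame %s, ES=%d DCRS=%d S=%d\n",
               excret, mode ? "Thread" : "Handler", spsel ? "process" : "main",
               ftype ? "absent" : "present", es, dcrs, s);
    }

    int main(void)
    {
        decode_excret(0xfffffffd); /* Thread mode, process stack, no FP frame */
        return 0;
    }
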
From: Mihai Carabas <mihai.carabas@oracle.com>

Use the base_memmap to build the SMBIOS 19 table which provides the address
mapping for a Physical Memory Array (from spec [1] chapter 7.20).

This was present on i386 from commit c97294ec1b9e36887e119589d456557d72ab37b5
("SMBIOS: Build aggregate smbios tables and entry point").

[1] https://www.dmtf.org/sites/default/files/standards/documents/DSP0134_3.5.0.pdf

The absence of this table is a breach of the specs and is
detected by the FirmwareTestSuite (FWTS), but it doesn't
cause any known problems for guest OSes.

Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
Message-id: 1668789029-5432-1-git-send-email-mihai.carabas@oracle.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/virt.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void *machvirt_dtb(const struct arm_boot_info *binfo, int *fdt_size)
static void virt_build_smbios(VirtMachineState *vms)
{
MachineClass *mc = MACHINE_GET_CLASS(vms);
+ MachineState *ms = MACHINE(vms);
VirtMachineClass *vmc = VIRT_MACHINE_GET_CLASS(vms);
uint8_t *smbios_tables, *smbios_anchor;
size_t smbios_tables_len, smbios_anchor_len;
+ struct smbios_phys_mem_area mem_array;
const char *product = "QEMU Virtual Machine";

if (kvm_enabled()) {
@@ -XXX,XX +XXX,XX @@ static void virt_build_smbios(VirtMachineState *vms)
vmc->smbios_old_sys_ver ? "1.0" : mc->name, false,
true, SMBIOS_ENTRY_POINT_TYPE_64);

- smbios_get_tables(MACHINE(vms), NULL, 0,
+ /* build the array of physical mem area from base_memmap */
+ mem_array.address = vms->memmap[VIRT_MEM].base;
+ mem_array.length = ms->ram_size;
+
+ smbios_get_tables(ms, &mem_array, 1,
&smbios_tables, &smbios_tables_len,
&smbios_anchor, &smbios_anchor_len,
&error_fatal);
--
2.25.1
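
For anyone wanting to sanity-check the result from inside a guest: the new
entry appears as an SMBIOS Type 19 (Memory Array Mapped Address) record whose
starting address should match the base of VIRT_MEM and whose size should match
the RAM given to the machine. On a Linux guest with dmidecode installed,
running 'dmidecode -t 19' is a quick way to list it.
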
From: Timofey Kutergin <tkutergin@gmail.com>
1
2
3
The Cortex-A55 is one of the newer armv8.2+ CPUs; in particular
4
it supports the Privileged Access Never (PAN) feature. Add
5
a model of this CPU, so you can use a CPU type on the virt
6
board that models a specific real hardware CPU, rather than
7
having to use the QEMU-specific "max" CPU type.
8
9
Signed-off-by: Timofey Kutergin <tkutergin@gmail.com>
10
Message-id: 20221121150819.2782817-1-tkutergin@gmail.com
11
[PMM: tweaked commit message]
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
docs/system/arm/virt.rst | 1 +
16
hw/arm/virt.c | 1 +
17
target/arm/cpu64.c | 69 ++++++++++++++++++++++++++++++++++++++++
18
3 files changed, 71 insertions(+)
19
20
diff --git a/docs/system/arm/virt.rst b/docs/system/arm/virt.rst
21
index XXXXXXX..XXXXXXX 100644
22
--- a/docs/system/arm/virt.rst
23
+++ b/docs/system/arm/virt.rst
24
@@ -XXX,XX +XXX,XX @@ Supported guest CPU types:
25
- ``cortex-a15`` (32-bit; the default)
26
- ``cortex-a35`` (64-bit)
27
- ``cortex-a53`` (64-bit)
28
+- ``cortex-a55`` (64-bit)
29
- ``cortex-a57`` (64-bit)
30
- ``cortex-a72`` (64-bit)
31
- ``cortex-a76`` (64-bit)
32
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/hw/arm/virt.c
35
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static const char *valid_cpus[] = {
ARM_CPU_TYPE_NAME("cortex-a15"),
ARM_CPU_TYPE_NAME("cortex-a35"),
ARM_CPU_TYPE_NAME("cortex-a53"),
+ ARM_CPU_TYPE_NAME("cortex-a55"),
ARM_CPU_TYPE_NAME("cortex-a57"),
ARM_CPU_TYPE_NAME("cortex-a72"),
ARM_CPU_TYPE_NAME("cortex-a76"),
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
define_cortex_a72_a57_a53_cp_reginfo(cpu);
}

+static void aarch64_a55_initfn(Object *obj)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+
+ cpu->dtb_compatible = "arm,cortex-a55";
+ set_feature(&cpu->env, ARM_FEATURE_V8);
+ set_feature(&cpu->env, ARM_FEATURE_NEON);
+ set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+ set_feature(&cpu->env, ARM_FEATURE_AARCH64);
+ set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+ set_feature(&cpu->env, ARM_FEATURE_EL2);
+ set_feature(&cpu->env, ARM_FEATURE_EL3);
+ set_feature(&cpu->env, ARM_FEATURE_PMU);
+
+ /* Ordered by B2.4 AArch64 registers by functional group */
+ cpu->clidr = 0x82000023;
+ cpu->ctr = 0x84448004; /* L1Ip = VIPT */
+ cpu->dcz_blocksize = 4; /* 64 bytes */
+ cpu->isar.id_aa64dfr0 = 0x0000000010305408ull;
+ cpu->isar.id_aa64isar0 = 0x0000100010211120ull;
+ cpu->isar.id_aa64isar1 = 0x0000000000100001ull;
+ cpu->isar.id_aa64mmfr0 = 0x0000000000101122ull;
+ cpu->isar.id_aa64mmfr1 = 0x0000000010212122ull;
+ cpu->isar.id_aa64mmfr2 = 0x0000000000001011ull;
+ cpu->isar.id_aa64pfr0 = 0x0000000010112222ull;
+ cpu->isar.id_aa64pfr1 = 0x0000000000000010ull;
+ cpu->id_afr0 = 0x00000000;
+ cpu->isar.id_dfr0 = 0x04010088;
+ cpu->isar.id_isar0 = 0x02101110;
+ cpu->isar.id_isar1 = 0x13112111;
+ cpu->isar.id_isar2 = 0x21232042;
+ cpu->isar.id_isar3 = 0x01112131;
+ cpu->isar.id_isar4 = 0x00011142;
+ cpu->isar.id_isar5 = 0x01011121;
+ cpu->isar.id_isar6 = 0x00000010;
+ cpu->isar.id_mmfr0 = 0x10201105;
+ cpu->isar.id_mmfr1 = 0x40000000;
+ cpu->isar.id_mmfr2 = 0x01260000;
+ cpu->isar.id_mmfr3 = 0x02122211;
+ cpu->isar.id_mmfr4 = 0x00021110;
+ cpu->isar.id_pfr0 = 0x10010131;
+ cpu->isar.id_pfr1 = 0x00011011;
+ cpu->isar.id_pfr2 = 0x00000011;
+ cpu->midr = 0x412FD050; /* r2p0 */
+ cpu->revidr = 0;
+
+ /* From B2.23 CCSIDR_EL1 */
+ cpu->ccsidr[0] = 0x700fe01a; /* 32KB L1 dcache */
+ cpu->ccsidr[1] = 0x200fe01a; /* 32KB L1 icache */
+ cpu->ccsidr[2] = 0x703fe07a; /* 512KB L2 cache */
+
+ /* From B2.96 SCTLR_EL3 */
+ cpu->reset_sctlr = 0x30c50838;
+
+ /* From B4.45 ICH_VTR_EL2 */
+ cpu->gic_num_lrs = 4;
+ cpu->gic_vpribits = 5;
+ cpu->gic_vprebits = 5;
+ cpu->gic_pribits = 5;
+
+ cpu->isar.mvfr0 = 0x10110222;
+ cpu->isar.mvfr1 = 0x13211111;
+ cpu->isar.mvfr2 = 0x00000043;
+
+ /* From D5.4 AArch64 PMU register summary */
+ cpu->isar.reset_pmcr_el0 = 0x410b3000;
+}
+
static void aarch64_a72_initfn(Object *obj)
{
ARMCPU *cpu = ARM_CPU(obj);
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo aarch64_cpus[] = {
{ .name = "cortex-a35", .initfn = aarch64_a35_initfn },
{ .name = "cortex-a57", .initfn = aarch64_a57_initfn },
{ .name = "cortex-a53", .initfn = aarch64_a53_initfn },
+ { .name = "cortex-a55", .initfn = aarch64_a55_initfn },
{ .name = "cortex-a72", .initfn = aarch64_a72_initfn },
{ .name = "cortex-a76", .initfn = aarch64_a76_initfn },
{ .name = "a64fx", .initfn = aarch64_a64fx_initfn },
--
2.25.1
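
[Usage note, for illustration only: with this patch applied the new model can be selected on the virt board in the usual way, e.g.

  qemu-system-aarch64 -M virt -cpu cortex-a55 ...
]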
From: Luke Starrett <lukes@xsightlabs.com>

The ARM GICv3 TRM describes that the ITLinesNumber field of the GICD_TYPER
register:

"indicates the maximum SPI INTID that the GIC implementation supports"

As SPI #0 is absolute IRQ #32, the max SPI INTID should have accounted
for the internal 16x SGIs and 16x PPIs. However, the original GICv3
model subtracted off the SGIs/PPIs. Cosmetically this can be seen at OS
boot (Linux) reporting 32 fewer SPIs than should be there, i.e.:

[ 0.000000] GICv3: 224 SPIs implemented

Yet in hw/arm/virt.c the machine is configured for 256 SPIs. The Arm
virt machine likely doesn't have a problem with this because the upper
32 IRQs don't actually have anything meaningful wired. But this does
become a functional issue for a custom use case which wants to make use
of these IRQs. Additionally, boot code (i.e. TF-A) will only initialise up
to the number of interrupts (in blocks of 32) that it believes to actually be there.

Signed-off-by: Luke Starrett <lukes@xsightlabs.com>
Message-id: AM9P193MB168473D99B761E204E032095D40D9@AM9P193MB1684.EURP193.PROD.OUTLOOK.COM
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/intc/arm_gicv3_dist.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/intc/arm_gicv3_dist.c b/hw/intc/arm_gicv3_dist.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_dist.c
+++ b/hw/intc/arm_gicv3_dist.c
@@ -XXX,XX +XXX,XX @@ static bool gicd_readl(GICv3State *s, hwaddr offset,
* MBIS == 0 (message-based SPIs not supported)
* SecurityExtn == 1 if security extns supported
* CPUNumber == 0 since for us ARE is always 1
- * ITLinesNumber == (num external irqs / 32) - 1
+ * ITLinesNumber == (((max SPI IntID + 1) / 32) - 1)
*/
- int itlinesnumber = ((s->num_irq - GIC_INTERNAL) / 32) - 1;
+ int itlinesnumber = (s->num_irq / 32) - 1;
/*
* SecurityExtn must be RAZ if GICD_CTLR.DS == 1, and
* "security extensions not supported" always implies DS == 1,
--
2.25.1
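
[Worked example, for illustration only, using the virt board case described above: num_irq is 256 external SPIs plus the 32 internal SGIs/PPIs, i.e. 288.

  /* illustration: before the fix, one block of 32 interrupts was lost */
  int old_itlinesnumber = ((288 - 32) / 32) - 1;  /* = 7 */
  int new_itlinesnumber = (288 / 32) - 1;         /* = 8 */

Linux derives the SPI count as 32 * (ITLinesNumber + 1) - 32, so the old value gives the "224 SPIs implemented" message quoted above and the new value gives the expected 256.]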
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
FEAT_EVT adds five new bits to the HCR_EL2 register: TTLBIS, TTLBOS,
2
TICAB, TOCU and TID4. These allow the guest to enable trapping of
3
various EL1 instructions to EL2. In this commit, add the necessary
4
code to allow the guest to set these bits if the feature is present;
5
because the bit is always zero when the feature isn't present we
6
won't need to use explicit feature checks in the "trap on condition"
7
tests in the following commits.
2
8
3
Per Peter Maydell:
9
Note that although full implementation of the feature (mandatory from
10
Armv8.5 onward) requires all five trap bits, the ID registers permit
11
a value indicating that only TICAB, TOCU and TID4 are implemented,
12
which might be the case for CPUs between Armv8.2 and Armv8.5.
4
13
5
Semihosting hooks either SVC or HLT instructions, and inside KVM
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
both of those go to EL1, ie to the guest, and can't be trapped to
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
KVM.
16
---
17
target/arm/cpu.h | 30 ++++++++++++++++++++++++++++++
18
target/arm/helper.c | 6 ++++++
19
2 files changed, 36 insertions(+)
8
20
9
Let check_for_semihosting() return False when not running on TCG.
10
11
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
12
Message-id: 20190701194942.10092-3-philmd@redhat.com
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
16
target/arm/Makefile.objs | 2 +-
17
target/arm/cpu.h | 7 +++++++
18
target/arm/helper.c | 8 +++++++-
19
3 files changed, 15 insertions(+), 2 deletions(-)
20
21
diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
22
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/Makefile.objs
24
+++ b/target/arm/Makefile.objs
25
@@ -XXX,XX +XXX,XX @@
26
-obj-y += arm-semi.o
27
+obj-$(CONFIG_TCG) += arm-semi.o
28
obj-y += helper.o vfp_helper.o
29
obj-y += cpu.o gdbstub.o
30
obj-$(TARGET_AARCH64) += cpu64.o gdbstub64.o
31
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
32
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/cpu.h
23
--- a/target/arm/cpu.h
34
+++ b/target/arm/cpu.h
24
+++ b/target/arm/cpu.h
35
@@ -XXX,XX +XXX,XX @@ static inline void aarch64_sve_change_el(CPUARMState *env, int o,
25
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_tts2uxn(const ARMISARegisters *id)
36
{ }
26
return FIELD_EX32(id->id_mmfr4, ID_MMFR4, XNX) != 0;
37
#endif
27
}
38
28
39
+#if !defined(CONFIG_TCG)
29
+static inline bool isar_feature_aa32_half_evt(const ARMISARegisters *id)
40
+static inline target_ulong do_arm_semihosting(CPUARMState *env)
41
+{
30
+{
42
+ g_assert_not_reached();
31
+ return FIELD_EX32(id->id_mmfr4, ID_MMFR4, EVT) >= 1;
43
+}
32
+}
44
+#else
33
+
45
target_ulong do_arm_semihosting(CPUARMState *env);
34
+static inline bool isar_feature_aa32_evt(const ARMISARegisters *id)
46
+#endif
35
+{
47
void aarch64_sync_32_to_64(CPUARMState *env);
36
+ return FIELD_EX32(id->id_mmfr4, ID_MMFR4, EVT) >= 2;
48
void aarch64_sync_64_to_32(CPUARMState *env);
37
+}
49
38
+
39
static inline bool isar_feature_aa32_dit(const ARMISARegisters *id)
40
{
41
return FIELD_EX32(id->id_pfr0, ID_PFR0, DIT) != 0;
42
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_ids(const ARMISARegisters *id)
43
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, IDS) != 0;
44
}
45
46
+static inline bool isar_feature_aa64_half_evt(const ARMISARegisters *id)
47
+{
48
+ return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, EVT) >= 1;
49
+}
50
+
51
+static inline bool isar_feature_aa64_evt(const ARMISARegisters *id)
52
+{
53
+ return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, EVT) >= 2;
54
+}
55
+
56
static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
57
{
58
return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
59
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_any_ras(const ARMISARegisters *id)
60
return isar_feature_aa64_ras(id) || isar_feature_aa32_ras(id);
61
}
62
63
+static inline bool isar_feature_any_half_evt(const ARMISARegisters *id)
64
+{
65
+ return isar_feature_aa64_half_evt(id) || isar_feature_aa32_half_evt(id);
66
+}
67
+
68
+static inline bool isar_feature_any_evt(const ARMISARegisters *id)
69
+{
70
+ return isar_feature_aa64_evt(id) || isar_feature_aa32_evt(id);
71
+}
72
+
73
/*
74
* Forward to the above feature tests given an ARMCPU pointer.
75
*/
50
diff --git a/target/arm/helper.c b/target/arm/helper.c
76
diff --git a/target/arm/helper.c b/target/arm/helper.c
51
index XXXXXXX..XXXXXXX 100644
77
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/helper.c
78
--- a/target/arm/helper.c
53
+++ b/target/arm/helper.c
79
+++ b/target/arm/helper.c
54
@@ -XXX,XX +XXX,XX @@
80
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
55
#include "qemu/qemu-print.h"
81
}
56
#include "exec/exec-all.h"
57
#include "exec/cpu_ldst.h"
58
-#include "arm_ldst.h"
59
#include <zlib.h> /* For crc32 */
60
#include "hw/semihosting/semihost.h"
61
#include "sysemu/cpus.h"
62
@@ -XXX,XX +XXX,XX @@
63
#include "qapi/qapi-commands-machine-target.h"
64
#include "qapi/error.h"
65
#include "qemu/guest-random.h"
66
+#ifdef CONFIG_TCG
67
+#include "arm_ldst.h"
68
+#endif
69
70
#define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */
71
72
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
73
74
static inline bool check_for_semihosting(CPUState *cs)
75
{
76
+#ifdef CONFIG_TCG
77
/* Check whether this exception is a semihosting call; if so
78
* then handle it and return true; otherwise return false.
79
*/
80
@@ -XXX,XX +XXX,XX @@ static inline bool check_for_semihosting(CPUState *cs)
81
env->regs[0] = do_arm_semihosting(env);
82
return true;
83
}
82
}
84
+#else
83
85
+ return false;
84
+ if (cpu_isar_feature(any_evt, cpu)) {
86
+#endif
85
+ valid_mask |= HCR_TTLBIS | HCR_TTLBOS | HCR_TICAB | HCR_TOCU | HCR_TID4;
87
}
86
+ } else if (cpu_isar_feature(any_half_evt, cpu)) {
88
87
+ valid_mask |= HCR_TICAB | HCR_TOCU | HCR_TID4;
89
/* Handle a CPU exception for A and R profile CPUs.
88
+ }
89
+
90
/* Clear RES0 bits. */
91
value &= valid_mask;
92
90
--
93
--
91
2.20.1
94
2.25.1
92
93
For FEAT_EVT, the HCR_EL2.TTLBIS bit allows trapping on EL1 use of
TLB maintenance instructions that operate on the inner shareable
domain:

AArch64:
TLBI VMALLE1IS, TLBI VAE1IS, TLBI ASIDE1IS, TLBI VAAE1IS,
TLBI VALE1IS, TLBI VAALE1IS, TLBI RVAE1IS, TLBI RVAAE1IS,
TLBI RVALE1IS, and TLBI RVAALE1IS.

AArch32:
TLBIALLIS, TLBIMVAIS, TLBIASIDIS, TLBIMVAAIS, TLBIMVALIS,
and TLBIMVAALIS.

Add the trapping support.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.c | 43 +++++++++++++++++++++++----------------
1 file changed, 27 insertions(+), 16 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
23
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/helper.c
25
+++ b/target/arm/helper.c
26
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
27
return CP_ACCESS_OK;
28
}
29
30
+/* Check for traps from EL1 due to HCR_EL2.TTLB or TTLBIS. */
31
+static CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
32
+ bool isread)
33
+{
34
+ if (arm_current_el(env) == 1 &&
35
+ (arm_hcr_el2_eff(env) & (HCR_TTLB | HCR_TTLBIS))) {
36
+ return CP_ACCESS_TRAP_EL2;
37
+ }
38
+ return CP_ACCESS_OK;
39
+}
40
+
41
static void dacr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
42
{
43
ARMCPU *cpu = env_archcpu(env);
44
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
45
static const ARMCPRegInfo v7mp_cp_reginfo[] = {
46
/* 32 bit TLB invalidates, Inner Shareable */
47
{ .name = "TLBIALLIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
48
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
49
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
50
.writefn = tlbiall_is_write },
51
{ .name = "TLBIMVAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1,
52
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
53
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
54
.writefn = tlbimva_is_write },
55
{ .name = "TLBIASIDIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2,
56
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
57
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
58
.writefn = tlbiasid_is_write },
59
{ .name = "TLBIMVAAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
60
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
61
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
62
.writefn = tlbimvaa_is_write },
63
};
64
65
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
66
/* TLBI operations */
67
{ .name = "TLBI_VMALLE1IS", .state = ARM_CP_STATE_AA64,
68
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
69
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
70
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
71
.writefn = tlbi_aa64_vmalle1is_write },
72
{ .name = "TLBI_VAE1IS", .state = ARM_CP_STATE_AA64,
73
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1,
74
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
75
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
76
.writefn = tlbi_aa64_vae1is_write },
77
{ .name = "TLBI_ASIDE1IS", .state = ARM_CP_STATE_AA64,
78
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2,
79
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
80
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
81
.writefn = tlbi_aa64_vmalle1is_write },
82
{ .name = "TLBI_VAAE1IS", .state = ARM_CP_STATE_AA64,
83
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
84
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
85
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
86
.writefn = tlbi_aa64_vae1is_write },
87
{ .name = "TLBI_VALE1IS", .state = ARM_CP_STATE_AA64,
88
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5,
89
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
90
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
91
.writefn = tlbi_aa64_vae1is_write },
92
{ .name = "TLBI_VAALE1IS", .state = ARM_CP_STATE_AA64,
93
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7,
94
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
95
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
96
.writefn = tlbi_aa64_vae1is_write },
97
{ .name = "TLBI_VMALLE1", .state = ARM_CP_STATE_AA64,
98
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0,
99
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
100
#endif
101
/* TLB invalidate last level of translation table walk */
102
{ .name = "TLBIMVALIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5,
103
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
104
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
105
.writefn = tlbimva_is_write },
106
{ .name = "TLBIMVAALIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7,
107
- .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
108
+ .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlbis,
109
.writefn = tlbimvaa_is_write },
110
{ .name = "TLBIMVAL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5,
111
.type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
112
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pauth_reginfo[] = {
113
static const ARMCPRegInfo tlbirange_reginfo[] = {
114
{ .name = "TLBI_RVAE1IS", .state = ARM_CP_STATE_AA64,
115
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 1,
116
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
117
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
118
.writefn = tlbi_aa64_rvae1is_write },
119
{ .name = "TLBI_RVAAE1IS", .state = ARM_CP_STATE_AA64,
120
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 3,
121
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
122
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
123
.writefn = tlbi_aa64_rvae1is_write },
124
{ .name = "TLBI_RVALE1IS", .state = ARM_CP_STATE_AA64,
125
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 5,
126
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
127
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
128
.writefn = tlbi_aa64_rvae1is_write },
129
{ .name = "TLBI_RVAALE1IS", .state = ARM_CP_STATE_AA64,
130
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 7,
131
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
132
+ .access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
133
.writefn = tlbi_aa64_rvae1is_write },
134
{ .name = "TLBI_RVAE1OS", .state = ARM_CP_STATE_AA64,
135
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
136
--
137
2.25.1
For FEAT_EVT, the HCR_EL2.TTLBOS bit allows trapping on EL1
use of TLB maintenance instructions that operate on the
outer shareable domain:

TLBI VMALLE1OS, TLBI VAE1OS, TLBI ASIDE1OS, TLBI VAAE1OS,
TLBI VALE1OS, TLBI VAALE1OS, TLBI RVAE1OS, TLBI RVAAE1OS,
TLBI RVALE1OS, and TLBI RVAALE1OS.

(There are no AArch32 outer-shareable TLB maintenance ops.)

Implement the trapping.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.c | 33 +++++++++++++++++++++++----------
1 file changed, 23 insertions(+), 10 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.c
22
+++ b/target/arm/helper.c
23
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_ttlbis(CPUARMState *env, const ARMCPRegInfo *ri,
24
return CP_ACCESS_OK;
25
}
26
27
+#ifdef TARGET_AARCH64
28
+/* Check for traps from EL1 due to HCR_EL2.TTLB or TTLBOS. */
29
+static CPAccessResult access_ttlbos(CPUARMState *env, const ARMCPRegInfo *ri,
30
+ bool isread)
31
+{
32
+ if (arm_current_el(env) == 1 &&
33
+ (arm_hcr_el2_eff(env) & (HCR_TTLB | HCR_TTLBOS))) {
34
+ return CP_ACCESS_TRAP_EL2;
35
+ }
36
+ return CP_ACCESS_OK;
37
+}
38
+#endif
39
+
40
static void dacr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
41
{
42
ARMCPU *cpu = env_archcpu(env);
43
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
44
.writefn = tlbi_aa64_rvae1is_write },
45
{ .name = "TLBI_RVAE1OS", .state = ARM_CP_STATE_AA64,
46
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
47
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
48
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
49
.writefn = tlbi_aa64_rvae1is_write },
50
{ .name = "TLBI_RVAAE1OS", .state = ARM_CP_STATE_AA64,
51
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 3,
52
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
53
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
54
.writefn = tlbi_aa64_rvae1is_write },
55
{ .name = "TLBI_RVALE1OS", .state = ARM_CP_STATE_AA64,
56
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 5,
57
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
58
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
59
.writefn = tlbi_aa64_rvae1is_write },
60
{ .name = "TLBI_RVAALE1OS", .state = ARM_CP_STATE_AA64,
61
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 7,
62
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
63
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
64
.writefn = tlbi_aa64_rvae1is_write },
65
{ .name = "TLBI_RVAE1", .state = ARM_CP_STATE_AA64,
66
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
67
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
68
static const ARMCPRegInfo tlbios_reginfo[] = {
69
{ .name = "TLBI_VMALLE1OS", .state = ARM_CP_STATE_AA64,
70
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 0,
71
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
72
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
73
.writefn = tlbi_aa64_vmalle1is_write },
74
{ .name = "TLBI_VAE1OS", .state = ARM_CP_STATE_AA64,
75
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 1,
76
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
77
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
78
.writefn = tlbi_aa64_vae1is_write },
79
{ .name = "TLBI_ASIDE1OS", .state = ARM_CP_STATE_AA64,
80
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 2,
81
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
82
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
83
.writefn = tlbi_aa64_vmalle1is_write },
84
{ .name = "TLBI_VAAE1OS", .state = ARM_CP_STATE_AA64,
85
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 3,
86
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
87
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
88
.writefn = tlbi_aa64_vae1is_write },
89
{ .name = "TLBI_VALE1OS", .state = ARM_CP_STATE_AA64,
90
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 5,
91
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
92
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
93
.writefn = tlbi_aa64_vae1is_write },
94
{ .name = "TLBI_VAALE1OS", .state = ARM_CP_STATE_AA64,
95
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 7,
96
- .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
97
+ .access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
98
.writefn = tlbi_aa64_vae1is_write },
99
{ .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64,
100
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0,
101
--
102
2.25.1
For FEAT_EVT, the HCR_EL2.TICAB bit allows trapping of the ICIALLUIS
and IC IALLUIS cache maintenance instructions.

The HCR_EL2.TOCU bit traps all the other cache maintenance
instructions that operate to the point of unification:
AArch64 IC IVAU, IC IALLU, DC CVAU
AArch32 ICIMVAU, ICIALLU, DCCMVAU

The two trap bits between them cover all of the cache maintenance
instructions which must also check the HCR_TPU flag. Turn the old
aa64_cacheop_pou_access() function into a helper function which takes
the set of HCR_EL2 flags to check as an argument, and call it from
new access_ticab() and access_tocu() functions as appropriate for
each cache op.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.c | 36 +++++++++++++++++++++++-------------
1 file changed, 23 insertions(+), 13 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
23
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/helper.c
25
+++ b/target/arm/helper.c
26
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_poc_access(CPUARMState *env,
27
return CP_ACCESS_OK;
28
}
29
30
-static CPAccessResult aa64_cacheop_pou_access(CPUARMState *env,
31
- const ARMCPRegInfo *ri,
32
- bool isread)
33
+static CPAccessResult do_cacheop_pou_access(CPUARMState *env, uint64_t hcrflags)
34
{
35
/* Cache invalidate/clean to Point of Unification... */
36
switch (arm_current_el(env)) {
37
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_pou_access(CPUARMState *env,
38
}
39
/* fall through */
40
case 1:
41
- /* ... EL1 must trap to EL2 if HCR_EL2.TPU is set. */
42
- if (arm_hcr_el2_eff(env) & HCR_TPU) {
43
+ /* ... EL1 must trap to EL2 if relevant HCR_EL2 flags are set. */
44
+ if (arm_hcr_el2_eff(env) & hcrflags) {
45
return CP_ACCESS_TRAP_EL2;
46
}
47
break;
48
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_pou_access(CPUARMState *env,
49
return CP_ACCESS_OK;
50
}
51
52
+static CPAccessResult access_ticab(CPUARMState *env, const ARMCPRegInfo *ri,
53
+ bool isread)
54
+{
55
+ return do_cacheop_pou_access(env, HCR_TICAB | HCR_TPU);
56
+}
57
+
58
+static CPAccessResult access_tocu(CPUARMState *env, const ARMCPRegInfo *ri,
59
+ bool isread)
60
+{
61
+ return do_cacheop_pou_access(env, HCR_TOCU | HCR_TPU);
62
+}
63
+
64
/* See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions
65
* Page D4-1736 (DDI0487A.b)
66
*/
67
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
68
{ .name = "IC_IALLUIS", .state = ARM_CP_STATE_AA64,
69
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0,
70
.access = PL1_W, .type = ARM_CP_NOP,
71
- .accessfn = aa64_cacheop_pou_access },
72
+ .accessfn = access_ticab },
73
{ .name = "IC_IALLU", .state = ARM_CP_STATE_AA64,
74
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 0,
75
.access = PL1_W, .type = ARM_CP_NOP,
76
- .accessfn = aa64_cacheop_pou_access },
77
+ .accessfn = access_tocu },
78
{ .name = "IC_IVAU", .state = ARM_CP_STATE_AA64,
79
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 5, .opc2 = 1,
80
.access = PL0_W, .type = ARM_CP_NOP,
81
- .accessfn = aa64_cacheop_pou_access },
82
+ .accessfn = access_tocu },
83
{ .name = "DC_IVAC", .state = ARM_CP_STATE_AA64,
84
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 1,
85
.access = PL1_W, .accessfn = aa64_cacheop_poc_access,
86
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
87
{ .name = "DC_CVAU", .state = ARM_CP_STATE_AA64,
88
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 11, .opc2 = 1,
89
.access = PL0_W, .type = ARM_CP_NOP,
90
- .accessfn = aa64_cacheop_pou_access },
91
+ .accessfn = access_tocu },
92
{ .name = "DC_CIVAC", .state = ARM_CP_STATE_AA64,
93
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 14, .opc2 = 1,
94
.access = PL0_W, .type = ARM_CP_NOP,
95
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
96
.writefn = tlbiipas2is_hyp_write },
97
/* 32 bit cache operations */
98
{ .name = "ICIALLUIS", .cp = 15, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0,
99
- .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access },
100
+ .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_ticab },
101
{ .name = "BPIALLUIS", .cp = 15, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 6,
102
.type = ARM_CP_NOP, .access = PL1_W },
103
{ .name = "ICIALLU", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 0,
104
- .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access },
105
+ .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tocu },
106
{ .name = "ICIMVAU", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 1,
107
- .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access },
108
+ .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tocu },
109
{ .name = "BPIALL", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 6,
110
.type = ARM_CP_NOP, .access = PL1_W },
111
{ .name = "BPIMVA", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 7,
112
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
113
{ .name = "DCCSW", .cp = 15, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 2,
114
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
115
{ .name = "DCCMVAU", .cp = 15, .opc1 = 0, .crn = 7, .crm = 11, .opc2 = 1,
116
- .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access },
117
+ .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tocu },
118
{ .name = "DCCIMVAC", .cp = 15, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 1,
119
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_poc_access },
120
{ .name = "DCCISW", .cp = 15, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 2,
121
--
122
2.25.1
For FEAT_EVT, the HCR_EL2.TID4 trap allows trapping of the cache ID
registers CCSIDR_EL1, CCSIDR2_EL1, CLIDR_EL1 and CSSELR_EL1 (and
their AArch32 equivalents). This is a subset of the registers
trapped by HCR_EL2.TID2, which includes all of these and also the
CTR_EL0 register.

Our implementation already uses a separate access function for
CTR_EL0 (ctr_el0_access()), so all of the registers currently using
access_aa64_tid2() should also be checking TID4. Make that function
check both TID2 and TID4, and rename it appropriately.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.c | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper.c
21
+++ b/target/arm/helper.c
22
@@ -XXX,XX +XXX,XX @@ static void scr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
23
scr_write(env, ri, 0);
24
}
25
26
-static CPAccessResult access_aa64_tid2(CPUARMState *env,
27
- const ARMCPRegInfo *ri,
28
- bool isread)
29
+static CPAccessResult access_tid4(CPUARMState *env,
30
+ const ARMCPRegInfo *ri,
31
+ bool isread)
32
{
33
- if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TID2)) {
34
+ if (arm_current_el(env) == 1 &&
35
+ (arm_hcr_el2_eff(env) & (HCR_TID2 | HCR_TID4))) {
36
return CP_ACCESS_TRAP_EL2;
37
}
38
39
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
40
{ .name = "CCSIDR", .state = ARM_CP_STATE_BOTH,
41
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = 0,
42
.access = PL1_R,
43
- .accessfn = access_aa64_tid2,
44
+ .accessfn = access_tid4,
45
.readfn = ccsidr_read, .type = ARM_CP_NO_RAW },
46
{ .name = "CSSELR", .state = ARM_CP_STATE_BOTH,
47
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 2, .opc2 = 0,
48
.access = PL1_RW,
49
- .accessfn = access_aa64_tid2,
50
+ .accessfn = access_tid4,
51
.writefn = csselr_write, .resetvalue = 0,
52
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.csselr_s),
53
offsetof(CPUARMState, cp15.csselr_ns) } },
54
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ccsidr2_reginfo[] = {
55
{ .name = "CCSIDR2", .state = ARM_CP_STATE_BOTH,
56
.opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 2,
57
.access = PL1_R,
58
- .accessfn = access_aa64_tid2,
59
+ .accessfn = access_tid4,
60
.readfn = ccsidr2_read, .type = ARM_CP_NO_RAW },
61
};
62
63
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
64
.name = "CLIDR", .state = ARM_CP_STATE_BOTH,
65
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = 1,
66
.access = PL1_R, .type = ARM_CP_CONST,
67
- .accessfn = access_aa64_tid2,
68
+ .accessfn = access_tid4,
69
.resetvalue = cpu->clidr
70
};
71
define_one_arm_cp_reg(cpu, &clidr);
72
--
73
2.25.1
Update the ID registers for TCG's '-cpu max' to report the
FEAT_EVT Enhanced Virtualization Traps support.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
docs/system/arm/emulation.rst | 1 +
target/arm/cpu64.c | 1 +
target/arm/cpu_tcg.c | 1 +
3 files changed, 3 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
- FEAT_DoubleFault (Double Fault Extension)
- FEAT_E0PD (Preventing EL0 access to halves of address maps)
- FEAT_ETS (Enhanced Translation Synchronization)
+- FEAT_EVT (Enhanced Virtualization Traps)
- FEAT_FCMA (Floating-point complex number instructions)
- FEAT_FHM (Floating-point half-precision multiplication instructions)
- FEAT_FP16 (Half-precision floating-point data processing)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
t = FIELD_DP64(t, ID_AA64MMFR2, FWB, 1); /* FEAT_S2FWB */
t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2); /* FEAT_BBM at level 2 */
+ t = FIELD_DP64(t, ID_AA64MMFR2, EVT, 2); /* FEAT_EVT */
t = FIELD_DP64(t, ID_AA64MMFR2, E0PD, 1); /* FEAT_E0PD */
cpu->isar.id_aa64mmfr2 = t;

diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* FEAT_TTCNP */
t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* FEAT_XNX */
+ t = FIELD_DP32(t, ID_MMFR4, EVT, 2); /* FEAT_EVT */
cpu->isar.id_mmfr4 = t;

t = cpu->isar.id_mmfr5;
--
2.25.1
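
[Sketch, for illustration only: with '-cpu max' advertising the feature, code elsewhere in target/arm sees it through the ID-register test helpers added earlier in this series rather than through the CPU model name, along the lines of

  if (cpu_isar_feature(any_evt, cpu)) {
      /* all five traps present: TTLBIS, TTLBOS, TICAB, TOCU, TID4 */
  } else if (cpu_isar_feature(any_half_evt, cpu)) {
      /* partial implementation: only TICAB, TOCU and TID4 */
  }
]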
Convert the TYPE_ARM_SMMU device to 3-phase reset. The legacy method
doesn't do anything that's invalid in the hold phase, so the
conversion is simple and not a behaviour change.

Note that we must convert this base class before we can convert the
TYPE_ARM_SMMUV3 subclass -- transitional support in Resettable
handles "chain to parent class reset" when the base class is 3-phase
and the subclass is still using legacy reset, but not the other way
around.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20221109161444.3397405-2-peter.maydell@linaro.org
---
hw/arm/smmu-common.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
21
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/arm/smmu-common.c
23
+++ b/hw/arm/smmu-common.c
24
@@ -XXX,XX +XXX,XX @@ static void smmu_base_realize(DeviceState *dev, Error **errp)
25
}
26
}
27
28
-static void smmu_base_reset(DeviceState *dev)
29
+static void smmu_base_reset_hold(Object *obj)
30
{
31
- SMMUState *s = ARM_SMMU(dev);
32
+ SMMUState *s = ARM_SMMU(obj);
33
34
g_hash_table_remove_all(s->configs);
35
g_hash_table_remove_all(s->iotlb);
36
@@ -XXX,XX +XXX,XX @@ static Property smmu_dev_properties[] = {
37
static void smmu_base_class_init(ObjectClass *klass, void *data)
38
{
39
DeviceClass *dc = DEVICE_CLASS(klass);
40
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
41
SMMUBaseClass *sbc = ARM_SMMU_CLASS(klass);
42
43
device_class_set_props(dc, smmu_dev_properties);
44
device_class_set_parent_realize(dc, smmu_base_realize,
45
&sbc->parent_realize);
46
- dc->reset = smmu_base_reset;
47
+ rc->phases.hold = smmu_base_reset_hold;
48
}
49
50
static const TypeInfo smmu_base_info = {
51
--
52
2.25.1
53
54
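
[The same conversion recipe recurs throughout the rest of this series. A rough sketch for a hypothetical leaf device follows; FooState, FOO_DEVICE and foo_reset_registers are illustrative names, not from this series:

  static void foo_reset_hold(Object *obj)
  {
      FooState *s = FOO_DEVICE(obj);   /* the legacy method took a DeviceState * */

      foo_reset_registers(s);
  }

  static void foo_class_init(ObjectClass *klass, void *data)
  {
      DeviceClass *dc = DEVICE_CLASS(klass);
      ResettableClass *rc = RESETTABLE_CLASS(klass);

      /* previously: dc->reset = foo_reset; */
      rc->phases.hold = foo_reset_hold;
  }

Subclasses that need to chain to their parent's reset save the parent's phases with resettable_class_set_parent_phases() and call c->parent_phases.hold(obj) from their own hold method, as the following patches do.]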
1
Coverity points out (CID 1402195) that the loop in trans_VMOV_imm_dp()
1
Convert the TYPE_ARM_SMMUV3 device to 3-phase reset. The legacy
2
that iterates over the destination registers in a short-vector VMOV
2
reset method doesn't do anything that's invalid in the hold phase, so
3
accidentally throws away the returned updated register number
3
the conversion only requires changing it to a hold phase method, and
4
from vfp_advance_dreg(). Add the missing assignment. (We got this
4
using the 3-phase versions of the "save the parent reset method and
5
correct in trans_VMOV_imm_sp().)
5
chain to it" code.
6
6
7
Fixes: 18cf951af9a27ae573a
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20190702105115.9465-1-peter.maydell@linaro.org
9
Reviewed-by: Eric Auger <eric.auger@redhat.com>
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
Message-id: 20221109161444.3397405-3-peter.maydell@linaro.org
11
---
12
---
12
target/arm/translate-vfp.inc.c | 2 +-
13
include/hw/arm/smmuv3.h | 2 +-
13
1 file changed, 1 insertion(+), 1 deletion(-)
14
hw/arm/smmuv3.c | 12 ++++++++----
15
2 files changed, 9 insertions(+), 5 deletions(-)
14
16
15
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
17
diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
16
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-vfp.inc.c
19
--- a/include/hw/arm/smmuv3.h
18
+++ b/target/arm/translate-vfp.inc.c
20
+++ b/include/hw/arm/smmuv3.h
19
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
21
@@ -XXX,XX +XXX,XX @@ struct SMMUv3Class {
20
22
/*< public >*/
21
/* Set up the operands for the next iteration */
23
22
veclen--;
24
DeviceRealize parent_realize;
23
- vfp_advance_dreg(vd, delta_d);
25
- DeviceReset parent_reset;
24
+ vd = vfp_advance_dreg(vd, delta_d);
26
+ ResettablePhases parent_phases;
27
};
28
29
#define TYPE_ARM_SMMUV3 "arm-smmuv3"
30
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/hw/arm/smmuv3.c
33
+++ b/hw/arm/smmuv3.c
34
@@ -XXX,XX +XXX,XX @@ static void smmu_init_irq(SMMUv3State *s, SysBusDevice *dev)
25
}
35
}
26
36
}
27
tcg_temp_free_i64(fd);
37
38
-static void smmu_reset(DeviceState *dev)
39
+static void smmu_reset_hold(Object *obj)
40
{
41
- SMMUv3State *s = ARM_SMMUV3(dev);
42
+ SMMUv3State *s = ARM_SMMUV3(obj);
43
SMMUv3Class *c = ARM_SMMUV3_GET_CLASS(s);
44
45
- c->parent_reset(dev);
46
+ if (c->parent_phases.hold) {
47
+ c->parent_phases.hold(obj);
48
+ }
49
50
smmuv3_init_regs(s);
51
}
52
@@ -XXX,XX +XXX,XX @@ static void smmuv3_instance_init(Object *obj)
53
static void smmuv3_class_init(ObjectClass *klass, void *data)
54
{
55
DeviceClass *dc = DEVICE_CLASS(klass);
56
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
57
SMMUv3Class *c = ARM_SMMUV3_CLASS(klass);
58
59
dc->vmsd = &vmstate_smmuv3;
60
- device_class_set_parent_reset(dc, smmu_reset, &c->parent_reset);
61
+ resettable_class_set_parent_phases(rc, NULL, smmu_reset_hold, NULL,
62
+ &c->parent_phases);
63
c->parent_realize = dc->realize;
64
dc->realize = smmu_realize;
65
}
28
--
66
--
29
2.20.1
67
2.25.1
30
68
31
69
Convert the TYPE_ARM_GIC_COMMON device to 3-phase reset. This is a
simple no-behaviour-change conversion.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221109161444.3397405-4-peter.maydell@linaro.org
---
hw/intc/arm_gic_common.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic_common.c
+++ b/hw/intc/arm_gic_common.c
@@ -XXX,XX +XXX,XX @@ static inline void arm_gic_common_reset_irq_state(GICState *s, int first_cpu,
}
}

-static void arm_gic_common_reset(DeviceState *dev)
+static void arm_gic_common_reset_hold(Object *obj)
{
- GICState *s = ARM_GIC_COMMON(dev);
+ GICState *s = ARM_GIC_COMMON(obj);
int i, j;
int resetprio;

@@ -XXX,XX +XXX,XX @@ static Property arm_gic_common_properties[] = {
static void arm_gic_common_class_init(ObjectClass *klass, void *data)
{
DeviceClass *dc = DEVICE_CLASS(klass);
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
ARMLinuxBootIfClass *albifc = ARM_LINUX_BOOT_IF_CLASS(klass);

- dc->reset = arm_gic_common_reset;
+ rc->phases.hold = arm_gic_common_reset_hold;
dc->realize = arm_gic_common_realize;
device_class_set_props(dc, arm_gic_common_properties);
dc->vmsd = &vmstate_gic;
--
2.25.1
1
Like most of the v7M memory mapped system registers, the systick
1
Now we have converted TYPE_ARM_GIC_COMMON, we can convert the
2
registers are accessible to privileged code only and user accesses
2
TYPE_ARM_GIC_KVM subclass to 3-phase reset.
3
must generate a BusFault. We implement that for registers in
4
the NVIC proper already, but missed it for systick since we
5
implement it as a separate device. Correct the omission.
6
3
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
10
Message-id: 20190617175317.27557-6-peter.maydell@linaro.org
7
Message-id: 20221109161444.3397405-5-peter.maydell@linaro.org
11
---
8
---
12
hw/timer/armv7m_systick.c | 26 ++++++++++++++++++++------
9
hw/intc/arm_gic_kvm.c | 14 +++++++++-----
13
1 file changed, 20 insertions(+), 6 deletions(-)
10
1 file changed, 9 insertions(+), 5 deletions(-)
14
11
15
diff --git a/hw/timer/armv7m_systick.c b/hw/timer/armv7m_systick.c
12
diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
16
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/timer/armv7m_systick.c
14
--- a/hw/intc/arm_gic_kvm.c
18
+++ b/hw/timer/armv7m_systick.c
15
+++ b/hw/intc/arm_gic_kvm.c
19
@@ -XXX,XX +XXX,XX @@ static void systick_timer_tick(void *opaque)
16
@@ -XXX,XX +XXX,XX @@ DECLARE_OBJ_CHECKERS(GICState, KVMARMGICClass,
17
struct KVMARMGICClass {
18
ARMGICCommonClass parent_class;
19
DeviceRealize parent_realize;
20
- void (*parent_reset)(DeviceState *dev);
21
+ ResettablePhases parent_phases;
22
};
23
24
void kvm_arm_gic_set_irq(uint32_t num_irq, int irq, int level)
25
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_get(GICState *s)
20
}
26
}
21
}
27
}
22
28
23
-static uint64_t systick_read(void *opaque, hwaddr addr, unsigned size)
29
-static void kvm_arm_gic_reset(DeviceState *dev)
24
+static MemTxResult systick_read(void *opaque, hwaddr addr, uint64_t *data,
30
+static void kvm_arm_gic_reset_hold(Object *obj)
25
+ unsigned size, MemTxAttrs attrs)
26
{
31
{
27
SysTickState *s = opaque;
32
- GICState *s = ARM_GIC_COMMON(dev);
28
uint32_t val;
33
+ GICState *s = ARM_GIC_COMMON(obj);
29
34
KVMARMGICClass *kgc = KVM_ARM_GIC_GET_CLASS(s);
30
+ if (attrs.user) {
35
31
+ /* Generate BusFault for unprivileged accesses */
36
- kgc->parent_reset(dev);
32
+ return MEMTX_ERROR;
37
+ if (kgc->parent_phases.hold) {
38
+ kgc->parent_phases.hold(obj);
33
+ }
39
+ }
34
+
40
35
switch (addr) {
41
if (kvm_arm_gic_can_save_restore(s)) {
36
case 0x0: /* SysTick Control and Status. */
42
kvm_arm_gic_put(s);
37
val = s->control;
43
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)
38
@@ -XXX,XX +XXX,XX @@ static uint64_t systick_read(void *opaque, hwaddr addr, unsigned size)
44
static void kvm_arm_gic_class_init(ObjectClass *klass, void *data)
39
}
45
{
40
46
DeviceClass *dc = DEVICE_CLASS(klass);
41
trace_systick_read(addr, val, size);
47
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
42
- return val;
48
ARMGICCommonClass *agcc = ARM_GIC_COMMON_CLASS(klass);
43
+ *data = val;
49
KVMARMGICClass *kgc = KVM_ARM_GIC_CLASS(klass);
44
+ return MEMTX_OK;
50
51
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_class_init(ObjectClass *klass, void *data)
52
agcc->post_load = kvm_arm_gic_put;
53
device_class_set_parent_realize(dc, kvm_arm_gic_realize,
54
&kgc->parent_realize);
55
- device_class_set_parent_reset(dc, kvm_arm_gic_reset, &kgc->parent_reset);
56
+ resettable_class_set_parent_phases(rc, NULL, kvm_arm_gic_reset_hold, NULL,
57
+ &kgc->parent_phases);
45
}
58
}
46
59
47
-static void systick_write(void *opaque, hwaddr addr,
60
static const TypeInfo kvm_arm_gic_info = {
48
- uint64_t value, unsigned size)
49
+static MemTxResult systick_write(void *opaque, hwaddr addr,
50
+ uint64_t value, unsigned size,
51
+ MemTxAttrs attrs)
52
{
53
SysTickState *s = opaque;
54
55
+ if (attrs.user) {
56
+ /* Generate BusFault for unprivileged accesses */
57
+ return MEMTX_ERROR;
58
+ }
59
+
60
trace_systick_write(addr, value, size);
61
62
switch (addr) {
63
@@ -XXX,XX +XXX,XX @@ static void systick_write(void *opaque, hwaddr addr,
64
qemu_log_mask(LOG_GUEST_ERROR,
65
"SysTick: Bad write offset 0x%" HWADDR_PRIx "\n", addr);
66
}
67
+ return MEMTX_OK;
68
}
69
70
static const MemoryRegionOps systick_ops = {
71
- .read = systick_read,
72
- .write = systick_write,
73
+ .read_with_attrs = systick_read,
74
+ .write_with_attrs = systick_write,
75
.endianness = DEVICE_NATIVE_ENDIAN,
76
.valid.min_access_size = 4,
77
.valid.max_access_size = 4,
78
--
61
--
79
2.20.1
62
2.25.1
80
63
81
64
1
Thumb instructions in an IT block are set up to be conditionally
1
Convert the TYPE_ARM_GICV3_COMMON parent class to 3-phase reset.
2
executed depending on a set of condition bits encoded into the IT
3
bits of the CPSR/XPSR. The architecture specifies that if the
4
condition bits are 0b1111 this means "always execute" (like 0b1110),
5
not "never execute"; we were treating it as "never execute". (See
6
the ConditionHolds() pseudocode in both the A-profile and M-profile
7
Arm ARM.)
8
9
This is a bit of an obscure corner case, because the only legal
10
way to get to an 0b1111 set of condbits is to do an exception
11
return which sets the XPSR/CPSR up that way. An IT instruction
12
which encodes a condition sequence that would include an 0b1111 is
13
UNPREDICTABLE, and for v8A the CONSTRAINED UNPREDICTABLE choices
14
for such an IT insn are to NOP, UNDEF, or treat 0b1111 like 0b1110.
15
Add a comment noting that we take the latter option.
16
2
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
19
Message-id: 20190617175317.27557-7-peter.maydell@linaro.org
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Message-id: 20221109161444.3397405-6-peter.maydell@linaro.org
20
---
7
---
21
target/arm/translate.c | 15 +++++++++++++--
8
hw/intc/arm_gicv3_common.c | 7 ++++---
22
1 file changed, 13 insertions(+), 2 deletions(-)
9
1 file changed, 4 insertions(+), 3 deletions(-)
23
10
24
diff --git a/target/arm/translate.c b/target/arm/translate.c
11
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
25
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/translate.c
13
--- a/hw/intc/arm_gicv3_common.c
27
+++ b/target/arm/translate.c
14
+++ b/hw/intc/arm_gicv3_common.c
28
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
15
@@ -XXX,XX +XXX,XX @@ static void arm_gicv3_finalize(Object *obj)
29
gen_nop_hint(s, (insn >> 4) & 0xf);
16
g_free(s->redist_region_count);
30
break;
17
}
31
}
18
32
- /* If Then. */
19
-static void arm_gicv3_common_reset(DeviceState *dev)
33
+ /*
20
+static void arm_gicv3_common_reset_hold(Object *obj)
34
+ * IT (If-Then)
21
{
35
+ *
22
- GICv3State *s = ARM_GICV3_COMMON(dev);
36
+ * Combinations of firstcond and mask which set up an 0b1111
23
+ GICv3State *s = ARM_GICV3_COMMON(obj);
37
+ * condition are UNPREDICTABLE; we take the CONSTRAINED
24
int i;
38
+ * UNPREDICTABLE choice to treat 0b1111 the same as 0b1110,
25
39
+ * i.e. both meaning "execute always".
26
for (i = 0; i < s->num_cpu; i++) {
40
+ */
27
@@ -XXX,XX +XXX,XX @@ static Property arm_gicv3_common_properties[] = {
41
s->condexec_cond = (insn >> 4) & 0xe;
28
static void arm_gicv3_common_class_init(ObjectClass *klass, void *data)
42
s->condexec_mask = insn & 0x1f;
29
{
43
/* No actual code generated for this insn, just setup state. */
30
DeviceClass *dc = DEVICE_CLASS(klass);
44
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
31
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
45
if (dc->condexec_mask && !thumb_insn_is_unconditional(dc, insn)) {
32
ARMLinuxBootIfClass *albifc = ARM_LINUX_BOOT_IF_CLASS(klass);
46
uint32_t cond = dc->condexec_cond;
33
47
34
- dc->reset = arm_gicv3_common_reset;
48
- if (cond != 0x0e) { /* Skip conditional when condition is AL. */
35
+ rc->phases.hold = arm_gicv3_common_reset_hold;
49
+ /*
36
dc->realize = arm_gicv3_common_realize;
50
+ * Conditionally skip the insn. Note that both 0xe and 0xf mean
37
device_class_set_props(dc, arm_gicv3_common_properties);
51
+ * "always"; 0xf is not "never".
38
dc->vmsd = &vmstate_gicv3;
52
+ */
53
+ if (cond < 0x0e) {
54
arm_skip_unless(dc, cond);
55
}
56
}
57
--
39
--
58
2.20.1
40
2.25.1
59
41
60
42
1
To prevent execution priority remaining negative if the guest
1
Convert the TYPE_KVM_ARM_GICV3 device to 3-phase reset.
2
returns from an NMI or HardFault with a corrupted IPSR, the
3
v8M interrupt deactivation process forces the HardFault and NMI
4
to inactive based on the current raw execution priority,
5
even if the interrupt the guest is trying to deactivate
6
is something else. In the pseudocode this is done in the
7
Deactivate() function.
8
2
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20190617175317.27557-3-peter.maydell@linaro.org
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Message-id: 20221109161444.3397405-7-peter.maydell@linaro.org
12
---
7
---
13
hw/intc/armv7m_nvic.c | 40 +++++++++++++++++++++++++++++++++++-----
8
hw/intc/arm_gicv3_kvm.c | 14 +++++++++-----
14
1 file changed, 35 insertions(+), 5 deletions(-)
9
1 file changed, 9 insertions(+), 5 deletions(-)
15
10
16
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
11
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
17
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/intc/armv7m_nvic.c
13
--- a/hw/intc/arm_gicv3_kvm.c
19
+++ b/hw/intc/armv7m_nvic.c
14
+++ b/hw/intc/arm_gicv3_kvm.c
20
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_get_pending_irq_info(void *opaque,
15
@@ -XXX,XX +XXX,XX @@ DECLARE_OBJ_CHECKERS(GICv3State, KVMARMGICv3Class,
21
int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
16
struct KVMARMGICv3Class {
17
ARMGICv3CommonClass parent_class;
18
DeviceRealize parent_realize;
19
- void (*parent_reset)(DeviceState *dev);
20
+ ResettablePhases parent_phases;
21
};
22
23
static void kvm_arm_gicv3_set_irq(void *opaque, int irq, int level)
24
@@ -XXX,XX +XXX,XX @@ static void arm_gicv3_icc_reset(CPUARMState *env, const ARMCPRegInfo *ri)
25
c->icc_ctlr_el1[GICV3_S] = c->icc_ctlr_el1[GICV3_NS];
26
}
27
28
-static void kvm_arm_gicv3_reset(DeviceState *dev)
29
+static void kvm_arm_gicv3_reset_hold(Object *obj)
22
{
30
{
23
NVICState *s = (NVICState *)opaque;
31
- GICv3State *s = ARM_GICV3_COMMON(dev);
24
- VecInfo *vec;
32
+ GICv3State *s = ARM_GICV3_COMMON(obj);
25
+ VecInfo *vec = NULL;
33
KVMARMGICv3Class *kgc = KVM_ARM_GICV3_GET_CLASS(s);
26
int ret;
34
27
35
DPRINTF("Reset\n");
28
assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
36
29
37
- kgc->parent_reset(dev);
30
- if (secure && exc_is_banked(irq)) {
38
+ if (kgc->parent_phases.hold) {
31
- vec = &s->sec_vectors[irq];
39
+ kgc->parent_phases.hold(obj);
32
- } else {
33
- vec = &s->vectors[irq];
34
+ /*
35
+ * For negative priorities, v8M will forcibly deactivate the appropriate
36
+ * NMI or HardFault regardless of what interrupt we're being asked to
37
+ * deactivate (compare the DeActivate() pseudocode). This is a guard
38
+ * against software returning from NMI or HardFault with a corrupted
39
+ * IPSR and leaving the CPU in a negative-priority state.
40
+ * v7M does not do this, but simply deactivates the requested interrupt.
41
+ */
42
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_V8)) {
43
+ switch (armv7m_nvic_raw_execution_priority(s)) {
44
+ case -1:
45
+ if (s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) {
46
+ vec = &s->vectors[ARMV7M_EXCP_HARD];
47
+ } else {
48
+ vec = &s->sec_vectors[ARMV7M_EXCP_HARD];
49
+ }
50
+ break;
51
+ case -2:
52
+ vec = &s->vectors[ARMV7M_EXCP_NMI];
53
+ break;
54
+ case -3:
55
+ vec = &s->sec_vectors[ARMV7M_EXCP_HARD];
56
+ break;
57
+ default:
58
+ break;
59
+ }
60
+ }
40
+ }
61
+
41
62
+ if (!vec) {
42
if (s->migration_blocker) {
63
+ if (secure && exc_is_banked(irq)) {
43
DPRINTF("Cannot put kernel gic state, no kernel interface\n");
64
+ vec = &s->sec_vectors[irq];
44
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_realize(DeviceState *dev, Error **errp)
65
+ } else {
45
static void kvm_arm_gicv3_class_init(ObjectClass *klass, void *data)
66
+ vec = &s->vectors[irq];
46
{
67
+ }
47
DeviceClass *dc = DEVICE_CLASS(klass);
68
}
48
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
69
49
ARMGICv3CommonClass *agcc = ARM_GICV3_COMMON_CLASS(klass);
70
trace_nvic_complete_irq(irq, secure);
50
KVMARMGICv3Class *kgc = KVM_ARM_GICV3_CLASS(klass);
51
52
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_class_init(ObjectClass *klass, void *data)
53
agcc->post_load = kvm_arm_gicv3_put;
54
device_class_set_parent_realize(dc, kvm_arm_gicv3_realize,
55
&kgc->parent_realize);
56
- device_class_set_parent_reset(dc, kvm_arm_gicv3_reset, &kgc->parent_reset);
57
+ resettable_class_set_parent_phases(rc, NULL, kvm_arm_gicv3_reset_hold, NULL,
58
+ &kgc->parent_phases);
59
}
60
61
static const TypeInfo kvm_arm_gicv3_info = {
71
--
62
--
72
2.20.1
63
2.25.1
73
64
74
65
Convert the TYPE_ARM_GICV3_ITS_COMMON parent class to 3-phase reset.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221109161444.3397405-8-peter.maydell@linaro.org
---
hw/intc/arm_gicv3_its_common.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/hw/intc/arm_gicv3_its_common.c b/hw/intc/arm_gicv3_its_common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_its_common.c
+++ b/hw/intc/arm_gicv3_its_common.c
@@ -XXX,XX +XXX,XX @@ void gicv3_its_init_mmio(GICv3ITSState *s, const MemoryRegionOps *ops,
msi_nonbroken = true;
}

-static void gicv3_its_common_reset(DeviceState *dev)
+static void gicv3_its_common_reset_hold(Object *obj)
{
- GICv3ITSState *s = ARM_GICV3_ITS_COMMON(dev);
+ GICv3ITSState *s = ARM_GICV3_ITS_COMMON(obj);

s->ctlr = 0;
s->cbaser = 0;
@@ -XXX,XX +XXX,XX @@ static void gicv3_its_common_reset(DeviceState *dev)
static void gicv3_its_common_class_init(ObjectClass *klass, void *data)
{
DeviceClass *dc = DEVICE_CLASS(klass);
+ ResettableClass *rc = RESETTABLE_CLASS(klass);

- dc->reset = gicv3_its_common_reset;
+ rc->phases.hold = gicv3_its_common_reset_hold;
dc->vmsd = &vmstate_its;
}

--
2.25.1
1
In the various helper functions for v7M/v8M instructions, use
1
Convert the TYPE_ARM_GICV3_ITS device to 3-phase reset.
2
the _ra versions of cpu_stl_data() and friends. Otherwise we
3
may get wrong behaviour or an assert() due to not being able
4
to locate the TB if there is an exception on the memory access
5
or if it performs an IO operation when in icount mode.
6
2
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20190617175317.27557-5-peter.maydell@linaro.org
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Message-id: 20221109161444.3397405-9-peter.maydell@linaro.org
10
---
7
---
11
target/arm/m_helper.c | 21 ++++++++++++---------
8
hw/intc/arm_gicv3_its.c | 14 +++++++++-----
12
1 file changed, 12 insertions(+), 9 deletions(-)
9
1 file changed, 9 insertions(+), 5 deletions(-)
13
10
14
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
11
diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
15
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/m_helper.c
13
--- a/hw/intc/arm_gicv3_its.c
17
+++ b/target/arm/m_helper.c
14
+++ b/hw/intc/arm_gicv3_its.c
18
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
15
@@ -XXX,XX +XXX,XX @@ DECLARE_OBJ_CHECKERS(GICv3ITSState, GICv3ITSClass,
16
17
struct GICv3ITSClass {
18
GICv3ITSCommonClass parent_class;
19
- void (*parent_reset)(DeviceState *dev);
20
+ ResettablePhases parent_phases;
21
};
22
23
/*
24
@@ -XXX,XX +XXX,XX @@ static void gicv3_arm_its_realize(DeviceState *dev, Error **errp)
19
}
25
}
20
26
}
21
/* Note that these stores can throw exceptions on MPU faults */
27
22
- cpu_stl_data(env, sp, nextinst);
28
-static void gicv3_its_reset(DeviceState *dev)
23
- cpu_stl_data(env, sp + 4, saved_psr);
29
+static void gicv3_its_reset_hold(Object *obj)
24
+ cpu_stl_data_ra(env, sp, nextinst, GETPC());
25
+ cpu_stl_data_ra(env, sp + 4, saved_psr, GETPC());
26
27
env->regs[13] = sp;
28
env->regs[14] = 0xfeffffff;
29
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
30
/* fptr is the value of Rn, the frame pointer we store the FP regs to */
31
bool s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
32
bool lspact = env->v7m.fpccr[s] & R_V7M_FPCCR_LSPACT_MASK;
33
+ uintptr_t ra = GETPC();
34
35
assert(env->v7m.secure);
36
37
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
38
* Note that we do not use v7m_stack_write() here, because the
39
* accesses should not set the FSR bits for stacking errors if they
40
* fail. (In pseudocode terms, they are AccType_NORMAL, not AccType_STACK
41
- * or AccType_LAZYFP). Faults in cpu_stl_data() will throw exceptions
42
+ * or AccType_LAZYFP). Faults in cpu_stl_data_ra() will throw exceptions
43
* and longjmp out.
44
*/
45
if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
46
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
47
if (i >= 16) {
48
faddr += 8; /* skip the slot for the FPSCR */
49
}
50
- cpu_stl_data(env, faddr, slo);
51
- cpu_stl_data(env, faddr + 4, shi);
52
+ cpu_stl_data_ra(env, faddr, slo, ra);
53
+ cpu_stl_data_ra(env, faddr + 4, shi, ra);
54
}
55
- cpu_stl_data(env, fptr + 0x40, vfp_get_fpscr(env));
56
+ cpu_stl_data_ra(env, fptr + 0x40, vfp_get_fpscr(env), ra);
57
58
/*
59
* If TS is 0 then s0 to s15 and FPSCR are UNKNOWN; we choose to
60
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
61
62
void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
63
{
30
{
64
+ uintptr_t ra = GETPC();
31
- GICv3ITSState *s = ARM_GICV3_ITS_COMMON(dev);
65
+
32
+ GICv3ITSState *s = ARM_GICV3_ITS_COMMON(obj);
66
/* fptr is the value of Rn, the frame pointer we load the FP regs from */
33
GICv3ITSClass *c = ARM_GICV3_ITS_GET_CLASS(s);
67
assert(env->v7m.secure);
34
68
35
- c->parent_reset(dev);
69
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
36
+ if (c->parent_phases.hold) {
70
faddr += 8; /* skip the slot for the FPSCR */
37
+ c->parent_phases.hold(obj);
71
}
38
+ }
72
39
73
- slo = cpu_ldl_data(env, faddr);
40
/* Quiescent bit reset to 1 */
74
- shi = cpu_ldl_data(env, faddr + 4);
41
s->ctlr = FIELD_DP32(s->ctlr, GITS_CTLR, QUIESCENT, 1);
75
+ slo = cpu_ldl_data_ra(env, faddr, ra);
42
@@ -XXX,XX +XXX,XX @@ static Property gicv3_its_props[] = {
76
+ shi = cpu_ldl_data_ra(env, faddr + 4, ra);
43
static void gicv3_its_class_init(ObjectClass *klass, void *data)
77
44
{
78
dn = (uint64_t) shi << 32 | slo;
45
DeviceClass *dc = DEVICE_CLASS(klass);
79
*aa32_vfp_dreg(env, i / 2) = dn;
46
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
80
}
47
GICv3ITSClass *ic = ARM_GICV3_ITS_CLASS(klass);
81
- fpscr = cpu_ldl_data(env, fptr + 0x40);
48
GICv3ITSCommonClass *icc = ARM_GICV3_ITS_COMMON_CLASS(klass);
82
+ fpscr = cpu_ldl_data_ra(env, fptr + 0x40, ra);
49
83
vfp_set_fpscr(env, fpscr);
50
dc->realize = gicv3_arm_its_realize;
84
}
51
device_class_set_props(dc, gicv3_its_props);
52
- device_class_set_parent_reset(dc, gicv3_its_reset, &ic->parent_reset);
53
+ resettable_class_set_parent_phases(rc, NULL, gicv3_its_reset_hold, NULL,
54
+ &ic->parent_phases);
55
icc->post_load = gicv3_its_post_load;
56
}
85
57
86
--
58
--
87
2.20.1
59
2.25.1
88
60
89
61
diff view generated by jsdifflib
1
In v8M, an attempt to return from an exception which is not
1
Convert the TYPE_KVM_ARM_ITS device to 3-phase reset.
2
active is an illegal exception return. For this purpose,
3
exceptions which can configurably target either Secure or
4
NonSecure are not considered to be active if they are
5
configured for the opposite security state to the one
6
we're trying to return from (eg attempt to return from
7
an NS NMI but NMI targets Secure). In the pseudocode this
8
is handled by IsActiveForState().
9
10
Detect this case rather than counting an active exception
11
possibly of the wrong security state as being sufficient.
12
2
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20190617175317.27557-4-peter.maydell@linaro.org
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Message-id: 20221109161444.3397405-10-peter.maydell@linaro.org
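Both ITS conversions in this series follow the same recipe, which is easier
to see consolidated than as +/- hunks. A minimal sketch using the calls
visible in these diffs (the Foo* names are placeholders, not a buildable
device):

    struct FooClass {
        FooParentClass parent_class;      /* whatever the parent's class struct is */
        ResettablePhases parent_phases;   /* replaces void (*parent_reset)(DeviceState *) */
    };

    static void foo_reset_hold(Object *obj)
    {
        FooState *s = FOO(obj);
        FooClass *c = FOO_GET_CLASS(s);

        /* chain to the parent's hold phase, if it implements one */
        if (c->parent_phases.hold) {
            c->parent_phases.hold(obj);
        }
        /* ... device-specific reset of *s goes here ... */
    }

    static void foo_class_init(ObjectClass *klass, void *data)
    {
        ResettableClass *rc = RESETTABLE_CLASS(klass);
        FooClass *fc = FOO_CLASS(klass);

        /* was: device_class_set_parent_reset(dc, foo_reset, &fc->parent_reset); */
        resettable_class_set_parent_phases(rc, NULL, foo_reset_hold, NULL,
                                           &fc->parent_phases);
    }

Note that the hold-phase function takes an Object * rather than a
DeviceState *, so the state cast switches from ARM_GICV3_ITS_COMMON(dev) to
ARM_GICV3_ITS_COMMON(obj) in the hunks above and below.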
16
---
7
---
17
hw/intc/armv7m_nvic.c | 14 +++++++++++++-
8
hw/intc/arm_gicv3_its_kvm.c | 14 +++++++++-----
18
1 file changed, 13 insertions(+), 1 deletion(-)
9
1 file changed, 9 insertions(+), 5 deletions(-)
19
10
20
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
11
diff --git a/hw/intc/arm_gicv3_its_kvm.c b/hw/intc/arm_gicv3_its_kvm.c
21
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/intc/armv7m_nvic.c
13
--- a/hw/intc/arm_gicv3_its_kvm.c
23
+++ b/hw/intc/armv7m_nvic.c
14
+++ b/hw/intc/arm_gicv3_its_kvm.c
24
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
15
@@ -XXX,XX +XXX,XX @@ DECLARE_OBJ_CHECKERS(GICv3ITSState, KVMARMITSClass,
25
return -1;
16
26
}
17
struct KVMARMITSClass {
27
18
GICv3ITSCommonClass parent_class;
28
- ret = nvic_rettobase(s);
19
- void (*parent_reset)(DeviceState *dev);
29
+ /*
20
+ ResettablePhases parent_phases;
30
+ * If this is a configurable exception and it is currently
21
};
31
+ * targeting the opposite security state from the one we're trying
22
32
+ * to complete it for, this counts as an illegal exception return.
23
33
+ * We still need to deactivate whatever vector the logic above has
24
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_its_post_load(GICv3ITSState *s)
34
+ * selected, though, as it might not be the same as the one for the
25
GITS_CTLR, &s->ctlr, true, &error_abort);
35
+ * requested exception number.
26
}
36
+ */
27
37
+ if (!exc_is_banked(irq) && exc_targets_secure(s, irq) != secure) {
28
-static void kvm_arm_its_reset(DeviceState *dev)
38
+ ret = -1;
29
+static void kvm_arm_its_reset_hold(Object *obj)
39
+ } else {
30
{
40
+ ret = nvic_rettobase(s);
31
- GICv3ITSState *s = ARM_GICV3_ITS_COMMON(dev);
32
+ GICv3ITSState *s = ARM_GICV3_ITS_COMMON(obj);
33
KVMARMITSClass *c = KVM_ARM_ITS_GET_CLASS(s);
34
int i;
35
36
- c->parent_reset(dev);
37
+ if (c->parent_phases.hold) {
38
+ c->parent_phases.hold(obj);
41
+ }
39
+ }
42
40
43
vec->active = 0;
41
if (kvm_device_check_attr(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_CTRL,
44
if (vec->level) {
42
KVM_DEV_ARM_ITS_CTRL_RESET)) {
43
@@ -XXX,XX +XXX,XX @@ static Property kvm_arm_its_props[] = {
44
static void kvm_arm_its_class_init(ObjectClass *klass, void *data)
45
{
46
DeviceClass *dc = DEVICE_CLASS(klass);
47
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
48
GICv3ITSCommonClass *icc = ARM_GICV3_ITS_COMMON_CLASS(klass);
49
KVMARMITSClass *ic = KVM_ARM_ITS_CLASS(klass);
50
51
dc->realize = kvm_arm_its_realize;
52
device_class_set_props(dc, kvm_arm_its_props);
53
- device_class_set_parent_reset(dc, kvm_arm_its_reset, &ic->parent_reset);
54
+ resettable_class_set_parent_phases(rc, NULL, kvm_arm_its_reset_hold, NULL,
55
+ &ic->parent_phases);
56
icc->send_msi = kvm_its_send_msi;
57
icc->pre_save = kvm_arm_its_pre_save;
58
icc->post_load = kvm_arm_its_post_load;
45
--
59
--
46
2.20.1
60
2.25.1
47
61
48
62
diff view generated by jsdifflib
New patch
1
From: Schspa Shi <schspa@gmail.com>
1
2
3
We use a 32-bit value for linux,initrd-[start/end], so when we have
4
loader_start > 4GB a wrong initrd_start is passed
5
to the kernel, and the kernel will report the following warning.
6
7
[ 0.000000] ------------[ cut here ]------------
8
[ 0.000000] initrd not fully accessible via the linear mapping -- please check your bootloader ...
9
[ 0.000000] WARNING: CPU: 0 PID: 0 at arch/arm64/mm/init.c:355 arm64_memblock_init+0x158/0x244
10
[ 0.000000] Modules linked in:
11
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Tainted: G W 6.1.0-rc3-13250-g30a0b95b1335-dirty #28
12
[ 0.000000] Hardware name: Horizon Sigi Virtual development board (DT)
13
[ 0.000000] pstate: 600000c5 (nZCv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
14
[ 0.000000] pc : arm64_memblock_init+0x158/0x244
15
[ 0.000000] lr : arm64_memblock_init+0x158/0x244
16
[ 0.000000] sp : ffff800009273df0
17
[ 0.000000] x29: ffff800009273df0 x28: 0000001000cc0010 x27: 0000800000000000
18
[ 0.000000] x26: 000000000050a3e2 x25: ffff800008b46000 x24: ffff800008b46000
19
[ 0.000000] x23: ffff800008a53000 x22: ffff800009420000 x21: ffff800008a53000
20
[ 0.000000] x20: 0000000004000000 x19: 0000000004000000 x18: 00000000ffff1020
21
[ 0.000000] x17: 6568632065736165 x16: 6c70202d2d20676e x15: 697070616d207261
22
[ 0.000000] x14: 656e696c20656874 x13: 0a2e2e2e20726564 x12: 0000000000000000
23
[ 0.000000] x11: 0000000000000000 x10: 00000000ffffffff x9 : 0000000000000000
24
[ 0.000000] x8 : 0000000000000000 x7 : 796c6c756620746f x6 : 6e20647274696e69
25
[ 0.000000] x5 : ffff8000093c7c47 x4 : ffff800008a2102f x3 : ffff800009273a88
26
[ 0.000000] x2 : 80000000fffff038 x1 : 00000000000000c0 x0 : 0000000000000056
27
[ 0.000000] Call trace:
28
[ 0.000000] arm64_memblock_init+0x158/0x244
29
[ 0.000000] setup_arch+0x164/0x1cc
30
[ 0.000000] start_kernel+0x94/0x4ac
31
[ 0.000000] __primary_switched+0xb4/0xbc
32
[ 0.000000] ---[ end trace 0000000000000000 ]---
33
[ 0.000000] Zone ranges:
34
[ 0.000000] DMA [mem 0x0000001000000000-0x0000001007ffffff]
35
36
This doesn't affect any machine types we currently support, because
37
for all of our machine types the RAM starts well below the 4GB
38
mark, but it does demonstrate that we're not currently writing
39
the device-tree properties quite as intended.
40
41
To fix it, we can change it to write these values to the dtb using a
42
type width matching #address-cells. This is the intended size for
43
these dtb properties, and is how u-boot, for instance, writes them,
44
although in practice the Linux kernel will cope with them being any
45
width as long as they're big enough to fit the value.
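The truncation itself is plain integer arithmetic; here is a small
self-contained demonstration (ordinary C, not QEMU code, with a made-up
example address) of why a single 32-bit cell cannot carry such an initrd
address:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t initrd_start = 0x1040000000ULL;     /* example address above 4GB */
        uint32_t one_cell = (uint32_t)initrd_start;  /* what a single fdt cell keeps */

        printf("real initrd start:  0x%llx\n", (unsigned long long)initrd_start);
        printf("single 32-bit cell: 0x%x\n", one_cell);  /* top 32 bits are gone */
        return 0;
    }

With #address-cells = 2, qemu_fdt_setprop_sized_cells() writes the property
using two cells, so the full 64-bit value reaches the kernel intact.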
46
47
Signed-off-by: Schspa Shi <schspa@gmail.com>
48
Message-id: 20221129160724.75667-1-schspa@gmail.com
49
[PMM: tweaked commit message]
50
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
51
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
52
---
53
hw/arm/boot.c | 10 ++++++----
54
1 file changed, 6 insertions(+), 4 deletions(-)
55
56
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/hw/arm/boot.c
59
+++ b/hw/arm/boot.c
60
@@ -XXX,XX +XXX,XX @@ int arm_load_dtb(hwaddr addr, const struct arm_boot_info *binfo,
61
}
62
63
if (binfo->initrd_size) {
64
- rc = qemu_fdt_setprop_cell(fdt, "/chosen", "linux,initrd-start",
65
- binfo->initrd_start);
66
+ rc = qemu_fdt_setprop_sized_cells(fdt, "/chosen", "linux,initrd-start",
67
+ acells, binfo->initrd_start);
68
if (rc < 0) {
69
fprintf(stderr, "couldn't set /chosen/linux,initrd-start\n");
70
goto fail;
71
}
72
73
- rc = qemu_fdt_setprop_cell(fdt, "/chosen", "linux,initrd-end",
74
- binfo->initrd_start + binfo->initrd_size);
75
+ rc = qemu_fdt_setprop_sized_cells(fdt, "/chosen", "linux,initrd-end",
76
+ acells,
77
+ binfo->initrd_start +
78
+ binfo->initrd_size);
79
if (rc < 0) {
80
fprintf(stderr, "couldn't set /chosen/linux,initrd-end\n");
81
goto fail;
82
--
83
2.25.1
diff view generated by jsdifflib
New patch
1
From: Zhuojia Shen <chaosdefinition@hotmail.com>
1
2
3
Of the CPUID registers that should be exposed to userspace, some were missing
4
and some fields were not exposed. This patch aligns exposed ID
5
registers and their fields with what the upstream kernel currently
6
exposes.
7
8
Specifically, the following new ID registers/fields are exposed to
9
userspace:
10
11
ID_AA64PFR1_EL1.BT: bits 3-0
12
ID_AA64PFR1_EL1.MTE: bits 11-8
13
ID_AA64PFR1_EL1.SME: bits 27-24
14
15
ID_AA64ZFR0_EL1.SVEver: bits 3-0
16
ID_AA64ZFR0_EL1.AES: bits 7-4
17
ID_AA64ZFR0_EL1.BitPerm: bits 19-16
18
ID_AA64ZFR0_EL1.BF16: bits 23-20
19
ID_AA64ZFR0_EL1.SHA3: bits 35-32
20
ID_AA64ZFR0_EL1.SM4: bits 43-40
21
ID_AA64ZFR0_EL1.I8MM: bits 47-44
22
ID_AA64ZFR0_EL1.F32MM: bits 55-52
23
ID_AA64ZFR0_EL1.F64MM: bits 59-56
24
25
ID_AA64SMFR0_EL1.F32F32: bit 32
26
ID_AA64SMFR0_EL1.B16F32: bit 34
27
ID_AA64SMFR0_EL1.F16F32: bit 35
28
ID_AA64SMFR0_EL1.I8I32: bits 39-36
29
ID_AA64SMFR0_EL1.F64F64: bit 48
30
ID_AA64SMFR0_EL1.I16I64: bits 55-52
31
ID_AA64SMFR0_EL1.FA64: bit 63
32
33
ID_AA64MMFR0_EL1.ECV: bits 63-60
34
35
ID_AA64MMFR1_EL1.AFP: bits 47-44
36
37
ID_AA64MMFR2_EL1.AT: bits 35-32
38
39
ID_AA64ISAR0_EL1.RNDR: bits 63-60
40
41
ID_AA64ISAR1_EL1.FRINTTS: bits 35-32
42
ID_AA64ISAR1_EL1.BF16: bits 47-44
43
ID_AA64ISAR1_EL1.DGH: bits 51-48
44
ID_AA64ISAR1_EL1.I8MM: bits 55-52
45
46
ID_AA64ISAR2_EL1.WFxT: bits 3-0
47
ID_AA64ISAR2_EL1.RPRES: bits 7-4
48
ID_AA64ISAR2_EL1.GPA3: bits 11-8
49
ID_AA64ISAR2_EL1.APA3: bits 15-12
50
51
The code is also refactored to use symbolic names for ID register fields
52
for better readability and maintainability.
53
54
Signed-off-by: Zhuojia Shen <chaosdefinition@hotmail.com>
55
Message-id: DS7PR12MB6309BC9133877BCC6FC419FEAC0D9@DS7PR12MB6309.namprd12.prod.outlook.com
56
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
57
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
58
---
59
target/arm/helper.c | 96 +++++++++++++++++++++++++++++++++++++--------
60
1 file changed, 79 insertions(+), 17 deletions(-)
61
62
diff --git a/target/arm/helper.c b/target/arm/helper.c
63
index XXXXXXX..XXXXXXX 100644
64
--- a/target/arm/helper.c
65
+++ b/target/arm/helper.c
66
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
67
#ifdef CONFIG_USER_ONLY
68
static const ARMCPRegUserSpaceInfo v8_user_idregs[] = {
69
{ .name = "ID_AA64PFR0_EL1",
70
- .exported_bits = 0x000f000f00ff0000,
71
- .fixed_bits = 0x0000000000000011 },
72
+ .exported_bits = R_ID_AA64PFR0_FP_MASK |
73
+ R_ID_AA64PFR0_ADVSIMD_MASK |
74
+ R_ID_AA64PFR0_SVE_MASK |
75
+ R_ID_AA64PFR0_DIT_MASK,
76
+ .fixed_bits = (0x1 << R_ID_AA64PFR0_EL0_SHIFT) |
77
+ (0x1 << R_ID_AA64PFR0_EL1_SHIFT) },
78
{ .name = "ID_AA64PFR1_EL1",
79
- .exported_bits = 0x00000000000000f0 },
80
+ .exported_bits = R_ID_AA64PFR1_BT_MASK |
81
+ R_ID_AA64PFR1_SSBS_MASK |
82
+ R_ID_AA64PFR1_MTE_MASK |
83
+ R_ID_AA64PFR1_SME_MASK },
84
{ .name = "ID_AA64PFR*_EL1_RESERVED",
85
- .is_glob = true },
86
- { .name = "ID_AA64ZFR0_EL1" },
87
+ .is_glob = true },
88
+ { .name = "ID_AA64ZFR0_EL1",
89
+ .exported_bits = R_ID_AA64ZFR0_SVEVER_MASK |
90
+ R_ID_AA64ZFR0_AES_MASK |
91
+ R_ID_AA64ZFR0_BITPERM_MASK |
92
+ R_ID_AA64ZFR0_BFLOAT16_MASK |
93
+ R_ID_AA64ZFR0_SHA3_MASK |
94
+ R_ID_AA64ZFR0_SM4_MASK |
95
+ R_ID_AA64ZFR0_I8MM_MASK |
96
+ R_ID_AA64ZFR0_F32MM_MASK |
97
+ R_ID_AA64ZFR0_F64MM_MASK },
98
+ { .name = "ID_AA64SMFR0_EL1",
99
+ .exported_bits = R_ID_AA64SMFR0_F32F32_MASK |
100
+ R_ID_AA64SMFR0_B16F32_MASK |
101
+ R_ID_AA64SMFR0_F16F32_MASK |
102
+ R_ID_AA64SMFR0_I8I32_MASK |
103
+ R_ID_AA64SMFR0_F64F64_MASK |
104
+ R_ID_AA64SMFR0_I16I64_MASK |
105
+ R_ID_AA64SMFR0_FA64_MASK },
106
{ .name = "ID_AA64MMFR0_EL1",
107
- .fixed_bits = 0x00000000ff000000 },
108
- { .name = "ID_AA64MMFR1_EL1" },
109
+ .exported_bits = R_ID_AA64MMFR0_ECV_MASK,
110
+ .fixed_bits = (0xf << R_ID_AA64MMFR0_TGRAN64_SHIFT) |
111
+ (0xf << R_ID_AA64MMFR0_TGRAN4_SHIFT) },
112
+ { .name = "ID_AA64MMFR1_EL1",
113
+ .exported_bits = R_ID_AA64MMFR1_AFP_MASK },
114
+ { .name = "ID_AA64MMFR2_EL1",
115
+ .exported_bits = R_ID_AA64MMFR2_AT_MASK },
116
{ .name = "ID_AA64MMFR*_EL1_RESERVED",
117
- .is_glob = true },
118
+ .is_glob = true },
119
{ .name = "ID_AA64DFR0_EL1",
120
- .fixed_bits = 0x0000000000000006 },
121
- { .name = "ID_AA64DFR1_EL1" },
122
+ .fixed_bits = (0x6 << R_ID_AA64DFR0_DEBUGVER_SHIFT) },
123
+ { .name = "ID_AA64DFR1_EL1" },
124
{ .name = "ID_AA64DFR*_EL1_RESERVED",
125
- .is_glob = true },
126
+ .is_glob = true },
127
{ .name = "ID_AA64AFR*",
128
- .is_glob = true },
129
+ .is_glob = true },
130
{ .name = "ID_AA64ISAR0_EL1",
131
- .exported_bits = 0x00fffffff0fffff0 },
132
+ .exported_bits = R_ID_AA64ISAR0_AES_MASK |
133
+ R_ID_AA64ISAR0_SHA1_MASK |
134
+ R_ID_AA64ISAR0_SHA2_MASK |
135
+ R_ID_AA64ISAR0_CRC32_MASK |
136
+ R_ID_AA64ISAR0_ATOMIC_MASK |
137
+ R_ID_AA64ISAR0_RDM_MASK |
138
+ R_ID_AA64ISAR0_SHA3_MASK |
139
+ R_ID_AA64ISAR0_SM3_MASK |
140
+ R_ID_AA64ISAR0_SM4_MASK |
141
+ R_ID_AA64ISAR0_DP_MASK |
142
+ R_ID_AA64ISAR0_FHM_MASK |
143
+ R_ID_AA64ISAR0_TS_MASK |
144
+ R_ID_AA64ISAR0_RNDR_MASK },
145
{ .name = "ID_AA64ISAR1_EL1",
146
- .exported_bits = 0x000000f0ffffffff },
147
+ .exported_bits = R_ID_AA64ISAR1_DPB_MASK |
148
+ R_ID_AA64ISAR1_APA_MASK |
149
+ R_ID_AA64ISAR1_API_MASK |
150
+ R_ID_AA64ISAR1_JSCVT_MASK |
151
+ R_ID_AA64ISAR1_FCMA_MASK |
152
+ R_ID_AA64ISAR1_LRCPC_MASK |
153
+ R_ID_AA64ISAR1_GPA_MASK |
154
+ R_ID_AA64ISAR1_GPI_MASK |
155
+ R_ID_AA64ISAR1_FRINTTS_MASK |
156
+ R_ID_AA64ISAR1_SB_MASK |
157
+ R_ID_AA64ISAR1_BF16_MASK |
158
+ R_ID_AA64ISAR1_DGH_MASK |
159
+ R_ID_AA64ISAR1_I8MM_MASK },
160
+ { .name = "ID_AA64ISAR2_EL1",
161
+ .exported_bits = R_ID_AA64ISAR2_WFXT_MASK |
162
+ R_ID_AA64ISAR2_RPRES_MASK |
163
+ R_ID_AA64ISAR2_GPA3_MASK |
164
+ R_ID_AA64ISAR2_APA3_MASK },
165
{ .name = "ID_AA64ISAR*_EL1_RESERVED",
166
- .is_glob = true },
167
+ .is_glob = true },
168
};
169
modify_arm_cp_regs(v8_idregs, v8_user_idregs);
170
#endif
171
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
172
#ifdef CONFIG_USER_ONLY
173
static const ARMCPRegUserSpaceInfo id_v8_user_midr_cp_reginfo[] = {
174
{ .name = "MIDR_EL1",
175
- .exported_bits = 0x00000000ffffffff },
176
- { .name = "REVIDR_EL1" },
177
+ .exported_bits = R_MIDR_EL1_REVISION_MASK |
178
+ R_MIDR_EL1_PARTNUM_MASK |
179
+ R_MIDR_EL1_ARCHITECTURE_MASK |
180
+ R_MIDR_EL1_VARIANT_MASK |
181
+ R_MIDR_EL1_IMPLEMENTER_MASK },
182
+ { .name = "REVIDR_EL1" },
183
};
184
modify_arm_cp_regs(id_v8_midr_cp_reginfo, id_v8_user_midr_cp_reginfo);
185
#endif
186
--
187
2.25.1
diff view generated by jsdifflib
New patch
1
From: Thomas Huth <thuth@redhat.com>
1
2
3
The header target/arm/kvm-consts.h checks CONFIG_KVM which is marked as
4
poisoned in common code, so the files that include this header have to
5
be added to specific_ss and recompiled for each, qemu-system-arm and
6
qemu-system-aarch64. However, since the kvm headers are only optionally
7
used in kvm-constants.h for some sanity checks, we can additionally
8
check the NEED_CPU_H macro first to avoid the poisoned CONFIG_KVM macro,
9
so kvm-constants.h can also be used from "common" files (without the
10
sanity checks - which should be OK since they are still done from other
11
target-specific files instead). This way, and by adjusting some other
12
include statements in the related files here and there, we can move some
13
files from specific_ss into softmmu_ss, so that they only need to be
14
compiled once during the build process.
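Consolidated from the hunk below, the guard in kvm-consts.h ends up
structured like this (a sketch of the post-patch preprocessor logic, not a
standalone compile):

    #ifdef NEED_CPU_H            /* only target-specific compilation units... */
    #ifdef CONFIG_KVM            /* ...so common code never tests the poisoned macro */
    #include <linux/kvm.h>
    #include <linux/psci.h>
    #define MISMATCH_CHECK(X, Y) QEMU_BUILD_BUG_ON(X != Y)
    #endif
    #endif

    #ifndef MISMATCH_CHECK       /* common code, or KVM not built: checks compile away */
    #define MISMATCH_CHECK(X, Y) QEMU_BUILD_BUG_ON(0)
    #endif

Common (softmmu_ss) objects therefore get the no-op MISMATCH_CHECK, while the
real sanity checks still run in the target-specific objects that define
NEED_CPU_H and have KVM enabled.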
15
16
Signed-off-by: Thomas Huth <thuth@redhat.com>
17
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
18
Message-id: 20221202154023.293614-1-thuth@redhat.com
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
---
21
include/hw/misc/xlnx-zynqmp-apu-ctrl.h | 2 +-
22
target/arm/kvm-consts.h | 8 ++++----
23
hw/misc/imx6_src.c | 2 +-
24
hw/misc/iotkit-sysctl.c | 1 -
25
hw/misc/meson.build | 11 +++++------
26
5 files changed, 11 insertions(+), 13 deletions(-)
27
28
diff --git a/include/hw/misc/xlnx-zynqmp-apu-ctrl.h b/include/hw/misc/xlnx-zynqmp-apu-ctrl.h
29
index XXXXXXX..XXXXXXX 100644
30
--- a/include/hw/misc/xlnx-zynqmp-apu-ctrl.h
31
+++ b/include/hw/misc/xlnx-zynqmp-apu-ctrl.h
32
@@ -XXX,XX +XXX,XX @@
33
34
#include "hw/sysbus.h"
35
#include "hw/register.h"
36
-#include "target/arm/cpu.h"
37
+#include "target/arm/cpu-qom.h"
38
39
#define TYPE_XLNX_ZYNQMP_APU_CTRL "xlnx.apu-ctrl"
40
OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPAPUCtrl, XLNX_ZYNQMP_APU_CTRL)
41
diff --git a/target/arm/kvm-consts.h b/target/arm/kvm-consts.h
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/kvm-consts.h
44
+++ b/target/arm/kvm-consts.h
45
@@ -XXX,XX +XXX,XX @@
46
#ifndef ARM_KVM_CONSTS_H
47
#define ARM_KVM_CONSTS_H
48
49
+#ifdef NEED_CPU_H
50
#ifdef CONFIG_KVM
51
#include <linux/kvm.h>
52
#include <linux/psci.h>
53
-
54
#define MISMATCH_CHECK(X, Y) QEMU_BUILD_BUG_ON(X != Y)
55
+#endif
56
+#endif
57
58
-#else
59
-
60
+#ifndef MISMATCH_CHECK
61
#define MISMATCH_CHECK(X, Y) QEMU_BUILD_BUG_ON(0)
62
-
63
#endif
64
65
#define CP_REG_SIZE_SHIFT 52
66
diff --git a/hw/misc/imx6_src.c b/hw/misc/imx6_src.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/hw/misc/imx6_src.c
69
+++ b/hw/misc/imx6_src.c
70
@@ -XXX,XX +XXX,XX @@
71
#include "qemu/log.h"
72
#include "qemu/main-loop.h"
73
#include "qemu/module.h"
74
-#include "arm-powerctl.h"
75
+#include "target/arm/arm-powerctl.h"
76
#include "hw/core/cpu.h"
77
78
#ifndef DEBUG_IMX6_SRC
79
diff --git a/hw/misc/iotkit-sysctl.c b/hw/misc/iotkit-sysctl.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/hw/misc/iotkit-sysctl.c
82
+++ b/hw/misc/iotkit-sysctl.c
83
@@ -XXX,XX +XXX,XX @@
84
#include "hw/qdev-properties.h"
85
#include "hw/arm/armsse-version.h"
86
#include "target/arm/arm-powerctl.h"
87
-#include "target/arm/cpu.h"
88
89
REG32(SECDBGSTAT, 0x0)
90
REG32(SECDBGSET, 0x4)
91
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
92
index XXXXXXX..XXXXXXX 100644
93
--- a/hw/misc/meson.build
94
+++ b/hw/misc/meson.build
95
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_IMX', if_true: files(
96
'imx25_ccm.c',
97
'imx31_ccm.c',
98
'imx6_ccm.c',
99
+ 'imx6_src.c',
100
'imx6ul_ccm.c',
101
'imx7_ccm.c',
102
'imx7_gpr.c',
103
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_RASPI', if_true: files(
104
))
105
softmmu_ss.add(when: 'CONFIG_SLAVIO', if_true: files('slavio_misc.c'))
106
softmmu_ss.add(when: 'CONFIG_ZYNQ', if_true: files('zynq_slcr.c'))
107
-specific_ss.add(when: 'CONFIG_XLNX_ZYNQMP_ARM', if_true: files('xlnx-zynqmp-crf.c'))
108
-specific_ss.add(when: 'CONFIG_XLNX_ZYNQMP_ARM', if_true: files('xlnx-zynqmp-apu-ctrl.c'))
109
+softmmu_ss.add(when: 'CONFIG_XLNX_ZYNQMP_ARM', if_true: files('xlnx-zynqmp-crf.c'))
110
+softmmu_ss.add(when: 'CONFIG_XLNX_ZYNQMP_ARM', if_true: files('xlnx-zynqmp-apu-ctrl.c'))
111
specific_ss.add(when: 'CONFIG_XLNX_VERSAL', if_true: files('xlnx-versal-crl.c'))
112
softmmu_ss.add(when: 'CONFIG_XLNX_VERSAL', if_true: files(
113
'xlnx-versal-xramc.c',
114
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_TZ_MPC', if_true: files('tz-mpc.c'))
115
softmmu_ss.add(when: 'CONFIG_TZ_MSC', if_true: files('tz-msc.c'))
116
softmmu_ss.add(when: 'CONFIG_TZ_PPC', if_true: files('tz-ppc.c'))
117
softmmu_ss.add(when: 'CONFIG_IOTKIT_SECCTL', if_true: files('iotkit-secctl.c'))
118
+softmmu_ss.add(when: 'CONFIG_IOTKIT_SYSCTL', if_true: files('iotkit-sysctl.c'))
119
softmmu_ss.add(when: 'CONFIG_IOTKIT_SYSINFO', if_true: files('iotkit-sysinfo.c'))
120
softmmu_ss.add(when: 'CONFIG_ARMSSE_CPU_PWRCTRL', if_true: files('armsse-cpu-pwrctrl.c'))
121
softmmu_ss.add(when: 'CONFIG_ARMSSE_CPUID', if_true: files('armsse-cpuid.c'))
122
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_GRLIB', if_true: files('grlib_ahb_apb_pnp.c'))
123
124
specific_ss.add(when: 'CONFIG_AVR_POWER', if_true: files('avr_power.c'))
125
126
-specific_ss.add(when: 'CONFIG_IMX', if_true: files('imx6_src.c'))
127
-specific_ss.add(when: 'CONFIG_IOTKIT_SYSCTL', if_true: files('iotkit-sysctl.c'))
128
-
129
specific_ss.add(when: 'CONFIG_MAC_VIA', if_true: files('mac_via.c'))
130
131
specific_ss.add(when: 'CONFIG_MIPS_CPS', if_true: files('mips_cmgcr.c', 'mips_cpc.c'))
132
specific_ss.add(when: 'CONFIG_MIPS_ITU', if_true: files('mips_itu.c'))
133
134
-specific_ss.add(when: 'CONFIG_SBSA_REF', if_true: files('sbsa_ec.c'))
135
+softmmu_ss.add(when: 'CONFIG_SBSA_REF', if_true: files('sbsa_ec.c'))
136
137
# HPPA devices
138
softmmu_ss.add(when: 'CONFIG_LASI', if_true: files('lasi.c'))
139
--
140
2.25.1
141
142
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
These routines are TCG specific.
3
When building with --disable-tcg on Darwin we get:
4
4
5
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
target/arm/cpu.c:725:16: error: incomplete definition of type 'struct TCGCPUOps'
6
Message-id: 20190701194942.10092-2-philmd@redhat.com
6
cc->tcg_ops->do_interrupt(cs);
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
~~~~~~~~~~~^
8
9
Commit 083afd18a9 ("target/arm: Restrict cpu_exec_interrupt()
10
handler to sysemu") limited this block to system emulation,
11
but neglected to also limit it to TCG.
12
13
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
14
Reviewed-by: Fabiano Rosas <farosas@suse.de>
15
Message-id: 20221209110823.59495-1-philmd@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
17
---
10
target/arm/Makefile.objs | 2 +-
18
target/arm/cpu.c | 5 +++--
11
target/arm/cpu.c | 9 +-
19
1 file changed, 3 insertions(+), 2 deletions(-)
12
target/arm/debug_helper.c | 311 ++++++++++++++++++++++++++++++++++++++
13
target/arm/op_helper.c | 295 ------------------------------------
14
4 files changed, 315 insertions(+), 302 deletions(-)
15
create mode 100644 target/arm/debug_helper.c
16
20
17
diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/Makefile.objs
20
+++ b/target/arm/Makefile.objs
21
@@ -XXX,XX +XXX,XX @@ target/arm/translate-sve.o: target/arm/decode-sve.inc.c
22
target/arm/translate.o: target/arm/decode-vfp.inc.c
23
target/arm/translate.o: target/arm/decode-vfp-uncond.inc.c
24
25
-obj-y += tlb_helper.o
26
+obj-y += tlb_helper.o debug_helper.o
27
obj-y += translate.o op_helper.o
28
obj-y += crypto_helper.o
29
obj-y += iwmmxt_helper.o vec_helper.o neon_helper.o
30
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
21
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
31
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/cpu.c
23
--- a/target/arm/cpu.c
33
+++ b/target/arm/cpu.c
24
+++ b/target/arm/cpu.c
34
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_class_init(ObjectClass *oc, void *data)
25
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
35
cc->gdb_arch_name = arm_gdb_arch_name;
26
arm_rebuild_hflags(env);
36
cc->gdb_get_dynamic_xml = arm_gdb_get_dynamic_xml;
37
cc->gdb_stop_before_watchpoint = true;
38
- cc->debug_excp_handler = arm_debug_excp_handler;
39
- cc->debug_check_watchpoint = arm_debug_check_watchpoint;
40
-#if !defined(CONFIG_USER_ONLY)
41
- cc->adjust_watchpoint_address = arm_adjust_watchpoint_address;
42
-#endif
43
-
44
cc->disas_set_info = arm_disas_set_info;
45
#ifdef CONFIG_TCG
46
cc->tcg_initialize = arm_translate_init;
47
cc->tlb_fill = arm_cpu_tlb_fill;
48
+ cc->debug_excp_handler = arm_debug_excp_handler;
49
+ cc->debug_check_watchpoint = arm_debug_check_watchpoint;
50
#if !defined(CONFIG_USER_ONLY)
51
cc->do_unaligned_access = arm_cpu_do_unaligned_access;
52
cc->do_transaction_failed = arm_cpu_do_transaction_failed;
53
+ cc->adjust_watchpoint_address = arm_adjust_watchpoint_address;
54
#endif /* CONFIG_TCG && !CONFIG_USER_ONLY */
55
#endif
56
}
27
}
57
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
28
58
new file mode 100644
29
-#ifndef CONFIG_USER_ONLY
59
index XXXXXXX..XXXXXXX
30
+#if defined(CONFIG_TCG) && !defined(CONFIG_USER_ONLY)
60
--- /dev/null
31
61
+++ b/target/arm/debug_helper.c
32
static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
62
@@ -XXX,XX +XXX,XX @@
33
unsigned int target_el,
63
+/*
34
@@ -XXX,XX +XXX,XX @@ static bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
64
+ * ARM debug helpers.
35
cc->tcg_ops->do_interrupt(cs);
65
+ *
36
return true;
66
+ * This code is licensed under the GNU GPL v2 or later.
37
}
67
+ *
38
-#endif /* !CONFIG_USER_ONLY */
68
+ * SPDX-License-Identifier: GPL-2.0-or-later
69
+ */
70
+#include "qemu/osdep.h"
71
+#include "cpu.h"
72
+#include "internals.h"
73
+#include "exec/exec-all.h"
74
+#include "exec/helper-proto.h"
75
+
39
+
76
+/* Return true if the linked breakpoint entry lbn passes its checks */
40
+#endif /* CONFIG_TCG && !CONFIG_USER_ONLY */
77
+static bool linked_bp_matches(ARMCPU *cpu, int lbn)
41
78
+{
42
void arm_cpu_update_virq(ARMCPU *cpu)
79
+ CPUARMState *env = &cpu->env;
43
{
80
+ uint64_t bcr = env->cp15.dbgbcr[lbn];
81
+ int brps = extract32(cpu->dbgdidr, 24, 4);
82
+ int ctx_cmps = extract32(cpu->dbgdidr, 20, 4);
83
+ int bt;
84
+ uint32_t contextidr;
85
+
86
+ /*
87
+ * Links to unimplemented or non-context aware breakpoints are
88
+ * CONSTRAINED UNPREDICTABLE: either behave as if disabled, or
89
+ * as if linked to an UNKNOWN context-aware breakpoint (in which
90
+ * case DBGWCR<n>_EL1.LBN must indicate that breakpoint).
91
+ * We choose the former.
92
+ */
93
+ if (lbn > brps || lbn < (brps - ctx_cmps)) {
94
+ return false;
95
+ }
96
+
97
+ bcr = env->cp15.dbgbcr[lbn];
98
+
99
+ if (extract64(bcr, 0, 1) == 0) {
100
+ /* Linked breakpoint disabled : generate no events */
101
+ return false;
102
+ }
103
+
104
+ bt = extract64(bcr, 20, 4);
105
+
106
+ /*
107
+ * We match the whole register even if this is AArch32 using the
108
+ * short descriptor format (in which case it holds both PROCID and ASID),
109
+ * since we don't implement the optional v7 context ID masking.
110
+ */
111
+ contextidr = extract64(env->cp15.contextidr_el[1], 0, 32);
112
+
113
+ switch (bt) {
114
+ case 3: /* linked context ID match */
115
+ if (arm_current_el(env) > 1) {
116
+ /* Context matches never fire in EL2 or (AArch64) EL3 */
117
+ return false;
118
+ }
119
+ return (contextidr == extract64(env->cp15.dbgbvr[lbn], 0, 32));
120
+ case 5: /* linked address mismatch (reserved in AArch64) */
121
+ case 9: /* linked VMID match (reserved if no EL2) */
122
+ case 11: /* linked context ID and VMID match (reserved if no EL2) */
123
+ default:
124
+ /*
125
+ * Links to Unlinked context breakpoints must generate no
126
+ * events; we choose to do the same for reserved values too.
127
+ */
128
+ return false;
129
+ }
130
+
131
+ return false;
132
+}
133
+
134
+static bool bp_wp_matches(ARMCPU *cpu, int n, bool is_wp)
135
+{
136
+ CPUARMState *env = &cpu->env;
137
+ uint64_t cr;
138
+ int pac, hmc, ssc, wt, lbn;
139
+ /*
140
+ * Note that for watchpoints the check is against the CPU security
141
+ * state, not the S/NS attribute on the offending data access.
142
+ */
143
+ bool is_secure = arm_is_secure(env);
144
+ int access_el = arm_current_el(env);
145
+
146
+ if (is_wp) {
147
+ CPUWatchpoint *wp = env->cpu_watchpoint[n];
148
+
149
+ if (!wp || !(wp->flags & BP_WATCHPOINT_HIT)) {
150
+ return false;
151
+ }
152
+ cr = env->cp15.dbgwcr[n];
153
+ if (wp->hitattrs.user) {
154
+ /*
155
+ * The LDRT/STRT/LDT/STT "unprivileged access" instructions should
156
+ * match watchpoints as if they were accesses done at EL0, even if
157
+ * the CPU is at EL1 or higher.
158
+ */
159
+ access_el = 0;
160
+ }
161
+ } else {
162
+ uint64_t pc = is_a64(env) ? env->pc : env->regs[15];
163
+
164
+ if (!env->cpu_breakpoint[n] || env->cpu_breakpoint[n]->pc != pc) {
165
+ return false;
166
+ }
167
+ cr = env->cp15.dbgbcr[n];
168
+ }
169
+ /*
170
+ * The WATCHPOINT_HIT flag guarantees us that the watchpoint is
171
+ * enabled and that the address and access type match; for breakpoints
172
+ * we know the address matched; check the remaining fields, including
173
+ * linked breakpoints. We rely on WCR and BCR having the same layout
174
+ * for the LBN, SSC, HMC, PAC/PMC and is-linked fields.
175
+ * Note that some combinations of {PAC, HMC, SSC} are reserved and
176
+ * must act either like some valid combination or as if the watchpoint
177
+ * were disabled. We choose the former, and use this together with
178
+ * the fact that EL3 must always be Secure and EL2 must always be
179
+ * Non-Secure to simplify the code slightly compared to the full
180
+ * table in the ARM ARM.
181
+ */
182
+ pac = extract64(cr, 1, 2);
183
+ hmc = extract64(cr, 13, 1);
184
+ ssc = extract64(cr, 14, 2);
185
+
186
+ switch (ssc) {
187
+ case 0:
188
+ break;
189
+ case 1:
190
+ case 3:
191
+ if (is_secure) {
192
+ return false;
193
+ }
194
+ break;
195
+ case 2:
196
+ if (!is_secure) {
197
+ return false;
198
+ }
199
+ break;
200
+ }
201
+
202
+ switch (access_el) {
203
+ case 3:
204
+ case 2:
205
+ if (!hmc) {
206
+ return false;
207
+ }
208
+ break;
209
+ case 1:
210
+ if (extract32(pac, 0, 1) == 0) {
211
+ return false;
212
+ }
213
+ break;
214
+ case 0:
215
+ if (extract32(pac, 1, 1) == 0) {
216
+ return false;
217
+ }
218
+ break;
219
+ default:
220
+ g_assert_not_reached();
221
+ }
222
+
223
+ wt = extract64(cr, 20, 1);
224
+ lbn = extract64(cr, 16, 4);
225
+
226
+ if (wt && !linked_bp_matches(cpu, lbn)) {
227
+ return false;
228
+ }
229
+
230
+ return true;
231
+}
232
+
233
+static bool check_watchpoints(ARMCPU *cpu)
234
+{
235
+ CPUARMState *env = &cpu->env;
236
+ int n;
237
+
238
+ /*
239
+ * If watchpoints are disabled globally or we can't take debug
240
+ * exceptions here then watchpoint firings are ignored.
241
+ */
242
+ if (extract32(env->cp15.mdscr_el1, 15, 1) == 0
243
+ || !arm_generate_debug_exceptions(env)) {
244
+ return false;
245
+ }
246
+
247
+ for (n = 0; n < ARRAY_SIZE(env->cpu_watchpoint); n++) {
248
+ if (bp_wp_matches(cpu, n, true)) {
249
+ return true;
250
+ }
251
+ }
252
+ return false;
253
+}
254
+
255
+static bool check_breakpoints(ARMCPU *cpu)
256
+{
257
+ CPUARMState *env = &cpu->env;
258
+ int n;
259
+
260
+ /*
261
+ * If breakpoints are disabled globally or we can't take debug
262
+ * exceptions here then breakpoint firings are ignored.
263
+ */
264
+ if (extract32(env->cp15.mdscr_el1, 15, 1) == 0
265
+ || !arm_generate_debug_exceptions(env)) {
266
+ return false;
267
+ }
268
+
269
+ for (n = 0; n < ARRAY_SIZE(env->cpu_breakpoint); n++) {
270
+ if (bp_wp_matches(cpu, n, false)) {
271
+ return true;
272
+ }
273
+ }
274
+ return false;
275
+}
276
+
277
+void HELPER(check_breakpoints)(CPUARMState *env)
278
+{
279
+ ARMCPU *cpu = env_archcpu(env);
280
+
281
+ if (check_breakpoints(cpu)) {
282
+ HELPER(exception_internal(env, EXCP_DEBUG));
283
+ }
284
+}
285
+
286
+bool arm_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp)
287
+{
288
+ /*
289
+ * Called by core code when a CPU watchpoint fires; need to check if this
290
+ * is also an architectural watchpoint match.
291
+ */
292
+ ARMCPU *cpu = ARM_CPU(cs);
293
+
294
+ return check_watchpoints(cpu);
295
+}
296
+
297
+void arm_debug_excp_handler(CPUState *cs)
298
+{
299
+ /*
300
+ * Called by core code when a watchpoint or breakpoint fires;
301
+ * need to check which one and raise the appropriate exception.
302
+ */
303
+ ARMCPU *cpu = ARM_CPU(cs);
304
+ CPUARMState *env = &cpu->env;
305
+ CPUWatchpoint *wp_hit = cs->watchpoint_hit;
306
+
307
+ if (wp_hit) {
308
+ if (wp_hit->flags & BP_CPU) {
309
+ bool wnr = (wp_hit->flags & BP_WATCHPOINT_HIT_WRITE) != 0;
310
+ bool same_el = arm_debug_target_el(env) == arm_current_el(env);
311
+
312
+ cs->watchpoint_hit = NULL;
313
+
314
+ env->exception.fsr = arm_debug_exception_fsr(env);
315
+ env->exception.vaddress = wp_hit->hitaddr;
316
+ raise_exception(env, EXCP_DATA_ABORT,
317
+ syn_watchpoint(same_el, 0, wnr),
318
+ arm_debug_target_el(env));
319
+ }
320
+ } else {
321
+ uint64_t pc = is_a64(env) ? env->pc : env->regs[15];
322
+ bool same_el = (arm_debug_target_el(env) == arm_current_el(env));
323
+
324
+ /*
325
+ * (1) GDB breakpoints should be handled first.
326
+ * (2) Do not raise a CPU exception if no CPU breakpoint has fired,
327
+ * since singlestep is also done by generating a debug internal
328
+ * exception.
329
+ */
330
+ if (cpu_breakpoint_test(cs, pc, BP_GDB)
331
+ || !cpu_breakpoint_test(cs, pc, BP_CPU)) {
332
+ return;
333
+ }
334
+
335
+ env->exception.fsr = arm_debug_exception_fsr(env);
336
+ /*
337
+ * FAR is UNKNOWN: clear vaddress to avoid potentially exposing
338
+ * values to the guest that it shouldn't be able to see at its
339
+ * exception/security level.
340
+ */
341
+ env->exception.vaddress = 0;
342
+ raise_exception(env, EXCP_PREFETCH_ABORT,
343
+ syn_breakpoint(same_el),
344
+ arm_debug_target_el(env));
345
+ }
346
+}
347
+
348
+#if !defined(CONFIG_USER_ONLY)
349
+
350
+vaddr arm_adjust_watchpoint_address(CPUState *cs, vaddr addr, int len)
351
+{
352
+ ARMCPU *cpu = ARM_CPU(cs);
353
+ CPUARMState *env = &cpu->env;
354
+
355
+ /*
356
+ * In BE32 system mode, target memory is stored byteswapped (on a
357
+ * little-endian host system), and by the time we reach here (via an
358
+ * opcode helper) the addresses of subword accesses have been adjusted
359
+ * to account for that, which means that watchpoints will not match.
360
+ * Undo the adjustment here.
361
+ */
362
+ if (arm_sctlr_b(env)) {
363
+ if (len == 1) {
364
+ addr ^= 3;
365
+ } else if (len == 2) {
366
+ addr ^= 2;
367
+ }
368
+ }
369
+
370
+ return addr;
371
+}
372
+
373
+#endif
374
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
375
index XXXXXXX..XXXXXXX 100644
376
--- a/target/arm/op_helper.c
377
+++ b/target/arm/op_helper.c
378
@@ -XXX,XX +XXX,XX @@ void HELPER(pre_smc)(CPUARMState *env, uint32_t syndrome)
379
}
380
}
381
382
-/* Return true if the linked breakpoint entry lbn passes its checks */
383
-static bool linked_bp_matches(ARMCPU *cpu, int lbn)
384
-{
385
- CPUARMState *env = &cpu->env;
386
- uint64_t bcr = env->cp15.dbgbcr[lbn];
387
- int brps = extract32(cpu->dbgdidr, 24, 4);
388
- int ctx_cmps = extract32(cpu->dbgdidr, 20, 4);
389
- int bt;
390
- uint32_t contextidr;
391
-
392
- /*
393
- * Links to unimplemented or non-context aware breakpoints are
394
- * CONSTRAINED UNPREDICTABLE: either behave as if disabled, or
395
- * as if linked to an UNKNOWN context-aware breakpoint (in which
396
- * case DBGWCR<n>_EL1.LBN must indicate that breakpoint).
397
- * We choose the former.
398
- */
399
- if (lbn > brps || lbn < (brps - ctx_cmps)) {
400
- return false;
401
- }
402
-
403
- bcr = env->cp15.dbgbcr[lbn];
404
-
405
- if (extract64(bcr, 0, 1) == 0) {
406
- /* Linked breakpoint disabled : generate no events */
407
- return false;
408
- }
409
-
410
- bt = extract64(bcr, 20, 4);
411
-
412
- /*
413
- * We match the whole register even if this is AArch32 using the
414
- * short descriptor format (in which case it holds both PROCID and ASID),
415
- * since we don't implement the optional v7 context ID masking.
416
- */
417
- contextidr = extract64(env->cp15.contextidr_el[1], 0, 32);
418
-
419
- switch (bt) {
420
- case 3: /* linked context ID match */
421
- if (arm_current_el(env) > 1) {
422
- /* Context matches never fire in EL2 or (AArch64) EL3 */
423
- return false;
424
- }
425
- return (contextidr == extract64(env->cp15.dbgbvr[lbn], 0, 32));
426
- case 5: /* linked address mismatch (reserved in AArch64) */
427
- case 9: /* linked VMID match (reserved if no EL2) */
428
- case 11: /* linked context ID and VMID match (reserved if no EL2) */
429
- default:
430
- /*
431
- * Links to Unlinked context breakpoints must generate no
432
- * events; we choose to do the same for reserved values too.
433
- */
434
- return false;
435
- }
436
-
437
- return false;
438
-}
439
-
440
-static bool bp_wp_matches(ARMCPU *cpu, int n, bool is_wp)
441
-{
442
- CPUARMState *env = &cpu->env;
443
- uint64_t cr;
444
- int pac, hmc, ssc, wt, lbn;
445
- /*
446
- * Note that for watchpoints the check is against the CPU security
447
- * state, not the S/NS attribute on the offending data access.
448
- */
449
- bool is_secure = arm_is_secure(env);
450
- int access_el = arm_current_el(env);
451
-
452
- if (is_wp) {
453
- CPUWatchpoint *wp = env->cpu_watchpoint[n];
454
-
455
- if (!wp || !(wp->flags & BP_WATCHPOINT_HIT)) {
456
- return false;
457
- }
458
- cr = env->cp15.dbgwcr[n];
459
- if (wp->hitattrs.user) {
460
- /*
461
- * The LDRT/STRT/LDT/STT "unprivileged access" instructions should
462
- * match watchpoints as if they were accesses done at EL0, even if
463
- * the CPU is at EL1 or higher.
464
- */
465
- access_el = 0;
466
- }
467
- } else {
468
- uint64_t pc = is_a64(env) ? env->pc : env->regs[15];
469
-
470
- if (!env->cpu_breakpoint[n] || env->cpu_breakpoint[n]->pc != pc) {
471
- return false;
472
- }
473
- cr = env->cp15.dbgbcr[n];
474
- }
475
- /*
476
- * The WATCHPOINT_HIT flag guarantees us that the watchpoint is
477
- * enabled and that the address and access type match; for breakpoints
478
- * we know the address matched; check the remaining fields, including
479
- * linked breakpoints. We rely on WCR and BCR having the same layout
480
- * for the LBN, SSC, HMC, PAC/PMC and is-linked fields.
481
- * Note that some combinations of {PAC, HMC, SSC} are reserved and
482
- * must act either like some valid combination or as if the watchpoint
483
- * were disabled. We choose the former, and use this together with
484
- * the fact that EL3 must always be Secure and EL2 must always be
485
- * Non-Secure to simplify the code slightly compared to the full
486
- * table in the ARM ARM.
487
- */
488
- pac = extract64(cr, 1, 2);
489
- hmc = extract64(cr, 13, 1);
490
- ssc = extract64(cr, 14, 2);
491
-
492
- switch (ssc) {
493
- case 0:
494
- break;
495
- case 1:
496
- case 3:
497
- if (is_secure) {
498
- return false;
499
- }
500
- break;
501
- case 2:
502
- if (!is_secure) {
503
- return false;
504
- }
505
- break;
506
- }
507
-
508
- switch (access_el) {
509
- case 3:
510
- case 2:
511
- if (!hmc) {
512
- return false;
513
- }
514
- break;
515
- case 1:
516
- if (extract32(pac, 0, 1) == 0) {
517
- return false;
518
- }
519
- break;
520
- case 0:
521
- if (extract32(pac, 1, 1) == 0) {
522
- return false;
523
- }
524
- break;
525
- default:
526
- g_assert_not_reached();
527
- }
528
-
529
- wt = extract64(cr, 20, 1);
530
- lbn = extract64(cr, 16, 4);
531
-
532
- if (wt && !linked_bp_matches(cpu, lbn)) {
533
- return false;
534
- }
535
-
536
- return true;
537
-}
538
-
539
-static bool check_watchpoints(ARMCPU *cpu)
540
-{
541
- CPUARMState *env = &cpu->env;
542
- int n;
543
-
544
- /*
545
- * If watchpoints are disabled globally or we can't take debug
546
- * exceptions here then watchpoint firings are ignored.
547
- */
548
- if (extract32(env->cp15.mdscr_el1, 15, 1) == 0
549
- || !arm_generate_debug_exceptions(env)) {
550
- return false;
551
- }
552
-
553
- for (n = 0; n < ARRAY_SIZE(env->cpu_watchpoint); n++) {
554
- if (bp_wp_matches(cpu, n, true)) {
555
- return true;
556
- }
557
- }
558
- return false;
559
-}
560
-
561
-static bool check_breakpoints(ARMCPU *cpu)
562
-{
563
- CPUARMState *env = &cpu->env;
564
- int n;
565
-
566
- /*
567
- * If breakpoints are disabled globally or we can't take debug
568
- * exceptions here then breakpoint firings are ignored.
569
- */
570
- if (extract32(env->cp15.mdscr_el1, 15, 1) == 0
571
- || !arm_generate_debug_exceptions(env)) {
572
- return false;
573
- }
574
-
575
- for (n = 0; n < ARRAY_SIZE(env->cpu_breakpoint); n++) {
576
- if (bp_wp_matches(cpu, n, false)) {
577
- return true;
578
- }
579
- }
580
- return false;
581
-}
582
-
583
-void HELPER(check_breakpoints)(CPUARMState *env)
584
-{
585
- ARMCPU *cpu = env_archcpu(env);
586
-
587
- if (check_breakpoints(cpu)) {
588
- HELPER(exception_internal(env, EXCP_DEBUG));
589
- }
590
-}
591
-
592
-bool arm_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp)
593
-{
594
- /*
595
- * Called by core code when a CPU watchpoint fires; need to check if this
596
- * is also an architectural watchpoint match.
597
- */
598
- ARMCPU *cpu = ARM_CPU(cs);
599
-
600
- return check_watchpoints(cpu);
601
-}
602
-
603
-vaddr arm_adjust_watchpoint_address(CPUState *cs, vaddr addr, int len)
604
-{
605
- ARMCPU *cpu = ARM_CPU(cs);
606
- CPUARMState *env = &cpu->env;
607
-
608
- /*
609
- * In BE32 system mode, target memory is stored byteswapped (on a
610
- * little-endian host system), and by the time we reach here (via an
611
- * opcode helper) the addresses of subword accesses have been adjusted
612
- * to account for that, which means that watchpoints will not match.
613
- * Undo the adjustment here.
614
- */
615
- if (arm_sctlr_b(env)) {
616
- if (len == 1) {
617
- addr ^= 3;
618
- } else if (len == 2) {
619
- addr ^= 2;
620
- }
621
- }
622
-
623
- return addr;
624
-}
625
-
626
-void arm_debug_excp_handler(CPUState *cs)
627
-{
628
- /*
629
- * Called by core code when a watchpoint or breakpoint fires;
630
- * need to check which one and raise the appropriate exception.
631
- */
632
- ARMCPU *cpu = ARM_CPU(cs);
633
- CPUARMState *env = &cpu->env;
634
- CPUWatchpoint *wp_hit = cs->watchpoint_hit;
635
-
636
- if (wp_hit) {
637
- if (wp_hit->flags & BP_CPU) {
638
- bool wnr = (wp_hit->flags & BP_WATCHPOINT_HIT_WRITE) != 0;
639
- bool same_el = arm_debug_target_el(env) == arm_current_el(env);
640
-
641
- cs->watchpoint_hit = NULL;
642
-
643
- env->exception.fsr = arm_debug_exception_fsr(env);
644
- env->exception.vaddress = wp_hit->hitaddr;
645
- raise_exception(env, EXCP_DATA_ABORT,
646
- syn_watchpoint(same_el, 0, wnr),
647
- arm_debug_target_el(env));
648
- }
649
- } else {
650
- uint64_t pc = is_a64(env) ? env->pc : env->regs[15];
651
- bool same_el = (arm_debug_target_el(env) == arm_current_el(env));
652
-
653
- /*
654
- * (1) GDB breakpoints should be handled first.
655
- * (2) Do not raise a CPU exception if no CPU breakpoint has fired,
656
- * since singlestep is also done by generating a debug internal
657
- * exception.
658
- */
659
- if (cpu_breakpoint_test(cs, pc, BP_GDB)
660
- || !cpu_breakpoint_test(cs, pc, BP_CPU)) {
661
- return;
662
- }
663
-
664
- env->exception.fsr = arm_debug_exception_fsr(env);
665
- /*
666
- * FAR is UNKNOWN: clear vaddress to avoid potentially exposing
667
- * values to the guest that it shouldn't be able to see at its
668
- * exception/security level.
669
- */
670
- env->exception.vaddress = 0;
671
- raise_exception(env, EXCP_PREFETCH_ABORT,
672
- syn_breakpoint(same_el),
673
- arm_debug_target_el(env));
674
- }
675
-}
676
-
677
/* ??? Flag setting arithmetic is awkward because we need to do comparisons.
678
The only way to do that in TCG is a conditional branch, which clobbers
679
all our temporaries. For now implement these as helper functions. */
680
--
44
--
681
2.20.1
45
2.25.1
682
46
683
47
diff view generated by jsdifflib