The following changes since commit bf4460a8d9a86f6cfe05d7a7f470c48e3a93d8b2:

  Merge tag 'pull-tcg-20230123' of https://gitlab.com/rth7680/qemu into staging (2023-02-03 09:30:45 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230203

for you to fetch changes up to bb18151d8bd9bedc497ee9d4e8d81b39a4e5bbf6:

  target/arm: Enable FEAT_FGT on '-cpu max' (2023-02-03 12:59:24 +0000)

----------------------------------------------------------------
target-arm queue:
 * Fix physical address resolution for Stage2
 * pl011: refactoring, implement reset method
 * Support GICv3 with hvf acceleration
 * sbsa-ref: remove cortex-a76 from list of supported cpus
 * Correct syndrome for ATS12NSO* traps at Secure EL1
 * Fix priority of HSTR_EL2 traps vs UNDEFs
 * Implement FEAT_FGT for '-cpu max'

----------------------------------------------------------------
Alexander Graf (3):
      hvf: arm: Add support for GICv3
      hw/arm/virt: Consolidate GIC finalize logic
      hw/arm/virt: Make accels in GIC finalize logic explicit

Evgeny Iakovlev (4):
      hw/char/pl011: refactor FIFO depth handling code
      hw/char/pl011: add post_load hook for backwards-compatibility
      hw/char/pl011: implement a reset method
      hw/char/pl011: better handling of FIFO flags on LCR reset

Marcin Juszkiewicz (1):
      sbsa-ref: remove cortex-a76 from list of supported cpus

Peter Maydell (23):
      target/arm: Name AT_S1E1RP and AT_S1E1WP cpregs correctly
      target/arm: Correct syndrome for ATS12NSO* at Secure EL1
      target/arm: Remove CP_ACCESS_TRAP_UNCATEGORIZED_{EL2, EL3}
      target/arm: Move do_coproc_insn() syndrome calculation earlier
      target/arm: All UNDEF-at-EL0 traps take priority over HSTR_EL2 traps
      target/arm: Make HSTR_EL2 traps take priority over UNDEF-at-EL1
      target/arm: Disable HSTR_EL2 traps if EL2 is not enabled
      target/arm: Define the FEAT_FGT registers
      target/arm: Implement FGT trapping infrastructure
      target/arm: Mark up sysregs for HFGRTR bits 0..11
      target/arm: Mark up sysregs for HFGRTR bits 12..23
      target/arm: Mark up sysregs for HFGRTR bits 24..35
      target/arm: Mark up sysregs for HFGRTR bits 36..63
      target/arm: Mark up sysregs for HDFGRTR bits 0..11
      target/arm: Mark up sysregs for HDFGRTR bits 12..63
      target/arm: Mark up sysregs for HFGITR bits 0..11
      target/arm: Mark up sysregs for HFGITR bits 12..17
      target/arm: Mark up sysregs for HFGITR bits 18..47
      target/arm: Mark up sysregs for HFGITR bits 48..63
      target/arm: Implement the HFGITR_EL2.ERET trap
      target/arm: Implement the HFGITR_EL2.SVC_EL0 and SVC_EL1 traps
      target/arm: Implement MDCR_EL2.TDCC and MDCR_EL3.TDCC traps
      target/arm: Enable FEAT_FGT on '-cpu max'

Richard Henderson (2):
      hw/arm: Use TYPE_ARM_SMMUV3
      target/arm: Fix physical address resolution for Stage2

 docs/system/arm/emulation.rst |   1 +
 include/hw/arm/virt.h         |  15 +-
 include/hw/char/pl011.h       |   5 +-
 target/arm/cpregs.h           | 484 +++++++++++++++++++++++++++++++++++++++++-
 target/arm/cpu.h              |  18 ++
 target/arm/internals.h        |  20 ++
 target/arm/syndrome.h         |  10 +
 target/arm/translate.h        |   6 +
 hw/arm/sbsa-ref.c             |   4 +-
 hw/arm/virt.c                 | 203 +++++++++---------
 hw/char/pl011.c               |  93 ++++++--
 hw/intc/arm_gicv3_cpuif.c     |  18 +-
 target/arm/cpu64.c            |   1 +
 target/arm/debug_helper.c     |  46 +++-
 target/arm/helper.c           | 245 ++++++++++++++++++++-
 target/arm/hvf/hvf.c          | 151 +++++++++++++
 target/arm/op_helper.c        |  58 ++++-
 target/arm/ptw.c              |   2 +-
 target/arm/translate-a64.c    |  22 +-
 target/arm/translate.c        | 125 +++++++----
 target/arm/hvf/trace-events   |   2 +
 21 files changed, 1340 insertions(+), 189 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Use the macro instead of two explicit string literals.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20230124232059.4017615-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sbsa-ref.c | 3 ++-
 hw/arm/virt.c     | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/hwaddr.h"
 #include "kvm_arm.h"
 #include "hw/arm/boot.h"
+#include "hw/arm/smmuv3.h"
 #include "hw/block/flash.h"
 #include "hw/boards.h"
 #include "hw/ide/internal.h"
@@ -XXX,XX +XXX,XX @@ static void create_smmu(const SBSAMachineState *sms, PCIBus *bus)
     DeviceState *dev;
     int i;
 
-    dev = qdev_new("arm-smmuv3");
+    dev = qdev_new(TYPE_ARM_SMMUV3);
 
     object_property_set_link(OBJECT(dev), "primary-bus", OBJECT(bus),
                              &error_abort);
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void create_smmu(const VirtMachineState *vms,
         return;
     }
 
-    dev = qdev_new("arm-smmuv3");
+    dev = qdev_new(TYPE_ARM_SMMUV3);
 
     object_property_set_link(OBJECT(dev), "primary-bus", OBJECT(bus),
                              &error_abort);
--
2.34.1

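One small aside on why this cleanup is worthwhile: QOM type names are plain strings, so a typo in "arm-smmuv3" would only be caught at runtime, while a typo in the macro name is a compile error. A minimal sketch of the pattern (the macro value shown is assumed to match what hw/arm/smmuv3.h already defines):

    /* assumed definition, as in include/hw/arm/smmuv3.h */
    #define TYPE_ARM_SMMUV3 "arm-smmuv3"

    /* creation site: misspelling TYPE_ARM_SMMUV3 now fails to compile */
    DeviceState *dev = qdev_new(TYPE_ARM_SMMUV3);
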
From: Richard Henderson <richard.henderson@linaro.org>

Conversion to probe_access_full missed applying the page offset.

Cc: qemu-stable@nongnu.org
Reported-by: Sid Manning <sidneym@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230126233134.103193-1-richard.henderson@linaro.org
Fixes: f3639a64f602 ("target/arm: Use softmmu tlbs for page table walking")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
         if (unlikely(flags & TLB_INVALID_MASK)) {
             goto fail;
         }
-        ptw->out_phys = full->phys_addr;
+        ptw->out_phys = full->phys_addr | (addr & ~TARGET_PAGE_MASK);
         ptw->out_rw = full->prot & PAGE_WRITE;
         pte_attrs = full->pte_attrs;
         pte_secure = full->attrs.secure;
--
2.34.1

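To see concretely what was lost without the page-offset OR, here is a small worked example with made-up addresses, assuming 4 KiB pages (so TARGET_PAGE_MASK keeps everything above the low 12 bits):

    uint64_t addr      = 0x40001234ULL;   /* guest address being walked */
    uint64_t page_phys = 0x80005000ULL;   /* page-aligned result from the TLB */

    /* old code: out_phys == 0x80005000, the in-page offset 0x234 is lost */
    /* new code: */
    uint64_t out_phys  = page_phys | (addr & 0xfffULL);   /* 0x80005234 */
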
From: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>

PL011 can be in either of 2 modes depending guest config: FIFO and
single register. The last mode could be viewed as a 1-element-deep FIFO.

Current code open-codes a bunch of depth-dependent logic. Refactor FIFO
depth handling code to isolate calculating current FIFO depth.

One functional (albeit guest-invisible) side-effect of this change is
that previously we would always increment s->read_pos in UARTDR read
handler even if FIFO was disabled, now we are limiting read_pos to not
exceed FIFO depth (read_pos itself is reset to 0 if user disables FIFO).

Signed-off-by: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230123162304.26254-2-eiakovlev@linux.microsoft.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/char/pl011.h |  5 ++++-
 hw/char/pl011.c         | 30 ++++++++++++++++++------------
 2 files changed, 22 insertions(+), 13 deletions(-)

diff --git a/include/hw/char/pl011.h b/include/hw/char/pl011.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/char/pl011.h
+++ b/include/hw/char/pl011.h
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(PL011State, PL011)
 /* This shares the same struct (and cast macro) as the base pl011 device */
 #define TYPE_PL011_LUMINARY "pl011_luminary"
 
+/* Depth of UART FIFO in bytes, when FIFO mode is enabled (else depth == 1) */
+#define PL011_FIFO_DEPTH 16
+
 struct PL011State {
     SysBusDevice parent_obj;
 
@@ -XXX,XX +XXX,XX @@ struct PL011State {
     uint32_t dmacr;
     uint32_t int_enabled;
     uint32_t int_level;
-    uint32_t read_fifo[16];
+    uint32_t read_fifo[PL011_FIFO_DEPTH];
     uint32_t ilpr;
     uint32_t ibrd;
     uint32_t fbrd;
diff --git a/hw/char/pl011.c b/hw/char/pl011.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/pl011.c
+++ b/hw/char/pl011.c
@@ -XXX,XX +XXX,XX @@ static void pl011_update(PL011State *s)
     }
 }
 
+static bool pl011_is_fifo_enabled(PL011State *s)
+{
+    return (s->lcr & 0x10) != 0;
+}
+
+static inline unsigned pl011_get_fifo_depth(PL011State *s)
+{
+    /* Note: FIFO depth is expected to be power-of-2 */
+    return pl011_is_fifo_enabled(s) ? PL011_FIFO_DEPTH : 1;
+}
+
 static uint64_t pl011_read(void *opaque, hwaddr offset,
                            unsigned size)
 {
@@ -XXX,XX +XXX,XX @@ static uint64_t pl011_read(void *opaque, hwaddr offset,
         c = s->read_fifo[s->read_pos];
         if (s->read_count > 0) {
             s->read_count--;
-            if (++s->read_pos == 16)
-                s->read_pos = 0;
+            s->read_pos = (s->read_pos + 1) & (pl011_get_fifo_depth(s) - 1);
         }
         if (s->read_count == 0) {
             s->flags |= PL011_FLAG_RXFE;
@@ -XXX,XX +XXX,XX @@ static int pl011_can_receive(void *opaque)
     PL011State *s = (PL011State *)opaque;
     int r;
 
-    if (s->lcr & 0x10) {
-        r = s->read_count < 16;
-    } else {
-        r = s->read_count < 1;
-    }
+    r = s->read_count < pl011_get_fifo_depth(s);
     trace_pl011_can_receive(s->lcr, s->read_count, r);
     return r;
 }
@@ -XXX,XX +XXX,XX @@ static void pl011_put_fifo(void *opaque, uint32_t value)
 {
     PL011State *s = (PL011State *)opaque;
     int slot;
+    unsigned pipe_depth;
 
-    slot = s->read_pos + s->read_count;
-    if (slot >= 16)
-        slot -= 16;
+    pipe_depth = pl011_get_fifo_depth(s);
+    slot = (s->read_pos + s->read_count) & (pipe_depth - 1);
     s->read_fifo[slot] = value;
     s->read_count++;
     s->flags &= ~PL011_FLAG_RXFE;
     trace_pl011_put_fifo(value, s->read_count);
-    if (!(s->lcr & 0x10) || s->read_count == 16) {
+    if (s->read_count == pipe_depth) {
         trace_pl011_put_fifo_full();
         s->flags |= PL011_FLAG_RXFF;
     }
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_pl011 = {
         VMSTATE_UINT32(dmacr, PL011State),
         VMSTATE_UINT32(int_enabled, PL011State),
         VMSTATE_UINT32(int_level, PL011State),
-        VMSTATE_UINT32_ARRAY(read_fifo, PL011State, 16),
+        VMSTATE_UINT32_ARRAY(read_fifo, PL011State, PL011_FIFO_DEPTH),
         VMSTATE_UINT32(ilpr, PL011State),
         VMSTATE_UINT32(ibrd, PL011State),
         VMSTATE_UINT32(fbrd, PL011State),
--
2.34.1

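The reason the refactored code can use a bitwise AND where the old code used compare-and-wrap is the power-of-two note above: for any depth that is a power of two, masking with depth - 1 is the same as taking the value modulo depth. A tiny sketch (fifo_enabled and pos are illustrative locals, not part of the patch):

    unsigned depth = fifo_enabled ? 16 : 1;      /* PL011_FIFO_DEPTH or 1 */
    unsigned next  = (pos + 1) & (depth - 1);    /* == (pos + 1) % depth */
    /* depth == 16: 15 -> 0, 5 -> 6;  depth == 1: mask is 0, always slot 0 */
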
From: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>

Previous change slightly modified the way we handle data writes when
FIFO is disabled. Previously we kept incrementing read_pos and were
storing data at that position, although we only have a
single-register-deep FIFO now. Then we changed it to always store data
at pos 0.

If the guest disables the FIFO and then proceeds to read data, it will
work out fine, because we still read from current read_pos before
setting it to 0.

However, to make code less fragile, introduce a post_load hook for
PL011State and fix up the read FIFO state there when the FIFO is
disabled. Since we are introducing a post_load hook, also do some
sanity checking on untrusted incoming input state.

Signed-off-by: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
Message-id: 20230123162304.26254-3-eiakovlev@linux.microsoft.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/char/pl011.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/hw/char/pl011.c b/hw/char/pl011.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/pl011.c
+++ b/hw/char/pl011.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_pl011_clock = {
     }
 };
 
+static int pl011_post_load(void *opaque, int version_id)
+{
+    PL011State* s = opaque;
+
+    /* Sanity-check input state */
+    if (s->read_pos >= ARRAY_SIZE(s->read_fifo) ||
+        s->read_count > ARRAY_SIZE(s->read_fifo)) {
+        return -1;
+    }
+
+    if (!pl011_is_fifo_enabled(s) && s->read_count > 0 && s->read_pos > 0) {
+        /*
+         * Older versions of PL011 didn't ensure that the single
+         * character in the FIFO in FIFO-disabled mode is in
+         * element 0 of the array; convert to follow the current
+         * code's assumptions.
+         */
+        s->read_fifo[0] = s->read_fifo[s->read_pos];
+        s->read_pos = 0;
+    }
+
+    return 0;
+}
+
 static const VMStateDescription vmstate_pl011 = {
     .name = "pl011",
     .version_id = 2,
     .minimum_version_id = 2,
+    .post_load = pl011_post_load,
     .fields = (VMStateField[]) {
         VMSTATE_UINT32(readbuff, PL011State),
         VMSTATE_UINT32(flags, PL011State),
--
2.34.1

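For readers unfamiliar with the migration hooks: returning a negative value from a VMStateDescription post_load callback makes the incoming migration fail instead of running with inconsistent device state. A minimal sketch of the idiom for a hypothetical device (the names are invented; only the structure mirrors the patch above):

    static int mydev_post_load(void *opaque, int version_id)
    {
        MyDevState *s = opaque;

        if (s->index >= ARRAY_SIZE(s->buf)) {
            return -1;                 /* reject bogus incoming state */
        }
        return 0;                      /* accept */
    }

    static const VMStateDescription vmstate_mydev = {
        .name = "mydev",
        .version_id = 1,
        .minimum_version_id = 1,
        .post_load = mydev_post_load,
        .fields = (VMStateField[]) {
            VMSTATE_UINT32(index, MyDevState),
            VMSTATE_END_OF_LIST()
        }
    };
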
From: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>

PL011 currently lacks a reset method. Implement it.

Signed-off-by: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230123162304.26254-4-eiakovlev@linux.microsoft.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/char/pl011.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/hw/char/pl011.c b/hw/char/pl011.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/pl011.c
+++ b/hw/char/pl011.c
@@ -XXX,XX +XXX,XX @@ static void pl011_init(Object *obj)
     s->clk = qdev_init_clock_in(DEVICE(obj), "clk", pl011_clock_update, s,
                                 ClockUpdate);
 
-    s->read_trigger = 1;
-    s->ifl = 0x12;
-    s->cr = 0x300;
-    s->flags = 0x90;
-
     s->id = pl011_id_arm;
 }
 
@@ -XXX,XX +XXX,XX @@ static void pl011_realize(DeviceState *dev, Error **errp)
                              pl011_event, NULL, s, NULL, true);
 }
 
+static void pl011_reset(DeviceState *dev)
+{
+    PL011State *s = PL011(dev);
+
+    s->lcr = 0;
+    s->rsr = 0;
+    s->dmacr = 0;
+    s->int_enabled = 0;
+    s->int_level = 0;
+    s->ilpr = 0;
+    s->ibrd = 0;
+    s->fbrd = 0;
+    s->read_pos = 0;
+    s->read_count = 0;
+    s->read_trigger = 1;
+    s->ifl = 0x12;
+    s->cr = 0x300;
+    s->flags = 0x90;
+}
+
 static void pl011_class_init(ObjectClass *oc, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(oc);
 
     dc->realize = pl011_realize;
+    dc->reset = pl011_reset;
     dc->vmsd = &vmstate_pl011;
     device_class_set_props(dc, pl011_properties);
 }
--
2.34.1

From: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>

Current FIFO handling code does not reset RXFE/RXFF flags when guest
resets FIFO by writing to UARTLCR register, although internal FIFO state
is reset to 0 read count. Actual guest-visible flag update will happen
only on next data read or write attempt. As a result of that any guest
that expects RXFE flag to be set (and RXFF to be cleared) after resetting
FIFO will never see that happen.

Signed-off-by: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230123162304.26254-5-eiakovlev@linux.microsoft.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/char/pl011.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/hw/char/pl011.c b/hw/char/pl011.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/pl011.c
+++ b/hw/char/pl011.c
@@ -XXX,XX +XXX,XX @@ static inline unsigned pl011_get_fifo_depth(PL011State *s)
     return pl011_is_fifo_enabled(s) ? PL011_FIFO_DEPTH : 1;
 }
 
+static inline void pl011_reset_fifo(PL011State *s)
+{
+    s->read_count = 0;
+    s->read_pos = 0;
+
+    /* Reset FIFO flags */
+    s->flags &= ~(PL011_FLAG_RXFF | PL011_FLAG_TXFF);
+    s->flags |= PL011_FLAG_RXFE | PL011_FLAG_TXFE;
+}
+
 static uint64_t pl011_read(void *opaque, hwaddr offset,
                            unsigned size)
 {
@@ -XXX,XX +XXX,XX @@ static void pl011_write(void *opaque, hwaddr offset,
     case 11: /* UARTLCR_H */
         /* Reset the FIFO state on FIFO enable or disable */
         if ((s->lcr ^ value) & 0x10) {
-            s->read_count = 0;
-            s->read_pos = 0;
+            pl011_reset_fifo(s);
         }
         if ((s->lcr ^ value) & 0x1) {
             int break_enable = value & 0x1;
@@ -XXX,XX +XXX,XX @@ static void pl011_reset(DeviceState *dev)
     s->ilpr = 0;
     s->ibrd = 0;
     s->fbrd = 0;
-    s->read_pos = 0;
-    s->read_count = 0;
     s->read_trigger = 1;
     s->ifl = 0x12;
     s->cr = 0x300;
-    s->flags = 0x90;
+    s->flags = 0;
+    pl011_reset_fifo(s);
 }
 
 static void pl011_class_init(ObjectClass *oc, void *data)
--
2.34.1

1
1
From: Alexander Graf <agraf@csgraf.de>
2
3
We currently only support GICv2 emulation. To also support GICv3, we will
4
need to pass a few system registers into their respective handler functions.
5
6
This patch adds support for HVF to call into the TCG callbacks for GICv3
7
system register handlers. This is safe because the GICv3 TCG code is generic
8
as long as we limit ourselves to EL0 and EL1 - which are the only modes
9
supported by HVF.
10
11
To make sure nobody trips over that, we also annotate callbacks that don't
12
work in HVF mode, such as EL state change hooks.
13
14
With GICv3 support in place, we can run with more than 8 vCPUs.
15
16
Signed-off-by: Alexander Graf <agraf@csgraf.de>
17
Message-id: 20230128224459.70676-1-agraf@csgraf.de
18
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
---
21
hw/intc/arm_gicv3_cpuif.c | 16 +++-
22
target/arm/hvf/hvf.c | 151 ++++++++++++++++++++++++++++++++++++
23
target/arm/hvf/trace-events | 2 +
24
3 files changed, 168 insertions(+), 1 deletion(-)
25
26
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/hw/intc/arm_gicv3_cpuif.c
29
+++ b/hw/intc/arm_gicv3_cpuif.c
30
@@ -XXX,XX +XXX,XX @@
31
#include "hw/irq.h"
32
#include "cpu.h"
33
#include "target/arm/cpregs.h"
34
+#include "sysemu/tcg.h"
35
+#include "sysemu/qtest.h"
36
37
/*
38
* Special case return value from hppvi_index(); must be larger than
39
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
40
* which case we'd get the wrong value.
41
* So instead we define the regs with no ri->opaque info, and
42
* get back to the GICv3CPUState from the CPUARMState.
43
+ *
44
+ * These CP regs callbacks can be called from either TCG or HVF code.
45
*/
46
define_arm_cp_regs(cpu, gicv3_cpuif_reginfo);
47
48
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
49
define_arm_cp_regs(cpu, gicv3_cpuif_ich_apxr23_reginfo);
50
}
51
}
52
- arm_register_el_change_hook(cpu, gicv3_cpuif_el_change_hook, cs);
53
+ if (tcg_enabled() || qtest_enabled()) {
54
+ /*
55
+ * We can only trap EL changes with TCG. However the GIC interrupt
56
+ * state only changes on EL changes involving EL2 or EL3, so for
57
+ * the non-TCG case this is OK, as EL2 and EL3 can't exist.
58
+ */
59
+ arm_register_el_change_hook(cpu, gicv3_cpuif_el_change_hook, cs);
60
+ } else {
61
+ assert(!arm_feature(&cpu->env, ARM_FEATURE_EL2));
62
+ assert(!arm_feature(&cpu->env, ARM_FEATURE_EL3));
63
+ }
64
}
65
}
66
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/arm/hvf/hvf.c
69
+++ b/target/arm/hvf/hvf.c
70
@@ -XXX,XX +XXX,XX @@
71
#define SYSREG_PMCCNTR_EL0 SYSREG(3, 3, 9, 13, 0)
72
#define SYSREG_PMCCFILTR_EL0 SYSREG(3, 3, 14, 15, 7)
73
74
+#define SYSREG_ICC_AP0R0_EL1 SYSREG(3, 0, 12, 8, 4)
75
+#define SYSREG_ICC_AP0R1_EL1 SYSREG(3, 0, 12, 8, 5)
76
+#define SYSREG_ICC_AP0R2_EL1 SYSREG(3, 0, 12, 8, 6)
77
+#define SYSREG_ICC_AP0R3_EL1 SYSREG(3, 0, 12, 8, 7)
78
+#define SYSREG_ICC_AP1R0_EL1 SYSREG(3, 0, 12, 9, 0)
79
+#define SYSREG_ICC_AP1R1_EL1 SYSREG(3, 0, 12, 9, 1)
80
+#define SYSREG_ICC_AP1R2_EL1 SYSREG(3, 0, 12, 9, 2)
81
+#define SYSREG_ICC_AP1R3_EL1 SYSREG(3, 0, 12, 9, 3)
82
+#define SYSREG_ICC_ASGI1R_EL1 SYSREG(3, 0, 12, 11, 6)
83
+#define SYSREG_ICC_BPR0_EL1 SYSREG(3, 0, 12, 8, 3)
84
+#define SYSREG_ICC_BPR1_EL1 SYSREG(3, 0, 12, 12, 3)
85
+#define SYSREG_ICC_CTLR_EL1 SYSREG(3, 0, 12, 12, 4)
86
+#define SYSREG_ICC_DIR_EL1 SYSREG(3, 0, 12, 11, 1)
87
+#define SYSREG_ICC_EOIR0_EL1 SYSREG(3, 0, 12, 8, 1)
88
+#define SYSREG_ICC_EOIR1_EL1 SYSREG(3, 0, 12, 12, 1)
89
+#define SYSREG_ICC_HPPIR0_EL1 SYSREG(3, 0, 12, 8, 2)
90
+#define SYSREG_ICC_HPPIR1_EL1 SYSREG(3, 0, 12, 12, 2)
91
+#define SYSREG_ICC_IAR0_EL1 SYSREG(3, 0, 12, 8, 0)
92
+#define SYSREG_ICC_IAR1_EL1 SYSREG(3, 0, 12, 12, 0)
93
+#define SYSREG_ICC_IGRPEN0_EL1 SYSREG(3, 0, 12, 12, 6)
94
+#define SYSREG_ICC_IGRPEN1_EL1 SYSREG(3, 0, 12, 12, 7)
95
+#define SYSREG_ICC_PMR_EL1 SYSREG(3, 0, 4, 6, 0)
96
+#define SYSREG_ICC_RPR_EL1 SYSREG(3, 0, 12, 11, 3)
97
+#define SYSREG_ICC_SGI0R_EL1 SYSREG(3, 0, 12, 11, 7)
98
+#define SYSREG_ICC_SGI1R_EL1 SYSREG(3, 0, 12, 11, 5)
99
+#define SYSREG_ICC_SRE_EL1 SYSREG(3, 0, 12, 12, 5)
100
+
101
#define WFX_IS_WFE (1 << 0)
102
103
#define TMR_CTL_ENABLE (1 << 0)
104
@@ -XXX,XX +XXX,XX @@ static bool is_id_sysreg(uint32_t reg)
105
SYSREG_CRM(reg) < 8;
106
}
107
108
+static uint32_t hvf_reg2cp_reg(uint32_t reg)
109
+{
110
+ return ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP,
111
+ (reg >> SYSREG_CRN_SHIFT) & SYSREG_CRN_MASK,
112
+ (reg >> SYSREG_CRM_SHIFT) & SYSREG_CRM_MASK,
113
+ (reg >> SYSREG_OP0_SHIFT) & SYSREG_OP0_MASK,
114
+ (reg >> SYSREG_OP1_SHIFT) & SYSREG_OP1_MASK,
115
+ (reg >> SYSREG_OP2_SHIFT) & SYSREG_OP2_MASK);
116
+}
117
+
118
+static bool hvf_sysreg_read_cp(CPUState *cpu, uint32_t reg, uint64_t *val)
119
+{
120
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
121
+ CPUARMState *env = &arm_cpu->env;
122
+ const ARMCPRegInfo *ri;
123
+
124
+ ri = get_arm_cp_reginfo(arm_cpu->cp_regs, hvf_reg2cp_reg(reg));
125
+ if (ri) {
126
+ if (ri->accessfn) {
127
+ if (ri->accessfn(env, ri, true) != CP_ACCESS_OK) {
128
+ return false;
129
+ }
130
+ }
131
+ if (ri->type & ARM_CP_CONST) {
132
+ *val = ri->resetvalue;
133
+ } else if (ri->readfn) {
134
+ *val = ri->readfn(env, ri);
135
+ } else {
136
+ *val = CPREG_FIELD64(env, ri);
137
+ }
138
+ trace_hvf_vgic_read(ri->name, *val);
139
+ return true;
140
+ }
141
+
142
+ return false;
143
+}
144
+
145
static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
146
{
147
ARMCPU *arm_cpu = ARM_CPU(cpu);
148
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
149
case SYSREG_OSDLR_EL1:
150
/* Dummy register */
151
break;
152
+ case SYSREG_ICC_AP0R0_EL1:
153
+ case SYSREG_ICC_AP0R1_EL1:
154
+ case SYSREG_ICC_AP0R2_EL1:
155
+ case SYSREG_ICC_AP0R3_EL1:
156
+ case SYSREG_ICC_AP1R0_EL1:
157
+ case SYSREG_ICC_AP1R1_EL1:
158
+ case SYSREG_ICC_AP1R2_EL1:
159
+ case SYSREG_ICC_AP1R3_EL1:
160
+ case SYSREG_ICC_ASGI1R_EL1:
161
+ case SYSREG_ICC_BPR0_EL1:
162
+ case SYSREG_ICC_BPR1_EL1:
163
+ case SYSREG_ICC_DIR_EL1:
164
+ case SYSREG_ICC_EOIR0_EL1:
165
+ case SYSREG_ICC_EOIR1_EL1:
166
+ case SYSREG_ICC_HPPIR0_EL1:
167
+ case SYSREG_ICC_HPPIR1_EL1:
168
+ case SYSREG_ICC_IAR0_EL1:
169
+ case SYSREG_ICC_IAR1_EL1:
170
+ case SYSREG_ICC_IGRPEN0_EL1:
171
+ case SYSREG_ICC_IGRPEN1_EL1:
172
+ case SYSREG_ICC_PMR_EL1:
173
+ case SYSREG_ICC_SGI0R_EL1:
174
+ case SYSREG_ICC_SGI1R_EL1:
175
+ case SYSREG_ICC_SRE_EL1:
176
+ case SYSREG_ICC_CTLR_EL1:
177
+ /* Call the TCG sysreg handler. This is only safe for GICv3 regs. */
178
+ if (!hvf_sysreg_read_cp(cpu, reg, &val)) {
179
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
180
+ }
181
+ break;
182
default:
183
if (is_id_sysreg(reg)) {
184
/* ID system registers read as RES0 */
185
@@ -XXX,XX +XXX,XX @@ static void pmswinc_write(CPUARMState *env, uint64_t value)
186
}
187
}
188
189
+static bool hvf_sysreg_write_cp(CPUState *cpu, uint32_t reg, uint64_t val)
190
+{
191
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
192
+ CPUARMState *env = &arm_cpu->env;
193
+ const ARMCPRegInfo *ri;
194
+
195
+ ri = get_arm_cp_reginfo(arm_cpu->cp_regs, hvf_reg2cp_reg(reg));
196
+
197
+ if (ri) {
198
+ if (ri->accessfn) {
199
+ if (ri->accessfn(env, ri, false) != CP_ACCESS_OK) {
200
+ return false;
201
+ }
202
+ }
203
+ if (ri->writefn) {
204
+ ri->writefn(env, ri, val);
205
+ } else {
206
+ CPREG_FIELD64(env, ri) = val;
207
+ }
208
+
209
+ trace_hvf_vgic_write(ri->name, val);
210
+ return true;
211
+ }
212
+
213
+ return false;
214
+}
215
+
216
static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
217
{
218
ARMCPU *arm_cpu = ARM_CPU(cpu);
219
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
220
case SYSREG_OSDLR_EL1:
221
/* Dummy register */
222
break;
223
+ case SYSREG_ICC_AP0R0_EL1:
224
+ case SYSREG_ICC_AP0R1_EL1:
225
+ case SYSREG_ICC_AP0R2_EL1:
226
+ case SYSREG_ICC_AP0R3_EL1:
227
+ case SYSREG_ICC_AP1R0_EL1:
228
+ case SYSREG_ICC_AP1R1_EL1:
229
+ case SYSREG_ICC_AP1R2_EL1:
230
+ case SYSREG_ICC_AP1R3_EL1:
231
+ case SYSREG_ICC_ASGI1R_EL1:
232
+ case SYSREG_ICC_BPR0_EL1:
233
+ case SYSREG_ICC_BPR1_EL1:
234
+ case SYSREG_ICC_CTLR_EL1:
235
+ case SYSREG_ICC_DIR_EL1:
236
+ case SYSREG_ICC_EOIR0_EL1:
237
+ case SYSREG_ICC_EOIR1_EL1:
238
+ case SYSREG_ICC_HPPIR0_EL1:
239
+ case SYSREG_ICC_HPPIR1_EL1:
240
+ case SYSREG_ICC_IAR0_EL1:
241
+ case SYSREG_ICC_IAR1_EL1:
242
+ case SYSREG_ICC_IGRPEN0_EL1:
243
+ case SYSREG_ICC_IGRPEN1_EL1:
244
+ case SYSREG_ICC_PMR_EL1:
245
+ case SYSREG_ICC_SGI0R_EL1:
246
+ case SYSREG_ICC_SGI1R_EL1:
247
+ case SYSREG_ICC_SRE_EL1:
248
+ /* Call the TCG sysreg handler. This is only safe for GICv3 regs. */
249
+ if (!hvf_sysreg_write_cp(cpu, reg, val)) {
250
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
251
+ }
252
+ break;
253
default:
254
cpu_synchronize_state(cpu);
255
trace_hvf_unhandled_sysreg_write(env->pc, reg,
256
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
257
index XXXXXXX..XXXXXXX 100644
258
--- a/target/arm/hvf/trace-events
259
+++ b/target/arm/hvf/trace-events
260
@@ -XXX,XX +XXX,XX @@ hvf_unknown_hvc(uint64_t x0) "unknown HVC! 0x%016"PRIx64
261
hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
262
hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
263
hvf_psci_call(uint64_t x0, uint64_t x1, uint64_t x2, uint64_t x3, uint32_t cpuid) "PSCI Call x0=0x%016"PRIx64" x1=0x%016"PRIx64" x2=0x%016"PRIx64" x3=0x%016"PRIx64" cpu=0x%x"
264
+hvf_vgic_write(const char *name, uint64_t val) "vgic write to %s [val=0x%016"PRIx64"]"
265
+hvf_vgic_read(const char *name, uint64_t val) "vgic read from %s [val=0x%016"PRIx64"]"
266
--
267
2.34.1
1
1
From: Alexander Graf <agraf@csgraf.de>
2
3
Up to now, the finalize_gic_version() code open coded what is essentially
4
a support bitmap match between host/emulation environment and desired
5
target GIC type.
6
7
This open coding leads to undesirable side effects. For example, a VM with
8
KVM and -smp 10 will automatically choose GICv3 while the same command
9
line with TCG will stay on GICv2 and fail the launch.
10
11
This patch combines the TCG and KVM matching code paths by making
12
everything a 2 pass process. First, we determine which GIC versions the
13
current environment is able to support, then we go through a single
14
state machine to determine which target GIC mode that means for us.
15
16
After this patch, the only user noticable changes should be consolidated
17
error messages as well as TCG -M virt supporting -smp > 8 automatically.
18
19
Signed-off-by: Alexander Graf <agraf@csgraf.de>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
21
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
22
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
23
Message-id: 20221223090107.98888-2-agraf@csgraf.de
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
25
---
26
include/hw/arm/virt.h | 15 ++--
27
hw/arm/virt.c | 198 ++++++++++++++++++++++--------------------
28
2 files changed, 112 insertions(+), 101 deletions(-)
29
30
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
31
index XXXXXXX..XXXXXXX 100644
32
--- a/include/hw/arm/virt.h
33
+++ b/include/hw/arm/virt.h
34
@@ -XXX,XX +XXX,XX @@ typedef enum VirtMSIControllerType {
35
} VirtMSIControllerType;
36
37
typedef enum VirtGICType {
38
- VIRT_GIC_VERSION_MAX,
39
- VIRT_GIC_VERSION_HOST,
40
- VIRT_GIC_VERSION_2,
41
- VIRT_GIC_VERSION_3,
42
- VIRT_GIC_VERSION_4,
43
+ VIRT_GIC_VERSION_MAX = 0,
44
+ VIRT_GIC_VERSION_HOST = 1,
45
+ /* The concrete GIC values have to match the GIC version number */
46
+ VIRT_GIC_VERSION_2 = 2,
47
+ VIRT_GIC_VERSION_3 = 3,
48
+ VIRT_GIC_VERSION_4 = 4,
49
VIRT_GIC_VERSION_NOSEL,
50
} VirtGICType;
51
52
+#define VIRT_GIC_VERSION_2_MASK BIT(VIRT_GIC_VERSION_2)
53
+#define VIRT_GIC_VERSION_3_MASK BIT(VIRT_GIC_VERSION_3)
54
+#define VIRT_GIC_VERSION_4_MASK BIT(VIRT_GIC_VERSION_4)
55
+
56
struct VirtMachineClass {
57
MachineClass parent;
58
bool disallow_affinity_adjustment;
59
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
60
index XXXXXXX..XXXXXXX 100644
61
--- a/hw/arm/virt.c
62
+++ b/hw/arm/virt.c
63
@@ -XXX,XX +XXX,XX @@ static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
64
}
65
}
66
67
+static VirtGICType finalize_gic_version_do(const char *accel_name,
68
+ VirtGICType gic_version,
69
+ int gics_supported,
70
+ unsigned int max_cpus)
71
+{
72
+ /* Convert host/max/nosel to GIC version number */
73
+ switch (gic_version) {
74
+ case VIRT_GIC_VERSION_HOST:
75
+ if (!kvm_enabled()) {
76
+ error_report("gic-version=host requires KVM");
77
+ exit(1);
78
+ }
79
+
80
+ /* For KVM, gic-version=host means gic-version=max */
81
+ return finalize_gic_version_do(accel_name, VIRT_GIC_VERSION_MAX,
82
+ gics_supported, max_cpus);
83
+ case VIRT_GIC_VERSION_MAX:
84
+ if (gics_supported & VIRT_GIC_VERSION_4_MASK) {
85
+ gic_version = VIRT_GIC_VERSION_4;
86
+ } else if (gics_supported & VIRT_GIC_VERSION_3_MASK) {
87
+ gic_version = VIRT_GIC_VERSION_3;
88
+ } else {
89
+ gic_version = VIRT_GIC_VERSION_2;
90
+ }
91
+ break;
92
+ case VIRT_GIC_VERSION_NOSEL:
93
+ if ((gics_supported & VIRT_GIC_VERSION_2_MASK) &&
94
+ max_cpus <= GIC_NCPU) {
95
+ gic_version = VIRT_GIC_VERSION_2;
96
+ } else if (gics_supported & VIRT_GIC_VERSION_3_MASK) {
97
+ /*
98
+ * in case the host does not support v2 emulation or
99
+ * the end-user requested more than 8 VCPUs we now default
100
+ * to v3. In any case defaulting to v2 would be broken.
101
+ */
102
+ gic_version = VIRT_GIC_VERSION_3;
103
+ } else if (max_cpus > GIC_NCPU) {
104
+ error_report("%s only supports GICv2 emulation but more than 8 "
105
+ "vcpus are requested", accel_name);
106
+ exit(1);
107
+ }
108
+ break;
109
+ case VIRT_GIC_VERSION_2:
110
+ case VIRT_GIC_VERSION_3:
111
+ case VIRT_GIC_VERSION_4:
112
+ break;
113
+ }
114
+
115
+ /* Check chosen version is effectively supported */
116
+ switch (gic_version) {
117
+ case VIRT_GIC_VERSION_2:
118
+ if (!(gics_supported & VIRT_GIC_VERSION_2_MASK)) {
119
+ error_report("%s does not support GICv2 emulation", accel_name);
120
+ exit(1);
121
+ }
122
+ break;
123
+ case VIRT_GIC_VERSION_3:
124
+ if (!(gics_supported & VIRT_GIC_VERSION_3_MASK)) {
125
+ error_report("%s does not support GICv3 emulation", accel_name);
126
+ exit(1);
127
+ }
128
+ break;
129
+ case VIRT_GIC_VERSION_4:
130
+ if (!(gics_supported & VIRT_GIC_VERSION_4_MASK)) {
131
+ error_report("%s does not support GICv4 emulation, is virtualization=on?",
132
+ accel_name);
133
+ exit(1);
134
+ }
135
+ break;
136
+ default:
137
+ error_report("logic error in finalize_gic_version");
138
+ exit(1);
139
+ break;
140
+ }
141
+
142
+ return gic_version;
143
+}
144
+
145
/*
146
* finalize_gic_version - Determines the final gic_version
147
* according to the gic-version property
148
@@ -XXX,XX +XXX,XX @@ static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
149
*/
150
static void finalize_gic_version(VirtMachineState *vms)
151
{
152
+ const char *accel_name = current_accel_name();
153
unsigned int max_cpus = MACHINE(vms)->smp.max_cpus;
154
+ int gics_supported = 0;
155
156
- if (kvm_enabled()) {
157
- int probe_bitmap;
158
+ /* Determine which GIC versions the current environment supports */
159
+ if (kvm_enabled() && kvm_irqchip_in_kernel()) {
160
+ int probe_bitmap = kvm_arm_vgic_probe();
161
162
- if (!kvm_irqchip_in_kernel()) {
163
- switch (vms->gic_version) {
164
- case VIRT_GIC_VERSION_HOST:
165
- warn_report(
166
- "gic-version=host not relevant with kernel-irqchip=off "
167
- "as only userspace GICv2 is supported. Using v2 ...");
168
- return;
169
- case VIRT_GIC_VERSION_MAX:
170
- case VIRT_GIC_VERSION_NOSEL:
171
- vms->gic_version = VIRT_GIC_VERSION_2;
172
- return;
173
- case VIRT_GIC_VERSION_2:
174
- return;
175
- case VIRT_GIC_VERSION_3:
176
- error_report(
177
- "gic-version=3 is not supported with kernel-irqchip=off");
178
- exit(1);
179
- case VIRT_GIC_VERSION_4:
180
- error_report(
181
- "gic-version=4 is not supported with kernel-irqchip=off");
182
- exit(1);
183
- }
184
- }
185
-
186
- probe_bitmap = kvm_arm_vgic_probe();
187
if (!probe_bitmap) {
188
error_report("Unable to determine GIC version supported by host");
189
exit(1);
190
}
191
192
- switch (vms->gic_version) {
193
- case VIRT_GIC_VERSION_HOST:
194
- case VIRT_GIC_VERSION_MAX:
195
- if (probe_bitmap & KVM_ARM_VGIC_V3) {
196
- vms->gic_version = VIRT_GIC_VERSION_3;
197
- } else {
198
- vms->gic_version = VIRT_GIC_VERSION_2;
199
- }
200
- return;
201
- case VIRT_GIC_VERSION_NOSEL:
202
- if ((probe_bitmap & KVM_ARM_VGIC_V2) && max_cpus <= GIC_NCPU) {
203
- vms->gic_version = VIRT_GIC_VERSION_2;
204
- } else if (probe_bitmap & KVM_ARM_VGIC_V3) {
205
- /*
206
- * in case the host does not support v2 in-kernel emulation or
207
- * the end-user requested more than 8 VCPUs we now default
208
- * to v3. In any case defaulting to v2 would be broken.
209
- */
210
- vms->gic_version = VIRT_GIC_VERSION_3;
211
- } else if (max_cpus > GIC_NCPU) {
212
- error_report("host only supports in-kernel GICv2 emulation "
213
- "but more than 8 vcpus are requested");
214
- exit(1);
215
- }
216
- break;
217
- case VIRT_GIC_VERSION_2:
218
- case VIRT_GIC_VERSION_3:
219
- break;
220
- case VIRT_GIC_VERSION_4:
221
- error_report("gic-version=4 is not supported with KVM");
222
- exit(1);
223
+ if (probe_bitmap & KVM_ARM_VGIC_V2) {
224
+ gics_supported |= VIRT_GIC_VERSION_2_MASK;
225
}
226
-
227
- /* Check chosen version is effectively supported by the host */
228
- if (vms->gic_version == VIRT_GIC_VERSION_2 &&
229
- !(probe_bitmap & KVM_ARM_VGIC_V2)) {
230
- error_report("host does not support in-kernel GICv2 emulation");
231
- exit(1);
232
- } else if (vms->gic_version == VIRT_GIC_VERSION_3 &&
233
- !(probe_bitmap & KVM_ARM_VGIC_V3)) {
234
- error_report("host does not support in-kernel GICv3 emulation");
235
- exit(1);
236
+ if (probe_bitmap & KVM_ARM_VGIC_V3) {
237
+ gics_supported |= VIRT_GIC_VERSION_3_MASK;
238
}
239
- return;
240
- }
241
-
242
- /* TCG mode */
243
- switch (vms->gic_version) {
244
- case VIRT_GIC_VERSION_NOSEL:
245
- vms->gic_version = VIRT_GIC_VERSION_2;
246
- break;
247
- case VIRT_GIC_VERSION_MAX:
248
+ } else if (kvm_enabled() && !kvm_irqchip_in_kernel()) {
249
+ /* KVM w/o kernel irqchip can only deal with GICv2 */
250
+ gics_supported |= VIRT_GIC_VERSION_2_MASK;
251
+ accel_name = "KVM with kernel-irqchip=off";
252
+ } else {
253
+ gics_supported |= VIRT_GIC_VERSION_2_MASK;
254
if (module_object_class_by_name("arm-gicv3")) {
255
- /* CONFIG_ARM_GICV3_TCG was set */
256
+ gics_supported |= VIRT_GIC_VERSION_3_MASK;
257
if (vms->virt) {
258
/* GICv4 only makes sense if CPU has EL2 */
259
- vms->gic_version = VIRT_GIC_VERSION_4;
260
- } else {
261
- vms->gic_version = VIRT_GIC_VERSION_3;
262
+ gics_supported |= VIRT_GIC_VERSION_4_MASK;
263
}
264
- } else {
265
- vms->gic_version = VIRT_GIC_VERSION_2;
266
}
267
- break;
268
- case VIRT_GIC_VERSION_HOST:
269
- error_report("gic-version=host requires KVM");
270
- exit(1);
271
- case VIRT_GIC_VERSION_4:
272
- if (!vms->virt) {
273
- error_report("gic-version=4 requires virtualization enabled");
274
- exit(1);
275
- }
276
- break;
277
- case VIRT_GIC_VERSION_2:
278
- case VIRT_GIC_VERSION_3:
279
- break;
280
}
281
+
282
+ /*
283
+ * Then convert helpers like host/max to concrete GIC versions and ensure
284
+ * the desired version is supported
285
+ */
286
+ vms->gic_version = finalize_gic_version_do(accel_name, vms->gic_version,
287
+ gics_supported, max_cpus);
288
}
289
290
/*
291
--
292
2.34.1
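The "support bitmap match" described in the GIC-finalize commit above boils down to a very small pattern: collect what the current environment can provide into a bit mask, then test the requested GIC version against it. A simplified sketch (the capability flags are hypothetical; the real code derives them from KVM probing or from module_object_class_by_name()):

    unsigned gics_supported = 0;

    if (can_emulate_gicv2) {                 /* hypothetical capability flag */
        gics_supported |= 1u << 2;           /* VIRT_GIC_VERSION_2_MASK */
    }
    if (can_emulate_gicv3) {
        gics_supported |= 1u << 3;           /* VIRT_GIC_VERSION_3_MASK */
    }

    if (!(gics_supported & (1u << requested_version))) {
        error_report("%s does not support GICv%d emulation",
                     accel_name, requested_version);
        exit(1);
    }
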
From: Alexander Graf <agraf@csgraf.de>

Let's explicitly list out all accelerators that we support when trying to
determine the supported set of GIC versions. KVM was already separate, so
the only missing one is HVF which simply reuses all of TCG's emulation
code and thus has the same compatibility matrix.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221223090107.98888-3-agraf@csgraf.de
[PMM: Added qtest to the list of accelerators]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@
 #include "sysemu/numa.h"
 #include "sysemu/runstate.h"
 #include "sysemu/tpm.h"
+#include "sysemu/tcg.h"
 #include "sysemu/kvm.h"
 #include "sysemu/hvf.h"
+#include "sysemu/qtest.h"
 #include "hw/loader.h"
 #include "qapi/error.h"
 #include "qemu/bitops.h"
@@ -XXX,XX +XXX,XX @@ static void finalize_gic_version(VirtMachineState *vms)
         /* KVM w/o kernel irqchip can only deal with GICv2 */
         gics_supported |= VIRT_GIC_VERSION_2_MASK;
         accel_name = "KVM with kernel-irqchip=off";
-    } else {
+    } else if (tcg_enabled() || hvf_enabled() || qtest_enabled()) {
         gics_supported |= VIRT_GIC_VERSION_2_MASK;
         if (module_object_class_by_name("arm-gicv3")) {
             gics_supported |= VIRT_GIC_VERSION_3_MASK;
@@ -XXX,XX +XXX,XX @@ static void finalize_gic_version(VirtMachineState *vms)
                 gics_supported |= VIRT_GIC_VERSION_4_MASK;
             }
         }
+    } else {
+        error_report("Unsupported accelerator, can not determine GIC support");
+        exit(1);
     }
 }
 
 /*
--
2.34.1

From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>

Cortex-A76 supports 40 bits of address space. sbsa-ref's memory
starts above this limit.

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230126114416.2447685-1-marcin.juszkiewicz@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sbsa-ref.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
 static const char * const valid_cpus[] = {
     ARM_CPU_TYPE_NAME("cortex-a57"),
     ARM_CPU_TYPE_NAME("cortex-a72"),
-    ARM_CPU_TYPE_NAME("cortex-a76"),
     ARM_CPU_TYPE_NAME("neoverse-n1"),
     ARM_CPU_TYPE_NAME("max"),
 };
--
2.34.1

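The address-space arithmetic behind this removal, with the sbsa-ref RAM base recalled from the board's memory map (treat that value as an assumption rather than a citation):

    /*
     * 40 bits of physical address cover 2^40 bytes = 1 TiB,
     * i.e. addresses 0x00_0000_0000 .. 0xFF_FFFF_FFFF.
     * sbsa-ref places its RAM at the 1 TiB mark (0x100_0000_0000 and up),
     * which a CPU limited to a 40-bit PA can never reach.
     */
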
The encodings 0,0,C7,C9,0 and 0,0,C7,C9,1 are AT S1E1RP and AT
S1E1WP, but our ARMCPRegInfo definitions for them incorrectly name
them AT S1E1R and AT S1E1W (which are entirely different
instructions). Fix the names.

(This has no guest-visible effect as the names are for debug purposes
only.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-2-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-2-peter.maydell@linaro.org
---
 target/arm/helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vhe_reginfo[] = {
 
 #ifndef CONFIG_USER_ONLY
 static const ARMCPRegInfo ats1e1_reginfo[] = {
-    { .name = "AT_S1E1R", .state = ARM_CP_STATE_AA64,
+    { .name = "AT_S1E1RP", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 0,
       .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
       .writefn = ats_write64 },
-    { .name = "AT_S1E1W", .state = ARM_CP_STATE_AA64,
+    { .name = "AT_S1E1WP", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
       .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
       .writefn = ats_write64 },
--
2.34.1

The AArch32 ATS12NSO* address translation operations are supposed to
trap to either EL2 or EL3 if they're executed at Secure EL1 (which
can only happen if EL3 is AArch64). We implement this, but we got
the syndrome value wrong: like other traps to EL2 or EL3 on an
AArch32 cpreg access, they should report the 0x3 syndrome, not the
0x0 'uncategorized' syndrome. This is clear in the access pseudocode
for these instructions.

Fix the syndrome value for these operations by correcting the
returned value from the ats_access() function.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-3-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-3-peter.maydell@linaro.org
---
 target/arm/helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri,
     if (arm_current_el(env) == 1) {
         if (arm_is_secure_below_el3(env)) {
             if (env->cp15.scr_el3 & SCR_EEL2) {
-                return CP_ACCESS_TRAP_UNCATEGORIZED_EL2;
+                return CP_ACCESS_TRAP_EL2;
             }
-            return CP_ACCESS_TRAP_UNCATEGORIZED_EL3;
+            return CP_ACCESS_TRAP_EL3;
         }
         return CP_ACCESS_TRAP_UNCATEGORIZED;
     }
--
2.34.1

New patch
1
We added the CPAccessResult values CP_ACCESS_TRAP_UNCATEGORIZED_EL2
2
and CP_ACCESS_TRAP_UNCATEGORIZED_EL3 purely in order to use them in
3
the ats_access() function, but doing so was incorrect (a bug fixed in
4
a previous commit). There aren't any cases where we want an access
5
function to be able to request a trap to EL2 or EL3 with a zero
6
syndrome value, so remove these enum values.
1
7
8
As well as cleaning up dead code, the motivation here is that
9
we'd like to implement fine-grained-trap handling in
10
helper_access_check_cp_reg(). Although the fine-grained traps
11
to EL2 are always lower priority than trap-to-same-EL and
12
higher priority than trap-to-EL3, they are in the middle of
13
various other kinds of trap-to-EL2. Knowing that a trap-to-EL2
14
must always for us have the same syndrome (ie that an access
15
function will return CP_ACCESS_TRAP_EL2 and there is no other
16
kind of trap-to-EL2 enum value) means we don't have to try
17
to choose which of the two syndrome values to report if the
18
access would trap to EL2 both for the fine-grained-trap and
19
because the access function requires it.
20
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
23
Tested-by: Fuad Tabba <tabba@google.com>
24
Message-id: 20230130182459.3309057-4-peter.maydell@linaro.org
25
Message-id: 20230127175507.2895013-4-peter.maydell@linaro.org
26
---
27
target/arm/cpregs.h | 4 ++--
28
target/arm/op_helper.c | 2 ++
29
2 files changed, 4 insertions(+), 2 deletions(-)
30
31
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/cpregs.h
34
+++ b/target/arm/cpregs.h
35
@@ -XXX,XX +XXX,XX @@ typedef enum CPAccessResult {
36
* Access fails and results in an exception syndrome 0x0 ("uncategorized").
37
* Note that this is not a catch-all case -- the set of cases which may
38
* result in this failure is specifically defined by the architecture.
39
+ * This trap is always to the usual target EL, never directly to a
40
+ * specified target EL.
41
*/
42
CP_ACCESS_TRAP_UNCATEGORIZED = (2 << 2),
43
- CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = CP_ACCESS_TRAP_UNCATEGORIZED | 2,
44
- CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = CP_ACCESS_TRAP_UNCATEGORIZED | 3,
45
} CPAccessResult;
46
47
typedef struct ARMCPRegInfo ARMCPRegInfo;
48
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/op_helper.c
51
+++ b/target/arm/op_helper.c
52
@@ -XXX,XX +XXX,XX @@ const void *HELPER(access_check_cp_reg)(CPUARMState *env, uint32_t key,
53
case CP_ACCESS_TRAP:
54
break;
55
case CP_ACCESS_TRAP_UNCATEGORIZED:
56
+ /* Only CP_ACCESS_TRAP traps are direct to a specified EL */
57
+ assert((res & CP_ACCESS_EL_MASK) == 0);
58
if (cpu_isar_feature(aa64_ids, cpu) && isread &&
59
arm_cpreg_in_idspace(ri)) {
60
/*
61
--
62
2.34.1
New patch
1
Rearrange the code in do_coproc_insn() so that we calculate the
2
syndrome value for a potential trap early; we're about to add a
3
second check that wants this value earlier than where it is currently
4
determined.
1
5
6
(Specifically, a trap to EL2 because of HSTR_EL2 should take
7
priority over an UNDEF to EL1, even when the UNDEF is because
8
the register does not exist at all or because its ri->access
9
bits non-configurably fail the access. So the check we put in
10
for HSTR_EL2 trapping at EL1 (which needs the syndrome) is
11
going to have to be done before the check "is the ARMCPRegInfo
12
pointer NULL".)
13
14
This commit is just code motion; the change to HSTR_EL2
15
handling that will use the 'syndrome' variable is in a
16
subsequent commit.
17
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Tested-by: Fuad Tabba <tabba@google.com>
21
Message-id: 20230130182459.3309057-5-peter.maydell@linaro.org
22
Message-id: 20230127175507.2895013-5-peter.maydell@linaro.org
23
---
24
target/arm/translate.c | 83 +++++++++++++++++++++---------------------
25
1 file changed, 41 insertions(+), 42 deletions(-)
26
27
diff --git a/target/arm/translate.c b/target/arm/translate.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/translate.c
30
+++ b/target/arm/translate.c
31
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
32
const ARMCPRegInfo *ri = get_arm_cp_reginfo(s->cp_regs, key);
33
TCGv_ptr tcg_ri = NULL;
34
bool need_exit_tb;
35
+ uint32_t syndrome;
36
+
37
+ /*
38
+ * Note that since we are an implementation which takes an
39
+ * exception on a trapped conditional instruction only if the
40
+ * instruction passes its condition code check, we can take
41
+ * advantage of the clause in the ARM ARM that allows us to set
42
+ * the COND field in the instruction to 0xE in all cases.
43
+ * We could fish the actual condition out of the insn (ARM)
44
+ * or the condexec bits (Thumb) but it isn't necessary.
45
+ */
46
+ switch (cpnum) {
47
+ case 14:
48
+ if (is64) {
49
+ syndrome = syn_cp14_rrt_trap(1, 0xe, opc1, crm, rt, rt2,
50
+ isread, false);
51
+ } else {
52
+ syndrome = syn_cp14_rt_trap(1, 0xe, opc1, opc2, crn, crm,
53
+ rt, isread, false);
54
+ }
55
+ break;
56
+ case 15:
57
+ if (is64) {
58
+ syndrome = syn_cp15_rrt_trap(1, 0xe, opc1, crm, rt, rt2,
59
+ isread, false);
60
+ } else {
61
+ syndrome = syn_cp15_rt_trap(1, 0xe, opc1, opc2, crn, crm,
62
+ rt, isread, false);
63
+ }
64
+ break;
65
+ default:
66
+ /*
67
+ * ARMv8 defines that only coprocessors 14 and 15 exist,
68
+ * so this can only happen if this is an ARMv7 or earlier CPU,
69
+ * in which case the syndrome information won't actually be
70
+ * guest visible.
71
+ */
72
+ assert(!arm_dc_feature(s, ARM_FEATURE_V8));
73
+ syndrome = syn_uncategorized();
74
+ break;
75
+ }
76
77
if (!ri) {
78
/*
79
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
80
* Note that on XScale all cp0..c13 registers do an access check
81
* call in order to handle c15_cpar.
82
*/
83
- uint32_t syndrome;
84
-
85
- /*
86
- * Note that since we are an implementation which takes an
87
- * exception on a trapped conditional instruction only if the
88
- * instruction passes its condition code check, we can take
89
- * advantage of the clause in the ARM ARM that allows us to set
90
- * the COND field in the instruction to 0xE in all cases.
91
- * We could fish the actual condition out of the insn (ARM)
92
- * or the condexec bits (Thumb) but it isn't necessary.
93
- */
94
- switch (cpnum) {
95
- case 14:
96
- if (is64) {
97
- syndrome = syn_cp14_rrt_trap(1, 0xe, opc1, crm, rt, rt2,
98
- isread, false);
99
- } else {
100
- syndrome = syn_cp14_rt_trap(1, 0xe, opc1, opc2, crn, crm,
101
- rt, isread, false);
102
- }
103
- break;
104
- case 15:
105
- if (is64) {
106
- syndrome = syn_cp15_rrt_trap(1, 0xe, opc1, crm, rt, rt2,
107
- isread, false);
108
- } else {
109
- syndrome = syn_cp15_rt_trap(1, 0xe, opc1, opc2, crn, crm,
110
- rt, isread, false);
111
- }
112
- break;
113
- default:
114
- /*
115
- * ARMv8 defines that only coprocessors 14 and 15 exist,
116
- * so this can only happen if this is an ARMv7 or earlier CPU,
117
- * in which case the syndrome information won't actually be
118
- * guest visible.
119
- */
120
- assert(!arm_dc_feature(s, ARM_FEATURE_V8));
121
- syndrome = syn_uncategorized();
122
- break;
123
- }
124
-
125
gen_set_condexec(s);
126
gen_update_pc(s, 0);
127
tcg_ri = tcg_temp_new_ptr();
128
--
129
2.34.1
New patch
1
The HSTR_EL2 register has a collection of trap bits which allow
2
trapping to EL2 for AArch32 EL0 or EL1 accesses to coprocessor
3
registers. The specification of these bits is that when the bit is
4
set we should trap
5
* EL1 accesses
6
* EL0 accesses, if the access is not UNDEFINED when the
7
trap bit is 0
1
8
9
In other words, all UNDEF traps from EL0 to EL1 take precedence over
10
the HSTR_EL2 trap to EL2. (Since this is all AArch32, the only kind
11
of trap-to-EL1 is the UNDEF.)
12
13
Our implementation doesn't quite get this right -- we check for traps
14
in the order:
15
* no such register
16
* ARMCPRegInfo::access bits
17
* HSTR_EL2 trap bits
18
* ARMCPRegInfo::accessfn
19
20
So UNDEFs that happen because of the access bits or because the
21
register doesn't exist at all correctly take priority over the
22
HSTR_EL2 trap, but where a register can UNDEF at EL0 because of the
23
accessfn we are incorrectly always taking the HSTR_EL2 trap. There
24
aren't many of these, but one example is the PMCR; if you look at the
25
access pseudocode for this register you can see that UNDEFs taken
26
because of the value of PMUSERENR.EN are checked before the HSTR_EL2
27
bit.
28
29
Rearrange helper_access_check_cp_reg() so that we always call the
30
accessfn, and use its return value if it indicates that the access
31
traps to EL0 rather than continuing to do the HSTR_EL2 check.
32
33
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
34
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
35
Tested-by: Fuad Tabba <tabba@google.com>
36
Message-id: 20230130182459.3309057-6-peter.maydell@linaro.org
37
Message-id: 20230127175507.2895013-6-peter.maydell@linaro.org
38
---
39
target/arm/op_helper.c | 21 ++++++++++++++++-----
40
1 file changed, 16 insertions(+), 5 deletions(-)
41
42
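(Editor's note, not part of the patch: a simplified sketch of the priority ordering the commit message describes for an AArch32 EL0 access. accessfn_result and hstr_bit_set are hypothetical inputs; the real logic lives in helper_access_check_cp_reg() in the diff below.)

typedef enum { ACCESS_OK, TRAP_TO_EL1, TRAP_TO_EL2, TRAP_TO_EL3 } AccessResult;

static AccessResult el0_cp15_trap_priority(AccessResult accessfn_result,
                                           int hstr_bit_set)
{
    /* 1. An accessfn-reported UNDEF (trap to EL1) always wins at EL0 */
    if (accessfn_result == TRAP_TO_EL1) {
        return TRAP_TO_EL1;
    }
    /* 2. Otherwise a set HSTR_EL2 trap bit sends the access to EL2 */
    if (hstr_bit_set) {
        return TRAP_TO_EL2;
    }
    /* 3. Only then do the remaining results (OK, EL2, EL3) apply */
    return accessfn_result;
}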
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/op_helper.c
45
+++ b/target/arm/op_helper.c
46
@@ -XXX,XX +XXX,XX @@ const void *HELPER(access_check_cp_reg)(CPUARMState *env, uint32_t key,
47
goto fail;
48
}
49
50
+ if (ri->accessfn) {
51
+ res = ri->accessfn(env, ri, isread);
52
+ }
53
+
54
/*
55
- * Check for an EL2 trap due to HSTR_EL2. We expect EL0 accesses
56
- * to sysregs non accessible at EL0 to have UNDEF-ed already.
57
+ * If the access function indicates a trap from EL0 to EL1 then
58
+ * that always takes priority over the HSTR_EL2 trap. (If it indicates
59
+ * a trap to EL3, then the HSTR_EL2 trap takes priority; if it indicates
60
+ * a trap to EL2, then the syndrome is the same either way so we don't
61
+ * care whether technically the architecture says that HSTR_EL2 trap or
62
+ * the other trap takes priority. So we take the "check HSTR_EL2" path
63
+ * for all of those cases.)
64
*/
65
+ if (res != CP_ACCESS_OK && ((res & CP_ACCESS_EL_MASK) == 0) &&
66
+ arm_current_el(env) == 0) {
67
+ goto fail;
68
+ }
69
+
70
if (!is_a64(env) && arm_current_el(env) < 2 && ri->cp == 15 &&
71
(arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
72
uint32_t mask = 1 << ri->crn;
73
@@ -XXX,XX +XXX,XX @@ const void *HELPER(access_check_cp_reg)(CPUARMState *env, uint32_t key,
74
}
75
}
76
77
- if (ri->accessfn) {
78
- res = ri->accessfn(env, ri, isread);
79
- }
80
if (likely(res == CP_ACCESS_OK)) {
81
return ri;
82
}
83
--
84
2.34.1
New patch
1
The semantics of HSTR_EL2 require that it traps cpreg accesses
2
to EL2 for:
3
* EL1 accesses
4
* EL0 accesses, if the access is not UNDEFINED when the
5
trap bit is 0
1
6
7
(You can see this in the I_ZFGJP priority ordering, where HSTR_EL2
8
traps from EL1 to EL2 are priority 12, UNDEFs are priority 13, and
9
HSTR_EL2 traps from EL0 are priority 15.)
10
11
However, we don't get this right for EL1 accesses which UNDEF because
12
the register doesn't exist at all or because its ri->access bits
13
non-configurably forbid the access. At EL1, check for the HSTR_EL2
14
trap early, before either of these UNDEF reasons.
15
16
We have to retain the HSTR_EL2 check in access_check_cp_reg(),
17
because at EL0 any kind of UNDEF-to-EL1 (including "no such
18
register", "bad ri->access" and "ri->accessfn returns 'trap to EL1'")
19
takes precedence over the trap to EL2. But we only need to do that
20
check for EL0 now.
21
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Tested-by: Fuad Tabba <tabba@google.com>
24
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
25
Message-id: 20230130182459.3309057-7-peter.maydell@linaro.org
26
Message-id: 20230127175507.2895013-7-peter.maydell@linaro.org
27
---
28
target/arm/op_helper.c | 6 +++++-
29
target/arm/translate.c | 28 +++++++++++++++++++++++++++-
30
2 files changed, 32 insertions(+), 2 deletions(-)
31
32
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/op_helper.c
35
+++ b/target/arm/op_helper.c
36
@@ -XXX,XX +XXX,XX @@ const void *HELPER(access_check_cp_reg)(CPUARMState *env, uint32_t key,
37
goto fail;
38
}
39
40
- if (!is_a64(env) && arm_current_el(env) < 2 && ri->cp == 15 &&
41
+ /*
42
+ * HSTR_EL2 traps from EL1 are checked earlier, in generated code;
43
+ * we only need to check here for traps from EL0.
44
+ */
45
+ if (!is_a64(env) && arm_current_el(env) == 0 && ri->cp == 15 &&
46
(arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
47
uint32_t mask = 1 << ri->crn;
48
49
diff --git a/target/arm/translate.c b/target/arm/translate.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/translate.c
52
+++ b/target/arm/translate.c
53
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
54
break;
55
}
56
57
+ if (s->hstr_active && cpnum == 15 && s->current_el == 1) {
58
+ /*
59
+ * At EL1, check for a HSTR_EL2 trap, which must take precedence
60
+ * over the UNDEF for "no such register" or the UNDEF for "access
61
+ * permissions forbid this EL1 access". HSTR_EL2 traps from EL0
62
+ * only happen if the cpreg doesn't UNDEF at EL0, so we do those in
63
+ * access_check_cp_reg(), after the checks for whether the access
64
+ * configurably trapped to EL1.
65
+ */
66
+ uint32_t maskbit = is64 ? crm : crn;
67
+
68
+ if (maskbit != 4 && maskbit != 14) {
69
+ /* T4 and T14 are RES0 so never cause traps */
70
+ TCGv_i32 t;
71
+ DisasLabel over = gen_disas_label(s);
72
+
73
+ t = load_cpu_offset(offsetoflow32(CPUARMState, cp15.hstr_el2));
74
+ tcg_gen_andi_i32(t, t, 1u << maskbit);
75
+ tcg_gen_brcondi_i32(TCG_COND_EQ, t, 0, over.label);
76
+ tcg_temp_free_i32(t);
77
+
78
+ gen_exception_insn(s, 0, EXCP_UDEF, syndrome);
79
+ set_disas_label(s, over);
80
+ }
81
+ }
82
+
83
if (!ri) {
84
/*
85
* Unknown register; this might be a guest error or a QEMU
86
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
87
return;
88
}
89
90
- if (s->hstr_active || ri->accessfn ||
91
+ if ((s->hstr_active && s->current_el == 0) || ri->accessfn ||
92
(arm_dc_feature(s, ARM_FEATURE_XSCALE) && cpnum < 14)) {
93
/*
94
* Emit code to perform further access permissions checks at
95
--
96
2.34.1
New patch
1
The HSTR_EL2 register is not supposed to have an effect unless EL2 is
2
enabled in the current security state. We weren't checking for this,
3
which meant that if the guest set up the HSTR_EL2 register we would
4
incorrectly trap even for accesses from Secure EL0 and EL1.
1
5
6
Add the missing checks. (Other places where we look at HSTR_EL2
7
for the not-in-v8A bits TTEE and TJDBX are already checking that
8
we are in NS EL0 or EL1, so there we already know EL2 is enabled.)
9
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Tested-by: Fuad Tabba <tabba@google.com>
13
Message-id: 20230130182459.3309057-8-peter.maydell@linaro.org
14
Message-id: 20230127175507.2895013-8-peter.maydell@linaro.org
15
---
16
target/arm/helper.c | 2 +-
17
target/arm/op_helper.c | 1 +
18
2 files changed, 2 insertions(+), 1 deletion(-)
19
20
diff --git a/target/arm/helper.c b/target/arm/helper.c
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.c
23
+++ b/target/arm/helper.c
24
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a32(CPUARMState *env, int fp_el,
25
DP_TBFLAG_A32(flags, VFPEN, 1);
26
}
27
28
- if (el < 2 && env->cp15.hstr_el2 &&
29
+ if (el < 2 && env->cp15.hstr_el2 && arm_is_el2_enabled(env) &&
30
(arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
31
DP_TBFLAG_A32(flags, HSTR_ACTIVE, 1);
32
}
33
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/op_helper.c
36
+++ b/target/arm/op_helper.c
37
@@ -XXX,XX +XXX,XX @@ const void *HELPER(access_check_cp_reg)(CPUARMState *env, uint32_t key,
38
* we only need to check here for traps from EL0.
39
*/
40
if (!is_a64(env) && arm_current_el(env) == 0 && ri->cp == 15 &&
41
+ arm_is_el2_enabled(env) &&
42
(arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
43
uint32_t mask = 1 << ri->crn;
44
45
--
46
2.34.1
New patch
1
1
Define the system registers which are provided by the
2
FEAT_FGT fine-grained trap architectural feature:
3
HFGRTR_EL2, HFGWTR_EL2, HDFGRTR_EL2, HDFGWTR_EL2, HFGITR_EL2
4
5
All these registers are a set of bit fields, where each bit is set
6
for a trap and clear to not trap on a particular system register
7
access. The R and W register pairs are for system registers,
8
allowing trapping to be done separately for reads and writes; the I
9
register is for system instructions where trapping is on instruction
10
execution.
11
12
The data storage in the CPU state struct is arranged as a set of
13
arrays rather than separate fields so that when we're looking up the
14
bits for a system register access we can just index into the array
15
rather than having to use a switch to select a named struct member.
16
The later FEAT_FGT2 will add extra elements to these arrays.
17
18
The field definitions for the new registers are in cpregs.h because
19
in practice the code that needs them is code that also needs
20
the cpregs information; cpu.h is included in a lot more files.
21
We're also going to add some FGT-specific definitions to cpregs.h
22
in the next commit.
23
24
We do not implement HAFGRTR_EL2, because we don't implement
25
FEAT_AMUv1.
26
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
29
Tested-by: Fuad Tabba <tabba@google.com>
30
Message-id: 20230130182459.3309057-9-peter.maydell@linaro.org
31
Message-id: 20230127175507.2895013-9-peter.maydell@linaro.org
32
---
33
target/arm/cpregs.h | 285 ++++++++++++++++++++++++++++++++++++++++++++
34
target/arm/cpu.h | 15 +++
35
target/arm/helper.c | 40 +++++++
36
3 files changed, 340 insertions(+)
37
38
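(Editor's note, not part of the patch: a small sketch of the array-based storage idea described above, using the FGTREG_* index values from the diff below; struct fgt_state and lookup_fgt_read_word() are illustrative names only, not QEMU code.)

#include <stdint.h>

#define FGTREG_HFGRTR  0
#define FGTREG_HDFGRTR 1

struct fgt_state {
    uint64_t fgt_read[2];   /* HFGRTR_EL2, HDFGRTR_EL2 */
    uint64_t fgt_write[2];  /* HFGWTR_EL2, HDFGWTR_EL2 */
    uint64_t fgt_exec[1];   /* HFGITR_EL2 */
};

/* With arrays, the access-check path can index by a small integer carried
 * in the cpreg's trap encoding, instead of switching on the register name. */
static uint64_t lookup_fgt_read_word(const struct fgt_state *s, unsigned idx)
{
    return s->fgt_read[idx];
}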
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/cpregs.h
41
+++ b/target/arm/cpregs.h
42
@@ -XXX,XX +XXX,XX @@ typedef enum CPAccessResult {
43
CP_ACCESS_TRAP_UNCATEGORIZED = (2 << 2),
44
} CPAccessResult;
45
46
+/* Indexes into fgt_read[] */
47
+#define FGTREG_HFGRTR 0
48
+#define FGTREG_HDFGRTR 1
49
+/* Indexes into fgt_write[] */
50
+#define FGTREG_HFGWTR 0
51
+#define FGTREG_HDFGWTR 1
52
+/* Indexes into fgt_exec[] */
53
+#define FGTREG_HFGITR 0
54
+
55
+FIELD(HFGRTR_EL2, AFSR0_EL1, 0, 1)
56
+FIELD(HFGRTR_EL2, AFSR1_EL1, 1, 1)
57
+FIELD(HFGRTR_EL2, AIDR_EL1, 2, 1)
58
+FIELD(HFGRTR_EL2, AMAIR_EL1, 3, 1)
59
+FIELD(HFGRTR_EL2, APDAKEY, 4, 1)
60
+FIELD(HFGRTR_EL2, APDBKEY, 5, 1)
61
+FIELD(HFGRTR_EL2, APGAKEY, 6, 1)
62
+FIELD(HFGRTR_EL2, APIAKEY, 7, 1)
63
+FIELD(HFGRTR_EL2, APIBKEY, 8, 1)
64
+FIELD(HFGRTR_EL2, CCSIDR_EL1, 9, 1)
65
+FIELD(HFGRTR_EL2, CLIDR_EL1, 10, 1)
66
+FIELD(HFGRTR_EL2, CONTEXTIDR_EL1, 11, 1)
67
+FIELD(HFGRTR_EL2, CPACR_EL1, 12, 1)
68
+FIELD(HFGRTR_EL2, CSSELR_EL1, 13, 1)
69
+FIELD(HFGRTR_EL2, CTR_EL0, 14, 1)
70
+FIELD(HFGRTR_EL2, DCZID_EL0, 15, 1)
71
+FIELD(HFGRTR_EL2, ESR_EL1, 16, 1)
72
+FIELD(HFGRTR_EL2, FAR_EL1, 17, 1)
73
+FIELD(HFGRTR_EL2, ISR_EL1, 18, 1)
74
+FIELD(HFGRTR_EL2, LORC_EL1, 19, 1)
75
+FIELD(HFGRTR_EL2, LOREA_EL1, 20, 1)
76
+FIELD(HFGRTR_EL2, LORID_EL1, 21, 1)
77
+FIELD(HFGRTR_EL2, LORN_EL1, 22, 1)
78
+FIELD(HFGRTR_EL2, LORSA_EL1, 23, 1)
79
+FIELD(HFGRTR_EL2, MAIR_EL1, 24, 1)
80
+FIELD(HFGRTR_EL2, MIDR_EL1, 25, 1)
81
+FIELD(HFGRTR_EL2, MPIDR_EL1, 26, 1)
82
+FIELD(HFGRTR_EL2, PAR_EL1, 27, 1)
83
+FIELD(HFGRTR_EL2, REVIDR_EL1, 28, 1)
84
+FIELD(HFGRTR_EL2, SCTLR_EL1, 29, 1)
85
+FIELD(HFGRTR_EL2, SCXTNUM_EL1, 30, 1)
86
+FIELD(HFGRTR_EL2, SCXTNUM_EL0, 31, 1)
87
+FIELD(HFGRTR_EL2, TCR_EL1, 32, 1)
88
+FIELD(HFGRTR_EL2, TPIDR_EL1, 33, 1)
89
+FIELD(HFGRTR_EL2, TPIDRRO_EL0, 34, 1)
90
+FIELD(HFGRTR_EL2, TPIDR_EL0, 35, 1)
91
+FIELD(HFGRTR_EL2, TTBR0_EL1, 36, 1)
92
+FIELD(HFGRTR_EL2, TTBR1_EL1, 37, 1)
93
+FIELD(HFGRTR_EL2, VBAR_EL1, 38, 1)
94
+FIELD(HFGRTR_EL2, ICC_IGRPENN_EL1, 39, 1)
95
+FIELD(HFGRTR_EL2, ERRIDR_EL1, 40, 1)
96
+FIELD(HFGRTR_EL2, ERRSELR_EL1, 41, 1)
97
+FIELD(HFGRTR_EL2, ERXFR_EL1, 42, 1)
98
+FIELD(HFGRTR_EL2, ERXCTLR_EL1, 43, 1)
99
+FIELD(HFGRTR_EL2, ERXSTATUS_EL1, 44, 1)
100
+FIELD(HFGRTR_EL2, ERXMISCN_EL1, 45, 1)
101
+FIELD(HFGRTR_EL2, ERXPFGF_EL1, 46, 1)
102
+FIELD(HFGRTR_EL2, ERXPFGCTL_EL1, 47, 1)
103
+FIELD(HFGRTR_EL2, ERXPFGCDN_EL1, 48, 1)
104
+FIELD(HFGRTR_EL2, ERXADDR_EL1, 49, 1)
105
+FIELD(HFGRTR_EL2, NACCDATA_EL1, 50, 1)
106
+/* 51-53: RES0 */
107
+FIELD(HFGRTR_EL2, NSMPRI_EL1, 54, 1)
108
+FIELD(HFGRTR_EL2, NTPIDR2_EL0, 55, 1)
109
+/* 56-63: RES0 */
110
+
111
+/* These match HFGRTR but bits for RO registers are RES0 */
112
+FIELD(HFGWTR_EL2, AFSR0_EL1, 0, 1)
113
+FIELD(HFGWTR_EL2, AFSR1_EL1, 1, 1)
114
+FIELD(HFGWTR_EL2, AMAIR_EL1, 3, 1)
115
+FIELD(HFGWTR_EL2, APDAKEY, 4, 1)
116
+FIELD(HFGWTR_EL2, APDBKEY, 5, 1)
117
+FIELD(HFGWTR_EL2, APGAKEY, 6, 1)
118
+FIELD(HFGWTR_EL2, APIAKEY, 7, 1)
119
+FIELD(HFGWTR_EL2, APIBKEY, 8, 1)
120
+FIELD(HFGWTR_EL2, CONTEXTIDR_EL1, 11, 1)
121
+FIELD(HFGWTR_EL2, CPACR_EL1, 12, 1)
122
+FIELD(HFGWTR_EL2, CSSELR_EL1, 13, 1)
123
+FIELD(HFGWTR_EL2, ESR_EL1, 16, 1)
124
+FIELD(HFGWTR_EL2, FAR_EL1, 17, 1)
125
+FIELD(HFGWTR_EL2, LORC_EL1, 19, 1)
126
+FIELD(HFGWTR_EL2, LOREA_EL1, 20, 1)
127
+FIELD(HFGWTR_EL2, LORN_EL1, 22, 1)
128
+FIELD(HFGWTR_EL2, LORSA_EL1, 23, 1)
129
+FIELD(HFGWTR_EL2, MAIR_EL1, 24, 1)
130
+FIELD(HFGWTR_EL2, PAR_EL1, 27, 1)
131
+FIELD(HFGWTR_EL2, SCTLR_EL1, 29, 1)
132
+FIELD(HFGWTR_EL2, SCXTNUM_EL1, 30, 1)
133
+FIELD(HFGWTR_EL2, SCXTNUM_EL0, 31, 1)
134
+FIELD(HFGWTR_EL2, TCR_EL1, 32, 1)
135
+FIELD(HFGWTR_EL2, TPIDR_EL1, 33, 1)
136
+FIELD(HFGWTR_EL2, TPIDRRO_EL0, 34, 1)
137
+FIELD(HFGWTR_EL2, TPIDR_EL0, 35, 1)
138
+FIELD(HFGWTR_EL2, TTBR0_EL1, 36, 1)
139
+FIELD(HFGWTR_EL2, TTBR1_EL1, 37, 1)
140
+FIELD(HFGWTR_EL2, VBAR_EL1, 38, 1)
141
+FIELD(HFGWTR_EL2, ICC_IGRPENN_EL1, 39, 1)
142
+FIELD(HFGWTR_EL2, ERRSELR_EL1, 41, 1)
143
+FIELD(HFGWTR_EL2, ERXCTLR_EL1, 43, 1)
144
+FIELD(HFGWTR_EL2, ERXSTATUS_EL1, 44, 1)
145
+FIELD(HFGWTR_EL2, ERXMISCN_EL1, 45, 1)
146
+FIELD(HFGWTR_EL2, ERXPFGCTL_EL1, 47, 1)
147
+FIELD(HFGWTR_EL2, ERXPFGCDN_EL1, 48, 1)
148
+FIELD(HFGWTR_EL2, ERXADDR_EL1, 49, 1)
149
+FIELD(HFGWTR_EL2, NACCDATA_EL1, 50, 1)
150
+FIELD(HFGWTR_EL2, NSMPRI_EL1, 54, 1)
151
+FIELD(HFGWTR_EL2, NTPIDR2_EL0, 55, 1)
152
+
153
+FIELD(HFGITR_EL2, ICIALLUIS, 0, 1)
154
+FIELD(HFGITR_EL2, ICIALLU, 1, 1)
155
+FIELD(HFGITR_EL2, ICIVAU, 2, 1)
156
+FIELD(HFGITR_EL2, DCIVAC, 3, 1)
157
+FIELD(HFGITR_EL2, DCISW, 4, 1)
158
+FIELD(HFGITR_EL2, DCCSW, 5, 1)
159
+FIELD(HFGITR_EL2, DCCISW, 6, 1)
160
+FIELD(HFGITR_EL2, DCCVAU, 7, 1)
161
+FIELD(HFGITR_EL2, DCCVAP, 8, 1)
162
+FIELD(HFGITR_EL2, DCCVADP, 9, 1)
163
+FIELD(HFGITR_EL2, DCCIVAC, 10, 1)
164
+FIELD(HFGITR_EL2, DCZVA, 11, 1)
165
+FIELD(HFGITR_EL2, ATS1E1R, 12, 1)
166
+FIELD(HFGITR_EL2, ATS1E1W, 13, 1)
167
+FIELD(HFGITR_EL2, ATS1E0R, 14, 1)
168
+FIELD(HFGITR_EL2, ATS1E0W, 15, 1)
169
+FIELD(HFGITR_EL2, ATS1E1RP, 16, 1)
170
+FIELD(HFGITR_EL2, ATS1E1WP, 17, 1)
171
+FIELD(HFGITR_EL2, TLBIVMALLE1OS, 18, 1)
172
+FIELD(HFGITR_EL2, TLBIVAE1OS, 19, 1)
173
+FIELD(HFGITR_EL2, TLBIASIDE1OS, 20, 1)
174
+FIELD(HFGITR_EL2, TLBIVAAE1OS, 21, 1)
175
+FIELD(HFGITR_EL2, TLBIVALE1OS, 22, 1)
176
+FIELD(HFGITR_EL2, TLBIVAALE1OS, 23, 1)
177
+FIELD(HFGITR_EL2, TLBIRVAE1OS, 24, 1)
178
+FIELD(HFGITR_EL2, TLBIRVAAE1OS, 25, 1)
179
+FIELD(HFGITR_EL2, TLBIRVALE1OS, 26, 1)
180
+FIELD(HFGITR_EL2, TLBIRVAALE1OS, 27, 1)
181
+FIELD(HFGITR_EL2, TLBIVMALLE1IS, 28, 1)
182
+FIELD(HFGITR_EL2, TLBIVAE1IS, 29, 1)
183
+FIELD(HFGITR_EL2, TLBIASIDE1IS, 30, 1)
184
+FIELD(HFGITR_EL2, TLBIVAAE1IS, 31, 1)
185
+FIELD(HFGITR_EL2, TLBIVALE1IS, 32, 1)
186
+FIELD(HFGITR_EL2, TLBIVAALE1IS, 33, 1)
187
+FIELD(HFGITR_EL2, TLBIRVAE1IS, 34, 1)
188
+FIELD(HFGITR_EL2, TLBIRVAAE1IS, 35, 1)
189
+FIELD(HFGITR_EL2, TLBIRVALE1IS, 36, 1)
190
+FIELD(HFGITR_EL2, TLBIRVAALE1IS, 37, 1)
191
+FIELD(HFGITR_EL2, TLBIRVAE1, 38, 1)
192
+FIELD(HFGITR_EL2, TLBIRVAAE1, 39, 1)
193
+FIELD(HFGITR_EL2, TLBIRVALE1, 40, 1)
194
+FIELD(HFGITR_EL2, TLBIRVAALE1, 41, 1)
195
+FIELD(HFGITR_EL2, TLBIVMALLE1, 42, 1)
196
+FIELD(HFGITR_EL2, TLBIVAE1, 43, 1)
197
+FIELD(HFGITR_EL2, TLBIASIDE1, 44, 1)
198
+FIELD(HFGITR_EL2, TLBIVAAE1, 45, 1)
199
+FIELD(HFGITR_EL2, TLBIVALE1, 46, 1)
200
+FIELD(HFGITR_EL2, TLBIVAALE1, 47, 1)
201
+FIELD(HFGITR_EL2, CFPRCTX, 48, 1)
202
+FIELD(HFGITR_EL2, DVPRCTX, 49, 1)
203
+FIELD(HFGITR_EL2, CPPRCTX, 50, 1)
204
+FIELD(HFGITR_EL2, ERET, 51, 1)
205
+FIELD(HFGITR_EL2, SVC_EL0, 52, 1)
206
+FIELD(HFGITR_EL2, SVC_EL1, 53, 1)
207
+FIELD(HFGITR_EL2, DCCVAC, 54, 1)
208
+FIELD(HFGITR_EL2, NBRBINJ, 55, 1)
209
+FIELD(HFGITR_EL2, NBRBIALL, 56, 1)
210
+
211
+FIELD(HDFGRTR_EL2, DBGBCRN_EL1, 0, 1)
212
+FIELD(HDFGRTR_EL2, DBGBVRN_EL1, 1, 1)
213
+FIELD(HDFGRTR_EL2, DBGWCRN_EL1, 2, 1)
214
+FIELD(HDFGRTR_EL2, DBGWVRN_EL1, 3, 1)
215
+FIELD(HDFGRTR_EL2, MDSCR_EL1, 4, 1)
216
+FIELD(HDFGRTR_EL2, DBGCLAIM, 5, 1)
217
+FIELD(HDFGRTR_EL2, DBGAUTHSTATUS_EL1, 6, 1)
218
+FIELD(HDFGRTR_EL2, DBGPRCR_EL1, 7, 1)
219
+/* 8: RES0: OSLAR_EL1 is WO */
220
+FIELD(HDFGRTR_EL2, OSLSR_EL1, 9, 1)
221
+FIELD(HDFGRTR_EL2, OSECCR_EL1, 10, 1)
222
+FIELD(HDFGRTR_EL2, OSDLR_EL1, 11, 1)
223
+FIELD(HDFGRTR_EL2, PMEVCNTRN_EL0, 12, 1)
224
+FIELD(HDFGRTR_EL2, PMEVTYPERN_EL0, 13, 1)
225
+FIELD(HDFGRTR_EL2, PMCCFILTR_EL0, 14, 1)
226
+FIELD(HDFGRTR_EL2, PMCCNTR_EL0, 15, 1)
227
+FIELD(HDFGRTR_EL2, PMCNTEN, 16, 1)
228
+FIELD(HDFGRTR_EL2, PMINTEN, 17, 1)
229
+FIELD(HDFGRTR_EL2, PMOVS, 18, 1)
230
+FIELD(HDFGRTR_EL2, PMSELR_EL0, 19, 1)
231
+/* 20: RES0: PMSWINC_EL0 is WO */
232
+/* 21: RES0: PMCR_EL0 is WO */
233
+FIELD(HDFGRTR_EL2, PMMIR_EL1, 22, 1)
234
+FIELD(HDFGRTR_EL2, PMBLIMITR_EL1, 23, 1)
235
+FIELD(HDFGRTR_EL2, PMBPTR_EL1, 24, 1)
236
+FIELD(HDFGRTR_EL2, PMBSR_EL1, 25, 1)
237
+FIELD(HDFGRTR_EL2, PMSCR_EL1, 26, 1)
238
+FIELD(HDFGRTR_EL2, PMSEVFR_EL1, 27, 1)
239
+FIELD(HDFGRTR_EL2, PMSFCR_EL1, 28, 1)
240
+FIELD(HDFGRTR_EL2, PMSICR_EL1, 29, 1)
241
+FIELD(HDFGRTR_EL2, PMSIDR_EL1, 30, 1)
242
+FIELD(HDFGRTR_EL2, PMSIRR_EL1, 31, 1)
243
+FIELD(HDFGRTR_EL2, PMSLATFR_EL1, 32, 1)
244
+FIELD(HDFGRTR_EL2, TRC, 33, 1)
245
+FIELD(HDFGRTR_EL2, TRCAUTHSTATUS, 34, 1)
246
+FIELD(HDFGRTR_EL2, TRCAUXCTLR, 35, 1)
247
+FIELD(HDFGRTR_EL2, TRCCLAIM, 36, 1)
248
+FIELD(HDFGRTR_EL2, TRCCNTVRn, 37, 1)
249
+/* 38, 39: RES0 */
250
+FIELD(HDFGRTR_EL2, TRCID, 40, 1)
251
+FIELD(HDFGRTR_EL2, TRCIMSPECN, 41, 1)
252
+/* 42: RES0: TRCOSLAR is WO */
253
+FIELD(HDFGRTR_EL2, TRCOSLSR, 43, 1)
254
+FIELD(HDFGRTR_EL2, TRCPRGCTLR, 44, 1)
255
+FIELD(HDFGRTR_EL2, TRCSEQSTR, 45, 1)
256
+FIELD(HDFGRTR_EL2, TRCSSCSRN, 46, 1)
257
+FIELD(HDFGRTR_EL2, TRCSTATR, 47, 1)
258
+FIELD(HDFGRTR_EL2, TRCVICTLR, 48, 1)
259
+/* 49: RES0: TRFCR_EL1 is WO */
260
+FIELD(HDFGRTR_EL2, TRBBASER_EL1, 50, 1)
261
+FIELD(HDFGRTR_EL2, TRBIDR_EL1, 51, 1)
262
+FIELD(HDFGRTR_EL2, TRBLIMITR_EL1, 52, 1)
263
+FIELD(HDFGRTR_EL2, TRBMAR_EL1, 53, 1)
264
+FIELD(HDFGRTR_EL2, TRBPTR_EL1, 54, 1)
265
+FIELD(HDFGRTR_EL2, TRBSR_EL1, 55, 1)
266
+FIELD(HDFGRTR_EL2, TRBTRG_EL1, 56, 1)
267
+FIELD(HDFGRTR_EL2, PMUSERENR_EL0, 57, 1)
268
+FIELD(HDFGRTR_EL2, PMCEIDN_EL0, 58, 1)
269
+FIELD(HDFGRTR_EL2, NBRBIDR, 59, 1)
270
+FIELD(HDFGRTR_EL2, NBRBCTL, 60, 1)
271
+FIELD(HDFGRTR_EL2, NBRBDATA, 61, 1)
272
+FIELD(HDFGRTR_EL2, NPMSNEVFR_EL1, 62, 1)
273
+FIELD(HDFGRTR_EL2, PMBIDR_EL1, 63, 1)
274
+
275
+/*
276
+ * These match HDFGRTR_EL2, but bits for RO registers are RES0.
277
+ * A few bits are for WO registers, where the HDFGRTR_EL2 bit is RES0.
278
+ */
279
+FIELD(HDFGWTR_EL2, DBGBCRN_EL1, 0, 1)
280
+FIELD(HDFGWTR_EL2, DBGBVRN_EL1, 1, 1)
281
+FIELD(HDFGWTR_EL2, DBGWCRN_EL1, 2, 1)
282
+FIELD(HDFGWTR_EL2, DBGWVRN_EL1, 3, 1)
283
+FIELD(HDFGWTR_EL2, MDSCR_EL1, 4, 1)
284
+FIELD(HDFGWTR_EL2, DBGCLAIM, 5, 1)
285
+FIELD(HDFGWTR_EL2, DBGPRCR_EL1, 7, 1)
286
+FIELD(HDFGWTR_EL2, OSLAR_EL1, 8, 1)
287
+FIELD(HDFGWTR_EL2, OSLSR_EL1, 9, 1)
288
+FIELD(HDFGWTR_EL2, OSECCR_EL1, 10, 1)
289
+FIELD(HDFGWTR_EL2, OSDLR_EL1, 11, 1)
290
+FIELD(HDFGWTR_EL2, PMEVCNTRN_EL0, 12, 1)
291
+FIELD(HDFGWTR_EL2, PMEVTYPERN_EL0, 13, 1)
292
+FIELD(HDFGWTR_EL2, PMCCFILTR_EL0, 14, 1)
293
+FIELD(HDFGWTR_EL2, PMCCNTR_EL0, 15, 1)
294
+FIELD(HDFGWTR_EL2, PMCNTEN, 16, 1)
295
+FIELD(HDFGWTR_EL2, PMINTEN, 17, 1)
296
+FIELD(HDFGWTR_EL2, PMOVS, 18, 1)
297
+FIELD(HDFGWTR_EL2, PMSELR_EL0, 19, 1)
298
+FIELD(HDFGWTR_EL2, PMSWINC_EL0, 20, 1)
299
+FIELD(HDFGWTR_EL2, PMCR_EL0, 21, 1)
300
+FIELD(HDFGWTR_EL2, PMBLIMITR_EL1, 23, 1)
301
+FIELD(HDFGWTR_EL2, PMBPTR_EL1, 24, 1)
302
+FIELD(HDFGWTR_EL2, PMBSR_EL1, 25, 1)
303
+FIELD(HDFGWTR_EL2, PMSCR_EL1, 26, 1)
304
+FIELD(HDFGWTR_EL2, PMSEVFR_EL1, 27, 1)
305
+FIELD(HDFGWTR_EL2, PMSFCR_EL1, 28, 1)
306
+FIELD(HDFGWTR_EL2, PMSICR_EL1, 29, 1)
307
+FIELD(HDFGWTR_EL2, PMSIRR_EL1, 31, 1)
308
+FIELD(HDFGWTR_EL2, PMSLATFR_EL1, 32, 1)
309
+FIELD(HDFGWTR_EL2, TRC, 33, 1)
310
+FIELD(HDFGWTR_EL2, TRCAUXCTLR, 35, 1)
311
+FIELD(HDFGWTR_EL2, TRCCLAIM, 36, 1)
312
+FIELD(HDFGWTR_EL2, TRCCNTVRn, 37, 1)
313
+FIELD(HDFGWTR_EL2, TRCIMSPECN, 41, 1)
314
+FIELD(HDFGWTR_EL2, TRCOSLAR, 42, 1)
315
+FIELD(HDFGWTR_EL2, TRCPRGCTLR, 44, 1)
316
+FIELD(HDFGWTR_EL2, TRCSEQSTR, 45, 1)
317
+FIELD(HDFGWTR_EL2, TRCSSCSRN, 46, 1)
318
+FIELD(HDFGWTR_EL2, TRCVICTLR, 48, 1)
319
+FIELD(HDFGWTR_EL2, TRFCR_EL1, 49, 1)
320
+FIELD(HDFGWTR_EL2, TRBBASER_EL1, 50, 1)
321
+FIELD(HDFGWTR_EL2, TRBLIMITR_EL1, 52, 1)
322
+FIELD(HDFGWTR_EL2, TRBMAR_EL1, 53, 1)
323
+FIELD(HDFGWTR_EL2, TRBPTR_EL1, 54, 1)
324
+FIELD(HDFGWTR_EL2, TRBSR_EL1, 55, 1)
325
+FIELD(HDFGWTR_EL2, TRBTRG_EL1, 56, 1)
326
+FIELD(HDFGWTR_EL2, PMUSERENR_EL0, 57, 1)
327
+FIELD(HDFGWTR_EL2, NBRBCTL, 60, 1)
328
+FIELD(HDFGWTR_EL2, NBRBDATA, 61, 1)
329
+FIELD(HDFGWTR_EL2, NPMSNEVFR_EL1, 62, 1)
330
+
331
typedef struct ARMCPRegInfo ARMCPRegInfo;
332
333
/*
334
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
335
index XXXXXXX..XXXXXXX 100644
336
--- a/target/arm/cpu.h
337
+++ b/target/arm/cpu.h
338
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
339
uint64_t disr_el1;
340
uint64_t vdisr_el2;
341
uint64_t vsesr_el2;
342
+
343
+ /*
344
+ * Fine-Grained Trap registers. We store these as arrays so the
345
+ * access checking code doesn't have to manually select
346
+ * HFGRTR_EL2 vs HFDFGRTR_EL2 etc when looking up the bit to test.
347
+ * FEAT_FGT2 will add more elements to these arrays.
348
+ */
349
+ uint64_t fgt_read[2]; /* HFGRTR, HDFGRTR */
350
+ uint64_t fgt_write[2]; /* HFGWTR, HDFGWTR */
351
+ uint64_t fgt_exec[1]; /* HFGITR */
352
} cp15;
353
354
struct {
355
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_tgran64_2(const ARMISARegisters *id)
356
return t >= 2 || (t == 0 && isar_feature_aa64_tgran64(id));
357
}
358
359
+static inline bool isar_feature_aa64_fgt(const ARMISARegisters *id)
360
+{
361
+ return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, FGT) != 0;
362
+}
363
+
364
static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id)
365
{
366
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0;
367
diff --git a/target/arm/helper.c b/target/arm/helper.c
368
index XXXXXXX..XXXXXXX 100644
369
--- a/target/arm/helper.c
370
+++ b/target/arm/helper.c
371
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
372
if (cpu_isar_feature(aa64_hcx, cpu)) {
373
valid_mask |= SCR_HXEN;
374
}
375
+ if (cpu_isar_feature(aa64_fgt, cpu)) {
376
+ valid_mask |= SCR_FGTEN;
377
+ }
378
} else {
379
valid_mask &= ~(SCR_RW | SCR_ST);
380
if (cpu_isar_feature(aa32_ras, cpu)) {
381
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo scxtnum_reginfo[] = {
382
.access = PL3_RW,
383
.fieldoffset = offsetof(CPUARMState, scxtnum_el[3]) },
384
};
385
+
386
+static CPAccessResult access_fgt(CPUARMState *env, const ARMCPRegInfo *ri,
387
+ bool isread)
388
+{
389
+ if (arm_current_el(env) == 2 &&
390
+ arm_feature(env, ARM_FEATURE_EL3) && !(env->cp15.scr_el3 & SCR_FGTEN)) {
391
+ return CP_ACCESS_TRAP_EL3;
392
+ }
393
+ return CP_ACCESS_OK;
394
+}
395
+
396
+static const ARMCPRegInfo fgt_reginfo[] = {
397
+ { .name = "HFGRTR_EL2", .state = ARM_CP_STATE_AA64,
398
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
399
+ .access = PL2_RW, .accessfn = access_fgt,
400
+ .fieldoffset = offsetof(CPUARMState, cp15.fgt_read[FGTREG_HFGRTR]) },
401
+ { .name = "HFGWTR_EL2", .state = ARM_CP_STATE_AA64,
402
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 5,
403
+ .access = PL2_RW, .accessfn = access_fgt,
404
+ .fieldoffset = offsetof(CPUARMState, cp15.fgt_write[FGTREG_HFGWTR]) },
405
+ { .name = "HDFGRTR_EL2", .state = ARM_CP_STATE_AA64,
406
+ .opc0 = 3, .opc1 = 4, .crn = 3, .crm = 1, .opc2 = 4,
407
+ .access = PL2_RW, .accessfn = access_fgt,
408
+ .fieldoffset = offsetof(CPUARMState, cp15.fgt_read[FGTREG_HDFGRTR]) },
409
+ { .name = "HDFGWTR_EL2", .state = ARM_CP_STATE_AA64,
410
+ .opc0 = 3, .opc1 = 4, .crn = 3, .crm = 1, .opc2 = 5,
411
+ .access = PL2_RW, .accessfn = access_fgt,
412
+ .fieldoffset = offsetof(CPUARMState, cp15.fgt_write[FGTREG_HDFGWTR]) },
413
+ { .name = "HFGITR_EL2", .state = ARM_CP_STATE_AA64,
414
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 6,
415
+ .access = PL2_RW, .accessfn = access_fgt,
416
+ .fieldoffset = offsetof(CPUARMState, cp15.fgt_exec[FGTREG_HFGITR]) },
417
+};
418
#endif /* TARGET_AARCH64 */
419
420
static CPAccessResult access_predinv(CPUARMState *env, const ARMCPRegInfo *ri,
421
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
422
if (cpu_isar_feature(aa64_scxtnum, cpu)) {
423
define_arm_cp_regs(cpu, scxtnum_reginfo);
424
}
425
+
426
+ if (cpu_isar_feature(aa64_fgt, cpu)) {
427
+ define_arm_cp_regs(cpu, fgt_reginfo);
428
+ }
429
#endif
430
431
if (cpu_isar_feature(any_predinv, cpu)) {
432
--
433
2.34.1
Implement the machinery for fine-grained traps on normal sysregs.
Any sysreg with a fine-grained trap will set the new field to
indicate which FGT register bit it should trap on.

FGT traps only happen when an AArch64 EL2 enables them for
an AArch64 EL1. They therefore are only relevant for AArch32
cpregs when the cpreg can be accessed from EL0. The logic
in access_check_cp_reg() will check this, so it is safe to
add a .fgt marking to an ARM_CP_STATE_BOTH ARMCPRegInfo.

The DO_BIT and DO_REV_BIT macros define enum constants FGT_##bitname
which can be used to specify the FGT bit, eg
.fgt = FGT_AFSR0_EL1
(We assume that there is no bit name duplication across the FGT
registers, for brevity's sake.)

Subsequent commits will add the .fgt fields to the relevant register
definitions and define the FGT_nnn values for them.

Note that some of the FGT traps are for instructions that we don't
handle via the cpregs mechanisms (mostly these are instruction traps).
Those we will have to handle separately.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-10-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-10-peter.maydell@linaro.org
---
target/arm/cpregs.h | 72 ++++++++++++++++++++++++++++++++++++++
target/arm/cpu.h | 1 +
target/arm/internals.h | 20 +++++++++++
target/arm/translate.h | 2 ++
target/arm/helper.c | 9 +++++
target/arm/op_helper.c | 30 ++++++++++++++++
target/arm/translate-a64.c | 3 +-
target/arm/translate.c | 2 ++
8 files changed, 138 insertions(+), 1 deletion(-)

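(Editor's note, not part of the patch: a minimal sketch of how the new encoding is meant to be consumed, based on the FIELD(FGT, ...) layout and the reversed-sense handling in the diff below. fgt_should_trap() is an illustrative helper name, not something this patch adds.)

#include <stdbool.h>
#include <stdint.h>

/*
 * FGTBit encoding, mirroring the FIELD(FGT, ...) definitions in the diff:
 *   bits [5:0]   BITPOS -- bit position within the 64-bit trap register
 *   bits [8:6]   IDX    -- index into fgt_read[]/fgt_write[]/fgt_exec[]
 *   bit  [9]     REV    -- set if the bit sense is reversed (1 means "no trap")
 *   bits [12:10] TYPE   -- selects which of the R/W/EXEC arrays to consult
 */
static bool fgt_should_trap(uint64_t trapword, uint32_t fgt)
{
    unsigned bitpos = fgt & 0x3f;             /* FGT.BITPOS */
    bool rev = (fgt >> 9) & 1;                /* FGT.REV */
    bool trapbit = (trapword >> bitpos) & 1;

    /* normal-sense bits trap when set; reversed-sense bits trap when clear */
    return trapbit != rev;
}
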
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/cpregs.h
43
+++ b/target/arm/cpregs.h
44
@@ -XXX,XX +XXX,XX @@ FIELD(HDFGWTR_EL2, NBRBCTL, 60, 1)
45
FIELD(HDFGWTR_EL2, NBRBDATA, 61, 1)
46
FIELD(HDFGWTR_EL2, NPMSNEVFR_EL1, 62, 1)
47
48
+/* Which fine-grained trap bit register to check, if any */
49
+FIELD(FGT, TYPE, 10, 3)
50
+FIELD(FGT, REV, 9, 1) /* Is bit sense reversed? */
51
+FIELD(FGT, IDX, 6, 3) /* Index within a uint64_t[] array */
52
+FIELD(FGT, BITPOS, 0, 6) /* Bit position within the uint64_t */
53
+
54
+/*
55
+ * Macros to define FGT_##bitname enum constants to use in ARMCPRegInfo::fgt
56
+ * fields. We assume for brevity's sake that there are no duplicated
57
+ * bit names across the various FGT registers.
58
+ */
59
+#define DO_BIT(REG, BITNAME) \
60
+ FGT_##BITNAME = FGT_##REG | R_##REG##_EL2_##BITNAME##_SHIFT
61
+
62
+/* Some bits have reversed sense, so 0 means trap and 1 means not */
63
+#define DO_REV_BIT(REG, BITNAME) \
64
+ FGT_##BITNAME = FGT_##REG | FGT_REV | R_##REG##_EL2_##BITNAME##_SHIFT
65
+
66
+typedef enum FGTBit {
67
+ /*
68
+ * These bits tell us which register arrays to use:
69
+ * if FGT_R is set then reads are checked against fgt_read[];
70
+ * if FGT_W is set then writes are checked against fgt_write[];
71
+ * if FGT_EXEC is set then all accesses are checked against fgt_exec[].
72
+ *
73
+ * For almost all bits in the R/W register pairs, the bit exists in
74
+ * both registers for a RW register, in HFGRTR/HDFGRTR for a RO register
75
+ * with the corresponding HFGWTR/HDFGTWTR bit being RES0, and vice-versa
76
+ * for a WO register. There are unfortunately a couple of exceptions
77
+ * (PMCR_EL0, TRFCR_EL1) where the register being trapped is RW but
78
+ * the FGT system only allows trapping of writes, not reads.
79
+ *
80
+ * Note that we arrange these bits so that a 0 FGTBit means "no trap".
81
+ */
82
+ FGT_R = 1 << R_FGT_TYPE_SHIFT,
83
+ FGT_W = 2 << R_FGT_TYPE_SHIFT,
84
+ FGT_EXEC = 4 << R_FGT_TYPE_SHIFT,
85
+ FGT_RW = FGT_R | FGT_W,
86
+ /* Bit to identify whether trap bit is reversed sense */
87
+ FGT_REV = R_FGT_REV_MASK,
88
+
89
+ /*
90
+ * If a bit exists in HFGRTR/HDFGRTR then either the register being
91
+ * trapped is RO or the bit also exists in HFGWTR/HDFGWTR, so we either
92
+ * want to trap for both reads and writes or else it's harmless to mark
93
+ * it as trap-on-writes.
94
+ * If a bit exists only in HFGWTR/HDFGWTR then either the register being
95
+ * trapped is WO, or else it is one of the two oddball special cases
96
+ * which are RW but have only a write trap. We mark these as only
97
+ * FGT_W so we get the right behaviour for those special cases.
98
+ * (If a bit was added in future that provided only a read trap for an
99
+ * RW register we'd need to do something special to get the FGT_R bit
100
+ * only. But this seems unlikely to happen.)
101
+ *
102
+ * So for the DO_BIT/DO_REV_BIT macros: use FGT_HFGRTR/FGT_HDFGRTR if
103
+ * the bit exists in that register. Otherwise use FGT_HFGWTR/FGT_HDFGWTR.
104
+ */
105
+ FGT_HFGRTR = FGT_RW | (FGTREG_HFGRTR << R_FGT_IDX_SHIFT),
106
+ FGT_HFGWTR = FGT_W | (FGTREG_HFGWTR << R_FGT_IDX_SHIFT),
107
+ FGT_HDFGRTR = FGT_RW | (FGTREG_HDFGRTR << R_FGT_IDX_SHIFT),
108
+ FGT_HDFGWTR = FGT_W | (FGTREG_HDFGWTR << R_FGT_IDX_SHIFT),
109
+ FGT_HFGITR = FGT_EXEC | (FGTREG_HFGITR << R_FGT_IDX_SHIFT),
110
+} FGTBit;
111
+
112
+#undef DO_BIT
113
+#undef DO_REV_BIT
114
+
115
typedef struct ARMCPRegInfo ARMCPRegInfo;
116
117
/*
118
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
119
CPAccessRights access;
120
/* Security state: ARM_CP_SECSTATE_* bits/values */
121
CPSecureState secure;
122
+ /*
123
+ * Which fine-grained trap register bit to check, if any. This
124
+ * value encodes both the trap register and bit within it.
125
+ */
126
+ FGTBit fgt;
127
/*
128
* The opaque pointer passed to define_arm_cp_regs_with_opaque() when
129
* this register was defined: can be used to hand data through to the
130
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
131
index XXXXXXX..XXXXXXX 100644
132
--- a/target/arm/cpu.h
133
+++ b/target/arm/cpu.h
134
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, FPEXC_EL, 8, 2)
135
/* Memory operations require alignment: SCTLR_ELx.A or CCR.UNALIGN_TRP */
136
FIELD(TBFLAG_ANY, ALIGN_MEM, 10, 1)
137
FIELD(TBFLAG_ANY, PSTATE__IL, 11, 1)
138
+FIELD(TBFLAG_ANY, FGT_ACTIVE, 12, 1)
139
140
/*
141
* Bit usage when in AArch32 state, both A- and M-profile.
15
diff --git a/target/arm/internals.h b/target/arm/internals.h
142
diff --git a/target/arm/internals.h b/target/arm/internals.h
16
index XXXXXXX..XXXXXXX 100644
143
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/internals.h
144
--- a/target/arm/internals.h
18
+++ b/target/arm/internals.h
145
+++ b/target/arm/internals.h
19
@@ -XXX,XX +XXX,XX @@ void arm_translate_init(void);
146
@@ -XXX,XX +XXX,XX @@ static inline uint64_t arm_mdcr_el2_eff(CPUARMState *env)
20
void arm_cpu_synchronize_from_tb(CPUState *cs, const TranslationBlock *tb);
147
((1 << (1 - 1)) | (1 << (2 - 1)) | \
21
#endif /* CONFIG_TCG */
148
(1 << (4 - 1)) | (1 << (8 - 1)) | (1 << (16 - 1)))
22
149
23
+/**
150
+/*
24
+ * aarch64_sve_zcr_get_valid_len:
151
+ * Return true if it is possible to take a fine-grained-trap to EL2.
25
+ * @cpu: cpu context
26
+ * @start_len: maximum len to consider
27
+ *
28
+ * Return the maximum supported sve vector length <= @start_len.
29
+ * Note that both @start_len and the return value are in units
30
+ * of ZCR_ELx.LEN, so the vector bit length is (x + 1) * 128.
31
+ */
152
+ */
32
+uint32_t aarch64_sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len);
153
+static inline bool arm_fgt_active(CPUARMState *env, int el)
33
154
+{
34
enum arm_fprounding {
155
+ /*
35
FPROUNDING_TIEEVEN,
156
+ * The Arm ARM only requires the "{E2H,TGE} != {1,1}" test for traps
157
+ * that can affect EL0, but it is harmless to do the test also for
158
+ * traps on registers that are only accessible at EL1 because if the test
159
+ * returns true then we can't be executing at EL1 anyway.
160
+ * FGT traps only happen when EL2 is enabled and EL1 is AArch64;
161
+ * traps from AArch32 only happen for the EL0 is AArch32 case.
162
+ */
163
+ return cpu_isar_feature(aa64_fgt, env_archcpu(env)) &&
164
+ el < 2 && arm_is_el2_enabled(env) &&
165
+ arm_el_is_aa64(env, 1) &&
166
+ (arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE) &&
167
+ (!arm_feature(env, ARM_FEATURE_EL3) || (env->cp15.scr_el3 & SCR_FGTEN));
168
+}
169
+
170
#endif
171
diff --git a/target/arm/translate.h b/target/arm/translate.h
172
index XXXXXXX..XXXXXXX 100644
173
--- a/target/arm/translate.h
174
+++ b/target/arm/translate.h
175
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
176
bool is_nonstreaming;
177
/* True if MVE insns are definitely not predicated by VPR or LTPSIZE */
178
bool mve_no_pred;
179
+ /* True if fine-grained traps are active */
180
+ bool fgt_active;
181
/*
182
* >= 0, a copy of PSTATE.BTYPE, which will be 0 without v8.5-BTI.
183
* < 0, set by the current instruction.
36
diff --git a/target/arm/helper.c b/target/arm/helper.c
184
diff --git a/target/arm/helper.c b/target/arm/helper.c
37
index XXXXXXX..XXXXXXX 100644
185
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/helper.c
186
--- a/target/arm/helper.c
39
+++ b/target/arm/helper.c
187
+++ b/target/arm/helper.c
40
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
188
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_common(CPUARMState *env, int fp_el,
41
return 0;
189
if (arm_singlestep_active(env)) {
190
DP_TBFLAG_ANY(flags, SS_ACTIVE, 1);
191
}
192
+
193
return flags;
42
}
194
}
43
195
44
-static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
196
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a32(CPUARMState *env, int fp_el,
45
+uint32_t aarch64_sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
197
DP_TBFLAG_A32(flags, HSTR_ACTIVE, 1);
46
{
198
}
47
uint32_t end_len;
199
48
200
+ if (arm_fgt_active(env, el)) {
49
@@ -XXX,XX +XXX,XX @@ uint32_t sve_zcr_len_for_el(CPUARMState *env, int el)
201
+ DP_TBFLAG_ANY(flags, FGT_ACTIVE, 1);
50
zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
202
+ }
51
}
203
+
52
204
if (env->uncached_cpsr & CPSR_IL) {
53
- return sve_zcr_get_valid_len(cpu, zcr_len);
205
DP_TBFLAG_ANY(flags, PSTATE__IL, 1);
54
+ return aarch64_sve_zcr_get_valid_len(cpu, zcr_len);
206
}
55
}
207
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
56
208
DP_TBFLAG_ANY(flags, PSTATE__IL, 1);
57
static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
209
}
210
211
+ if (arm_fgt_active(env, el)) {
212
+ DP_TBFLAG_ANY(flags, FGT_ACTIVE, 1);
213
+ }
214
+
215
if (cpu_isar_feature(aa64_mte, env_archcpu(env))) {
216
/*
217
* Set MTE_ACTIVE if any access may be Checked, and leave clear
218
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
219
index XXXXXXX..XXXXXXX 100644
220
--- a/target/arm/op_helper.c
221
+++ b/target/arm/op_helper.c
222
@@ -XXX,XX +XXX,XX @@ const void *HELPER(access_check_cp_reg)(CPUARMState *env, uint32_t key,
223
}
224
}
225
226
+ /*
227
+ * Fine-grained traps also are lower priority than undef-to-EL1,
228
+ * higher priority than trap-to-EL3, and we don't care about priority
229
+ * order with other EL2 traps because the syndrome value is the same.
230
+ */
231
+ if (arm_fgt_active(env, arm_current_el(env))) {
232
+ uint64_t trapword = 0;
233
+ unsigned int idx = FIELD_EX32(ri->fgt, FGT, IDX);
234
+ unsigned int bitpos = FIELD_EX32(ri->fgt, FGT, BITPOS);
235
+ bool rev = FIELD_EX32(ri->fgt, FGT, REV);
236
+ bool trapbit;
237
+
238
+ if (ri->fgt & FGT_EXEC) {
239
+ assert(idx < ARRAY_SIZE(env->cp15.fgt_exec));
240
+ trapword = env->cp15.fgt_exec[idx];
241
+ } else if (isread && (ri->fgt & FGT_R)) {
242
+ assert(idx < ARRAY_SIZE(env->cp15.fgt_read));
243
+ trapword = env->cp15.fgt_read[idx];
244
+ } else if (!isread && (ri->fgt & FGT_W)) {
245
+ assert(idx < ARRAY_SIZE(env->cp15.fgt_write));
246
+ trapword = env->cp15.fgt_write[idx];
247
+ }
248
+
249
+ trapbit = extract64(trapword, bitpos, 1);
250
+ if (trapbit != rev) {
251
+ res = CP_ACCESS_TRAP_EL2;
252
+ goto fail;
253
+ }
254
+ }
255
+
256
if (likely(res == CP_ACCESS_OK)) {
257
return ri;
258
}
259
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
260
index XXXXXXX..XXXXXXX 100644
261
--- a/target/arm/translate-a64.c
262
+++ b/target/arm/translate-a64.c
263
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
264
return;
265
}
266
267
- if (ri->accessfn) {
268
+ if (ri->accessfn || (ri->fgt && s->fgt_active)) {
269
/* Emit code to perform further access permissions checks at
270
* runtime; this may result in an exception.
271
*/
272
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
273
dc->fp_excp_el = EX_TBFLAG_ANY(tb_flags, FPEXC_EL);
274
dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
275
dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
276
+ dc->fgt_active = EX_TBFLAG_ANY(tb_flags, FGT_ACTIVE);
277
dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
278
dc->sme_excp_el = EX_TBFLAG_A64(tb_flags, SMEEXC_EL);
279
dc->vl = (EX_TBFLAG_A64(tb_flags, VL) + 1) * 16;
280
diff --git a/target/arm/translate.c b/target/arm/translate.c
281
index XXXXXXX..XXXXXXX 100644
282
--- a/target/arm/translate.c
283
+++ b/target/arm/translate.c
284
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
285
}
286
287
if ((s->hstr_active && s->current_el == 0) || ri->accessfn ||
288
+ (ri->fgt && s->fgt_active) ||
289
(arm_dc_feature(s, ARM_FEATURE_XSCALE) && cpnum < 14)) {
290
/*
291
* Emit code to perform further access permissions checks at
292
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
293
dc->fp_excp_el = EX_TBFLAG_ANY(tb_flags, FPEXC_EL);
294
dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
295
dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
296
+ dc->fgt_active = EX_TBFLAG_ANY(tb_flags, FGT_ACTIVE);
297
298
if (arm_feature(env, ARM_FEATURE_M)) {
299
dc->vfp_enabled = 1;
58
--
300
--
59
2.20.1
301
2.34.1
60
61
New patch
1
Mark up the sysreg definitions for the registers trapped
2
by HFGRTR/HFGWTR bits 0..11.
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Tested-by: Fuad Tabba <tabba@google.com>
7
Message-id: 20230130182459.3309057-11-peter.maydell@linaro.org
8
Message-id: 20230127175507.2895013-11-peter.maydell@linaro.org
9
---
10
target/arm/cpregs.h | 14 ++++++++++++++
11
target/arm/helper.c | 17 +++++++++++++++++
12
2 files changed, 31 insertions(+)
13
14
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpregs.h
17
+++ b/target/arm/cpregs.h
18
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
19
FGT_HDFGRTR = FGT_RW | (FGTREG_HDFGRTR << R_FGT_IDX_SHIFT),
20
FGT_HDFGWTR = FGT_W | (FGTREG_HDFGWTR << R_FGT_IDX_SHIFT),
21
FGT_HFGITR = FGT_EXEC | (FGTREG_HFGITR << R_FGT_IDX_SHIFT),
22
+
23
+ /* Trap bits in HFGRTR_EL2 / HFGWTR_EL2, starting from bit 0. */
24
+ DO_BIT(HFGRTR, AFSR0_EL1),
25
+ DO_BIT(HFGRTR, AFSR1_EL1),
26
+ DO_BIT(HFGRTR, AIDR_EL1),
27
+ DO_BIT(HFGRTR, AMAIR_EL1),
28
+ DO_BIT(HFGRTR, APDAKEY),
29
+ DO_BIT(HFGRTR, APDBKEY),
30
+ DO_BIT(HFGRTR, APGAKEY),
31
+ DO_BIT(HFGRTR, APIAKEY),
32
+ DO_BIT(HFGRTR, APIBKEY),
33
+ DO_BIT(HFGRTR, CCSIDR_EL1),
34
+ DO_BIT(HFGRTR, CLIDR_EL1),
35
+ DO_BIT(HFGRTR, CONTEXTIDR_EL1),
36
} FGTBit;
37
38
#undef DO_BIT
39
diff --git a/target/arm/helper.c b/target/arm/helper.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/helper.c
42
+++ b/target/arm/helper.c
43
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
44
{ .name = "CONTEXTIDR_EL1", .state = ARM_CP_STATE_BOTH,
45
.opc0 = 3, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 1,
46
.access = PL1_RW, .accessfn = access_tvm_trvm,
47
+ .fgt = FGT_CONTEXTIDR_EL1,
48
.secure = ARM_CP_SECSTATE_NS,
49
.fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[1]),
50
.resetvalue = 0, .writefn = contextidr_write, .raw_writefn = raw_write, },
51
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
52
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = 0,
53
.access = PL1_R,
54
.accessfn = access_tid4,
55
+ .fgt = FGT_CCSIDR_EL1,
56
.readfn = ccsidr_read, .type = ARM_CP_NO_RAW },
57
{ .name = "CSSELR", .state = ARM_CP_STATE_BOTH,
58
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 2, .opc2 = 0,
59
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
60
.opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 7,
61
.access = PL1_R, .type = ARM_CP_CONST,
62
.accessfn = access_aa64_tid1,
63
+ .fgt = FGT_AIDR_EL1,
64
.resetvalue = 0 },
65
/*
66
* Auxiliary fault status registers: these also are IMPDEF, and we
67
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
68
{ .name = "AFSR0_EL1", .state = ARM_CP_STATE_BOTH,
69
.opc0 = 3, .opc1 = 0, .crn = 5, .crm = 1, .opc2 = 0,
70
.access = PL1_RW, .accessfn = access_tvm_trvm,
71
+ .fgt = FGT_AFSR0_EL1,
72
.type = ARM_CP_CONST, .resetvalue = 0 },
73
{ .name = "AFSR1_EL1", .state = ARM_CP_STATE_BOTH,
74
.opc0 = 3, .opc1 = 0, .crn = 5, .crm = 1, .opc2 = 1,
75
.access = PL1_RW, .accessfn = access_tvm_trvm,
76
+ .fgt = FGT_AFSR1_EL1,
77
.type = ARM_CP_CONST, .resetvalue = 0 },
78
/*
79
* MAIR can just read-as-written because we don't implement caches
80
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lpae_cp_reginfo[] = {
81
{ .name = "AMAIR0", .state = ARM_CP_STATE_BOTH,
82
.opc0 = 3, .crn = 10, .crm = 3, .opc1 = 0, .opc2 = 0,
83
.access = PL1_RW, .accessfn = access_tvm_trvm,
84
+ .fgt = FGT_AMAIR_EL1,
85
.type = ARM_CP_CONST, .resetvalue = 0 },
86
/* AMAIR1 is mapped to AMAIR_EL1[63:32] */
87
{ .name = "AMAIR1", .cp = 15, .crn = 10, .crm = 3, .opc1 = 0, .opc2 = 1,
88
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pauth_reginfo[] = {
89
{ .name = "APDAKEYLO_EL1", .state = ARM_CP_STATE_AA64,
90
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 0,
91
.access = PL1_RW, .accessfn = access_pauth,
92
+ .fgt = FGT_APDAKEY,
93
.fieldoffset = offsetof(CPUARMState, keys.apda.lo) },
94
{ .name = "APDAKEYHI_EL1", .state = ARM_CP_STATE_AA64,
95
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 1,
96
.access = PL1_RW, .accessfn = access_pauth,
97
+ .fgt = FGT_APDAKEY,
98
.fieldoffset = offsetof(CPUARMState, keys.apda.hi) },
99
{ .name = "APDBKEYLO_EL1", .state = ARM_CP_STATE_AA64,
100
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 2,
101
.access = PL1_RW, .accessfn = access_pauth,
102
+ .fgt = FGT_APDBKEY,
103
.fieldoffset = offsetof(CPUARMState, keys.apdb.lo) },
104
{ .name = "APDBKEYHI_EL1", .state = ARM_CP_STATE_AA64,
105
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 3,
106
.access = PL1_RW, .accessfn = access_pauth,
107
+ .fgt = FGT_APDBKEY,
108
.fieldoffset = offsetof(CPUARMState, keys.apdb.hi) },
109
{ .name = "APGAKEYLO_EL1", .state = ARM_CP_STATE_AA64,
110
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 3, .opc2 = 0,
111
.access = PL1_RW, .accessfn = access_pauth,
112
+ .fgt = FGT_APGAKEY,
113
.fieldoffset = offsetof(CPUARMState, keys.apga.lo) },
114
{ .name = "APGAKEYHI_EL1", .state = ARM_CP_STATE_AA64,
115
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 3, .opc2 = 1,
116
.access = PL1_RW, .accessfn = access_pauth,
117
+ .fgt = FGT_APGAKEY,
118
.fieldoffset = offsetof(CPUARMState, keys.apga.hi) },
119
{ .name = "APIAKEYLO_EL1", .state = ARM_CP_STATE_AA64,
120
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 0,
121
.access = PL1_RW, .accessfn = access_pauth,
122
+ .fgt = FGT_APIAKEY,
123
.fieldoffset = offsetof(CPUARMState, keys.apia.lo) },
124
{ .name = "APIAKEYHI_EL1", .state = ARM_CP_STATE_AA64,
125
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 1,
126
.access = PL1_RW, .accessfn = access_pauth,
127
+ .fgt = FGT_APIAKEY,
128
.fieldoffset = offsetof(CPUARMState, keys.apia.hi) },
129
{ .name = "APIBKEYLO_EL1", .state = ARM_CP_STATE_AA64,
130
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 2,
131
.access = PL1_RW, .accessfn = access_pauth,
132
+ .fgt = FGT_APIBKEY,
133
.fieldoffset = offsetof(CPUARMState, keys.apib.lo) },
134
{ .name = "APIBKEYHI_EL1", .state = ARM_CP_STATE_AA64,
135
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 3,
136
.access = PL1_RW, .accessfn = access_pauth,
137
+ .fgt = FGT_APIBKEY,
138
.fieldoffset = offsetof(CPUARMState, keys.apib.hi) },
139
};
140
141
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
142
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = 1,
143
.access = PL1_R, .type = ARM_CP_CONST,
144
.accessfn = access_tid4,
145
+ .fgt = FGT_CLIDR_EL1,
146
.resetvalue = cpu->clidr
147
};
148
define_one_arm_cp_reg(cpu, &clidr);
149
--
150
2.34.1
New patch
1
Mark up the sysreg definitions for the registers trapped
2
by HFGRTR/HFGWTR bits 12..23.
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Tested-by: Fuad Tabba <tabba@google.com>
7
Message-id: 20230130182459.3309057-12-peter.maydell@linaro.org
8
Message-id: 20230127175507.2895013-12-peter.maydell@linaro.org
9
---
10
target/arm/cpregs.h | 12 ++++++++++++
11
target/arm/helper.c | 12 ++++++++++++
12
2 files changed, 24 insertions(+)
13
14
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpregs.h
17
+++ b/target/arm/cpregs.h
18
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
19
DO_BIT(HFGRTR, CCSIDR_EL1),
20
DO_BIT(HFGRTR, CLIDR_EL1),
21
DO_BIT(HFGRTR, CONTEXTIDR_EL1),
22
+ DO_BIT(HFGRTR, CPACR_EL1),
23
+ DO_BIT(HFGRTR, CSSELR_EL1),
24
+ DO_BIT(HFGRTR, CTR_EL0),
25
+ DO_BIT(HFGRTR, DCZID_EL0),
26
+ DO_BIT(HFGRTR, ESR_EL1),
27
+ DO_BIT(HFGRTR, FAR_EL1),
28
+ DO_BIT(HFGRTR, ISR_EL1),
29
+ DO_BIT(HFGRTR, LORC_EL1),
30
+ DO_BIT(HFGRTR, LOREA_EL1),
31
+ DO_BIT(HFGRTR, LORID_EL1),
32
+ DO_BIT(HFGRTR, LORN_EL1),
33
+ DO_BIT(HFGRTR, LORSA_EL1),
34
} FGTBit;
35
36
#undef DO_BIT
37
diff --git a/target/arm/helper.c b/target/arm/helper.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/helper.c
40
+++ b/target/arm/helper.c
41
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
42
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0, },
43
{ .name = "CPACR", .state = ARM_CP_STATE_BOTH, .opc0 = 3,
44
.crn = 1, .crm = 0, .opc1 = 0, .opc2 = 2, .accessfn = cpacr_access,
45
+ .fgt = FGT_CPACR_EL1,
46
.access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.cpacr_el1),
47
.resetfn = cpacr_reset, .writefn = cpacr_write, .readfn = cpacr_read },
48
};
49
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
50
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 2, .opc2 = 0,
51
.access = PL1_RW,
52
.accessfn = access_tid4,
53
+ .fgt = FGT_CSSELR_EL1,
54
.writefn = csselr_write, .resetvalue = 0,
55
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.csselr_s),
56
offsetof(CPUARMState, cp15.csselr_ns) } },
57
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
58
.resetfn = arm_cp_reset_ignore },
59
{ .name = "ISR_EL1", .state = ARM_CP_STATE_BOTH,
60
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 1, .opc2 = 0,
61
+ .fgt = FGT_ISR_EL1,
62
.type = ARM_CP_NO_RAW, .access = PL1_R, .readfn = isr_read },
63
/* 32 bit ITLB invalidates */
64
{ .name = "ITLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 0,
65
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_pmsa_cp_reginfo[] = {
66
{ .name = "FAR_EL1", .state = ARM_CP_STATE_AA64,
67
.opc0 = 3, .crn = 6, .crm = 0, .opc1 = 0, .opc2 = 0,
68
.access = PL1_RW, .accessfn = access_tvm_trvm,
69
+ .fgt = FGT_FAR_EL1,
70
.fieldoffset = offsetof(CPUARMState, cp15.far_el[1]),
71
.resetvalue = 0, },
72
};
73
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
74
{ .name = "ESR_EL1", .state = ARM_CP_STATE_AA64,
75
.opc0 = 3, .crn = 5, .crm = 2, .opc1 = 0, .opc2 = 0,
76
.access = PL1_RW, .accessfn = access_tvm_trvm,
77
+ .fgt = FGT_ESR_EL1,
78
.fieldoffset = offsetof(CPUARMState, cp15.esr_el[1]), .resetvalue = 0, },
79
{ .name = "TTBR0_EL1", .state = ARM_CP_STATE_BOTH,
80
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 0,
81
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
82
{ .name = "DCZID_EL0", .state = ARM_CP_STATE_AA64,
83
.opc0 = 3, .opc1 = 3, .opc2 = 7, .crn = 0, .crm = 0,
84
.access = PL0_R, .type = ARM_CP_NO_RAW,
85
+ .fgt = FGT_DCZID_EL0,
86
.readfn = aa64_dczid_read },
87
{ .name = "DC_ZVA", .state = ARM_CP_STATE_AA64,
88
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 4, .opc2 = 1,
89
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lor_reginfo[] = {
90
{ .name = "LORSA_EL1", .state = ARM_CP_STATE_AA64,
91
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 0,
92
.access = PL1_RW, .accessfn = access_lor_other,
93
+ .fgt = FGT_LORSA_EL1,
94
.type = ARM_CP_CONST, .resetvalue = 0 },
95
{ .name = "LOREA_EL1", .state = ARM_CP_STATE_AA64,
96
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 1,
97
.access = PL1_RW, .accessfn = access_lor_other,
98
+ .fgt = FGT_LOREA_EL1,
99
.type = ARM_CP_CONST, .resetvalue = 0 },
100
{ .name = "LORN_EL1", .state = ARM_CP_STATE_AA64,
101
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 2,
102
.access = PL1_RW, .accessfn = access_lor_other,
103
+ .fgt = FGT_LORN_EL1,
104
.type = ARM_CP_CONST, .resetvalue = 0 },
105
{ .name = "LORC_EL1", .state = ARM_CP_STATE_AA64,
106
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 3,
107
.access = PL1_RW, .accessfn = access_lor_other,
108
+ .fgt = FGT_LORC_EL1,
109
.type = ARM_CP_CONST, .resetvalue = 0 },
110
{ .name = "LORID_EL1", .state = ARM_CP_STATE_AA64,
111
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
112
.access = PL1_R, .accessfn = access_lor_ns,
113
+ .fgt = FGT_LORID_EL1,
114
.type = ARM_CP_CONST, .resetvalue = 0 },
115
};
116
117
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
118
{ .name = "CTR_EL0", .state = ARM_CP_STATE_AA64,
119
.opc0 = 3, .opc1 = 3, .opc2 = 1, .crn = 0, .crm = 0,
120
.access = PL0_R, .accessfn = ctr_el0_access,
121
+ .fgt = FGT_CTR_EL0,
122
.type = ARM_CP_CONST, .resetvalue = cpu->ctr },
123
/* TCMTR and TLBTR exist in v8 but have no 64-bit versions */
124
{ .name = "TCMTR",
125
--
126
2.34.1
Mark up the sysreg definitions for the registers trapped
by HFGRTR/HFGWTR bits 24..35.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-13-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-13-peter.maydell@linaro.org
---
target/arm/cpregs.h | 12 ++++++++++++
target/arm/helper.c | 14 ++++++++++++++
2 files changed, 26 insertions(+)
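
One detail worth noting in the hunks below: the AArch32 views of these
registers (for example TPIDRURW and TPIDRURO) are given the same .fgt
marking as their AArch64 counterparts, so the same trap bit is checked for
both encodings. A condensed illustration, taken from the hunks with most
fields elided:

    /* Condensed from the hunks below; fields elided for brevity. */
    { .name = "TPIDR_EL0",   .fgt = FGT_TPIDR_EL0,   /* ... */ },
    { .name = "TPIDRURW",    .fgt = FGT_TPIDR_EL0,   /* ... */ },
    { .name = "TPIDRRO_EL0", .fgt = FGT_TPIDRRO_EL0, /* ... */ },
    { .name = "TPIDRURO",    .fgt = FGT_TPIDRRO_EL0, /* ... */ },
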
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpregs.h
17
+++ b/target/arm/cpregs.h
18
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
19
DO_BIT(HFGRTR, LORID_EL1),
20
DO_BIT(HFGRTR, LORN_EL1),
21
DO_BIT(HFGRTR, LORSA_EL1),
22
+ DO_BIT(HFGRTR, MAIR_EL1),
23
+ DO_BIT(HFGRTR, MIDR_EL1),
24
+ DO_BIT(HFGRTR, MPIDR_EL1),
25
+ DO_BIT(HFGRTR, PAR_EL1),
26
+ DO_BIT(HFGRTR, REVIDR_EL1),
27
+ DO_BIT(HFGRTR, SCTLR_EL1),
28
+ DO_BIT(HFGRTR, SCXTNUM_EL1),
29
+ DO_BIT(HFGRTR, SCXTNUM_EL0),
30
+ DO_BIT(HFGRTR, TCR_EL1),
31
+ DO_BIT(HFGRTR, TPIDR_EL1),
32
+ DO_BIT(HFGRTR, TPIDRRO_EL0),
33
+ DO_BIT(HFGRTR, TPIDR_EL0),
34
} FGTBit;
35
36
#undef DO_BIT
37
diff --git a/target/arm/helper.c b/target/arm/helper.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/helper.c
40
+++ b/target/arm/helper.c
41
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
42
{ .name = "MAIR_EL1", .state = ARM_CP_STATE_AA64,
43
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 2, .opc2 = 0,
44
.access = PL1_RW, .accessfn = access_tvm_trvm,
45
+ .fgt = FGT_MAIR_EL1,
46
.fieldoffset = offsetof(CPUARMState, cp15.mair_el[1]),
47
.resetvalue = 0 },
48
{ .name = "MAIR_EL3", .state = ARM_CP_STATE_AA64,
49
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6k_cp_reginfo[] = {
50
{ .name = "TPIDR_EL0", .state = ARM_CP_STATE_AA64,
51
.opc0 = 3, .opc1 = 3, .opc2 = 2, .crn = 13, .crm = 0,
52
.access = PL0_RW,
53
+ .fgt = FGT_TPIDR_EL0,
54
.fieldoffset = offsetof(CPUARMState, cp15.tpidr_el[0]), .resetvalue = 0 },
55
{ .name = "TPIDRURW", .cp = 15, .crn = 13, .crm = 0, .opc1 = 0, .opc2 = 2,
56
.access = PL0_RW,
57
+ .fgt = FGT_TPIDR_EL0,
58
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidrurw_s),
59
offsetoflow32(CPUARMState, cp15.tpidrurw_ns) },
60
.resetfn = arm_cp_reset_ignore },
61
{ .name = "TPIDRRO_EL0", .state = ARM_CP_STATE_AA64,
62
.opc0 = 3, .opc1 = 3, .opc2 = 3, .crn = 13, .crm = 0,
63
.access = PL0_R | PL1_W,
64
+ .fgt = FGT_TPIDRRO_EL0,
65
.fieldoffset = offsetof(CPUARMState, cp15.tpidrro_el[0]),
66
.resetvalue = 0},
67
{ .name = "TPIDRURO", .cp = 15, .crn = 13, .crm = 0, .opc1 = 0, .opc2 = 3,
68
.access = PL0_R | PL1_W,
69
+ .fgt = FGT_TPIDRRO_EL0,
70
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidruro_s),
71
offsetoflow32(CPUARMState, cp15.tpidruro_ns) },
72
.resetfn = arm_cp_reset_ignore },
73
{ .name = "TPIDR_EL1", .state = ARM_CP_STATE_AA64,
74
.opc0 = 3, .opc1 = 0, .opc2 = 4, .crn = 13, .crm = 0,
75
.access = PL1_RW,
76
+ .fgt = FGT_TPIDR_EL1,
77
.fieldoffset = offsetof(CPUARMState, cp15.tpidr_el[1]), .resetvalue = 0 },
78
{ .name = "TPIDRPRW", .opc1 = 0, .cp = 15, .crn = 13, .crm = 0, .opc2 = 4,
79
.access = PL1_RW,
80
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
81
{ .name = "TCR_EL1", .state = ARM_CP_STATE_AA64,
82
.opc0 = 3, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 2,
83
.access = PL1_RW, .accessfn = access_tvm_trvm,
84
+ .fgt = FGT_TCR_EL1,
85
.writefn = vmsa_tcr_el12_write,
86
.raw_writefn = raw_write,
87
.resetvalue = 0,
88
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
89
.type = ARM_CP_ALIAS,
90
.opc0 = 3, .opc1 = 0, .crn = 7, .crm = 4, .opc2 = 0,
91
.access = PL1_RW, .resetvalue = 0,
92
+ .fgt = FGT_PAR_EL1,
93
.fieldoffset = offsetof(CPUARMState, cp15.par_el[1]),
94
.writefn = par_write },
95
#endif
96
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo scxtnum_reginfo[] = {
97
{ .name = "SCXTNUM_EL0", .state = ARM_CP_STATE_AA64,
98
.opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 7,
99
.access = PL0_RW, .accessfn = access_scxtnum,
100
+ .fgt = FGT_SCXTNUM_EL0,
101
.fieldoffset = offsetof(CPUARMState, scxtnum_el[0]) },
102
{ .name = "SCXTNUM_EL1", .state = ARM_CP_STATE_AA64,
103
.opc0 = 3, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 7,
104
.access = PL1_RW, .accessfn = access_scxtnum,
105
+ .fgt = FGT_SCXTNUM_EL1,
106
.fieldoffset = offsetof(CPUARMState, scxtnum_el[1]) },
107
{ .name = "SCXTNUM_EL2", .state = ARM_CP_STATE_AA64,
108
.opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 7,
109
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
110
{ .name = "MIDR_EL1", .state = ARM_CP_STATE_BOTH,
111
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 0, .opc2 = 0,
112
.access = PL1_R, .type = ARM_CP_NO_RAW, .resetvalue = cpu->midr,
113
+ .fgt = FGT_MIDR_EL1,
114
.fieldoffset = offsetof(CPUARMState, cp15.c0_cpuid),
115
.readfn = midr_read },
116
/* crn = 0 op1 = 0 crm = 0 op2 = 7 : AArch32 aliases of MIDR */
117
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
118
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 0, .opc2 = 6,
119
.access = PL1_R,
120
.accessfn = access_aa64_tid1,
121
+ .fgt = FGT_REVIDR_EL1,
122
.type = ARM_CP_CONST, .resetvalue = cpu->revidr },
123
};
124
ARMCPRegInfo id_v8_midr_alias_cp_reginfo = {
125
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
126
ARMCPRegInfo mpidr_cp_reginfo[] = {
127
{ .name = "MPIDR_EL1", .state = ARM_CP_STATE_BOTH,
128
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 5,
129
+ .fgt = FGT_MPIDR_EL1,
130
.access = PL1_R, .readfn = mpidr_read, .type = ARM_CP_NO_RAW },
131
};
132
#ifdef CONFIG_USER_ONLY
133
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
134
.name = "SCTLR", .state = ARM_CP_STATE_BOTH,
135
.opc0 = 3, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 0,
136
.access = PL1_RW, .accessfn = access_tvm_trvm,
137
+ .fgt = FGT_SCTLR_EL1,
138
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.sctlr_s),
139
offsetof(CPUARMState, cp15.sctlr_ns) },
140
.writefn = sctlr_write, .resetvalue = cpu->reset_sctlr,
141
--
142
2.34.1
Mark up the sysreg definitions for the registers trapped
by HFGRTR/HFGWTR bits 36..63.

Of these, some correspond to RAS registers which we implement as
always-UNDEF: these don't need any extra handling for FGT because the
UNDEF-to-EL1 always takes priority over any theoretical
FGT-trap-to-EL2.

Bit 50 (NACCDATA_EL1) is for the ACCDATA_EL1 register which is part
of the FEAT_LS64_ACCDATA feature which we don't yet implement.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-14-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-14-peter.maydell@linaro.org
---
target/arm/cpregs.h | 7 +++++++
hw/intc/arm_gicv3_cpuif.c | 2 ++
target/arm/helper.c | 10 ++++++++++
3 files changed, 19 insertions(+)
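
The "UNDEF wins over FGT" reasoning above can be pictured as the following
ordering sketch. The helper names are invented for illustration; this is
not the actual QEMU access-check code.

    /* Illustrative ordering sketch only; invented helper names. */
    if (!register_is_implemented(ri)) {
        take_undef_at_el1();           /* wins over any fine-grained trap */
    } else if (fgt_bit_traps(hfgrtr_el2, ri->fgt)) {
        take_fgt_trap_to_el2();
    }

This is why the always-UNDEF RAS registers need no .fgt markup at all.
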
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
24
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/cpregs.h
26
+++ b/target/arm/cpregs.h
27
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
28
DO_BIT(HFGRTR, TPIDR_EL1),
29
DO_BIT(HFGRTR, TPIDRRO_EL0),
30
DO_BIT(HFGRTR, TPIDR_EL0),
31
+ DO_BIT(HFGRTR, TTBR0_EL1),
32
+ DO_BIT(HFGRTR, TTBR1_EL1),
33
+ DO_BIT(HFGRTR, VBAR_EL1),
34
+ DO_BIT(HFGRTR, ICC_IGRPENN_EL1),
35
+ DO_BIT(HFGRTR, ERRIDR_EL1),
36
+ DO_REV_BIT(HFGRTR, NSMPRI_EL1),
37
+ DO_REV_BIT(HFGRTR, NTPIDR2_EL0),
38
} FGTBit;
39
40
#undef DO_BIT
41
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/hw/intc/arm_gicv3_cpuif.c
44
+++ b/hw/intc/arm_gicv3_cpuif.c
45
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
46
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 12, .opc2 = 6,
47
.type = ARM_CP_IO | ARM_CP_NO_RAW,
48
.access = PL1_RW, .accessfn = gicv3_fiq_access,
49
+ .fgt = FGT_ICC_IGRPENN_EL1,
50
.readfn = icc_igrpen_read,
51
.writefn = icc_igrpen_write,
52
},
53
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
54
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 12, .opc2 = 7,
55
.type = ARM_CP_IO | ARM_CP_NO_RAW,
56
.access = PL1_RW, .accessfn = gicv3_irq_access,
57
+ .fgt = FGT_ICC_IGRPENN_EL1,
58
.readfn = icc_igrpen_read,
59
.writefn = icc_igrpen_write,
60
},
61
diff --git a/target/arm/helper.c b/target/arm/helper.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/helper.c
64
+++ b/target/arm/helper.c
65
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
66
{ .name = "TTBR0_EL1", .state = ARM_CP_STATE_BOTH,
67
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 0,
68
.access = PL1_RW, .accessfn = access_tvm_trvm,
69
+ .fgt = FGT_TTBR0_EL1,
70
.writefn = vmsa_ttbr_write, .resetvalue = 0,
71
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr0_s),
72
offsetof(CPUARMState, cp15.ttbr0_ns) } },
73
{ .name = "TTBR1_EL1", .state = ARM_CP_STATE_BOTH,
74
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 1,
75
.access = PL1_RW, .accessfn = access_tvm_trvm,
76
+ .fgt = FGT_TTBR1_EL1,
77
.writefn = vmsa_ttbr_write, .resetvalue = 0,
78
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr1_s),
79
offsetof(CPUARMState, cp15.ttbr1_ns) } },
80
@@ -XXX,XX +XXX,XX @@ static void disr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t val)
81
* ERRSELR_EL1
82
* may generate UNDEFINED, which is the effect we get by not
83
* listing them at all.
84
+ *
85
+ * These registers have fine-grained trap bits, but UNDEF-to-EL1
86
+ * is higher priority than FGT-to-EL2 so we do not need to list them
87
+ * in order to check for an FGT.
88
*/
89
static const ARMCPRegInfo minimal_ras_reginfo[] = {
90
{ .name = "DISR_EL1", .state = ARM_CP_STATE_BOTH,
91
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo minimal_ras_reginfo[] = {
92
{ .name = "ERRIDR_EL1", .state = ARM_CP_STATE_BOTH,
93
.opc0 = 3, .opc1 = 0, .crn = 5, .crm = 3, .opc2 = 0,
94
.access = PL1_R, .accessfn = access_terr,
95
+ .fgt = FGT_ERRIDR_EL1,
96
.type = ARM_CP_CONST, .resetvalue = 0 },
97
{ .name = "VDISR_EL2", .state = ARM_CP_STATE_BOTH,
98
.opc0 = 3, .opc1 = 4, .crn = 12, .crm = 1, .opc2 = 1,
99
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo sme_reginfo[] = {
100
{ .name = "TPIDR2_EL0", .state = ARM_CP_STATE_AA64,
101
.opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 5,
102
.access = PL0_RW, .accessfn = access_tpidr2,
103
+ .fgt = FGT_NTPIDR2_EL0,
104
.fieldoffset = offsetof(CPUARMState, cp15.tpidr2_el0) },
105
{ .name = "SVCR", .state = ARM_CP_STATE_AA64,
106
.opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 2,
107
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo sme_reginfo[] = {
108
{ .name = "SMPRI_EL1", .state = ARM_CP_STATE_AA64,
109
.opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 4,
110
.access = PL1_RW, .accessfn = access_esm,
111
+ .fgt = FGT_NSMPRI_EL1,
112
.type = ARM_CP_CONST, .resetvalue = 0 },
113
{ .name = "SMPRIMAP_EL2", .state = ARM_CP_STATE_AA64,
114
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 5,
115
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
116
{ .name = "VBAR", .state = ARM_CP_STATE_BOTH,
117
.opc0 = 3, .crn = 12, .crm = 0, .opc1 = 0, .opc2 = 0,
118
.access = PL1_RW, .writefn = vbar_write,
119
+ .fgt = FGT_VBAR_EL1,
120
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.vbar_s),
121
offsetof(CPUARMState, cp15.vbar_ns) },
122
.resetvalue = 0 },
123
--
124
2.34.1
Mark up the sysreg definitions for the registers trapped
by HDFGRTR/HDFGWTR bits 0..11. These cover various debug
related registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-15-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-15-peter.maydell@linaro.org
---
target/arm/cpregs.h | 12 ++++++++++++
target/arm/debug_helper.c | 11 +++++++++++
2 files changed, 23 insertions(+)
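
The DBGBCRN_EL1/DBGBVRN_EL1/DBGWCRN_EL1/DBGWVRN_EL1 bits each cover the
whole numbered family of breakpoint and watchpoint registers, which is why
the same FGT_* value is applied to every per-index definition created in
the define_debug_regs() loop. Condensed from the hunks below, with most
fields elided:

    /* Condensed from the hunks below; one trap bit per register family. */
    { .name = dbgbvr_el1_name, .fgt = FGT_DBGBVRN_EL1, /* ... */ },
    { .name = dbgbcr_el1_name, .fgt = FGT_DBGBCRN_EL1, /* ... */ },
    { .name = dbgwvr_el1_name, .fgt = FGT_DBGWVRN_EL1, /* ... */ },
    { .name = dbgwcr_el1_name, .fgt = FGT_DBGWCRN_EL1, /* ... */ },
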
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpregs.h
18
+++ b/target/arm/cpregs.h
19
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
20
DO_BIT(HFGRTR, ERRIDR_EL1),
21
DO_REV_BIT(HFGRTR, NSMPRI_EL1),
22
DO_REV_BIT(HFGRTR, NTPIDR2_EL0),
23
+
24
+ /* Trap bits in HDFGRTR_EL2 / HDFGWTR_EL2, starting from bit 0. */
25
+ DO_BIT(HDFGRTR, DBGBCRN_EL1),
26
+ DO_BIT(HDFGRTR, DBGBVRN_EL1),
27
+ DO_BIT(HDFGRTR, DBGWCRN_EL1),
28
+ DO_BIT(HDFGRTR, DBGWVRN_EL1),
29
+ DO_BIT(HDFGRTR, MDSCR_EL1),
30
+ DO_BIT(HDFGRTR, DBGCLAIM),
31
+ DO_BIT(HDFGWTR, OSLAR_EL1),
32
+ DO_BIT(HDFGRTR, OSLSR_EL1),
33
+ DO_BIT(HDFGRTR, OSECCR_EL1),
34
+ DO_BIT(HDFGRTR, OSDLR_EL1),
35
} FGTBit;
36
37
#undef DO_BIT
38
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/debug_helper.c
41
+++ b/target/arm/debug_helper.c
42
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
43
{ .name = "MDSCR_EL1", .state = ARM_CP_STATE_BOTH,
44
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2,
45
.access = PL1_RW, .accessfn = access_tda,
46
+ .fgt = FGT_MDSCR_EL1,
47
.fieldoffset = offsetof(CPUARMState, cp15.mdscr_el1),
48
.resetvalue = 0 },
49
/*
50
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
51
{ .name = "OSECCR_EL1", .state = ARM_CP_STATE_BOTH, .cp = 14,
52
.opc0 = 2, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 2,
53
.access = PL1_RW, .accessfn = access_tda,
54
+ .fgt = FGT_OSECCR_EL1,
55
.type = ARM_CP_CONST, .resetvalue = 0 },
56
/*
57
* DBGDSCRint[15,12,5:2] map to MDSCR_EL1[15,12,5:2]. Map all bits as
58
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
59
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 4,
60
.access = PL1_W, .type = ARM_CP_NO_RAW,
61
.accessfn = access_tdosa,
62
+ .fgt = FGT_OSLAR_EL1,
63
.writefn = oslar_write },
64
{ .name = "OSLSR_EL1", .state = ARM_CP_STATE_BOTH,
65
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 4,
66
.access = PL1_R, .resetvalue = 10,
67
.accessfn = access_tdosa,
68
+ .fgt = FGT_OSLSR_EL1,
69
.fieldoffset = offsetof(CPUARMState, cp15.oslsr_el1) },
70
/* Dummy OSDLR_EL1: 32-bit Linux will read this */
71
{ .name = "OSDLR_EL1", .state = ARM_CP_STATE_BOTH,
72
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 3, .opc2 = 4,
73
.access = PL1_RW, .accessfn = access_tdosa,
74
+ .fgt = FGT_OSDLR_EL1,
75
.writefn = osdlr_write,
76
.fieldoffset = offsetof(CPUARMState, cp15.osdlr_el1) },
77
/*
78
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
79
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 6,
80
.type = ARM_CP_ALIAS,
81
.access = PL1_RW, .accessfn = access_tda,
82
+ .fgt = FGT_DBGCLAIM,
83
.writefn = dbgclaimset_write, .readfn = dbgclaimset_read },
84
{ .name = "DBGCLAIMCLR_EL1", .state = ARM_CP_STATE_BOTH,
85
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 6,
86
.access = PL1_RW, .accessfn = access_tda,
87
+ .fgt = FGT_DBGCLAIM,
88
.writefn = dbgclaimclr_write, .raw_writefn = raw_write,
89
.fieldoffset = offsetof(CPUARMState, cp15.dbgclaim) },
90
};
91
@@ -XXX,XX +XXX,XX @@ void define_debug_regs(ARMCPU *cpu)
92
{ .name = dbgbvr_el1_name, .state = ARM_CP_STATE_BOTH,
93
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 4,
94
.access = PL1_RW, .accessfn = access_tda,
95
+ .fgt = FGT_DBGBVRN_EL1,
96
.fieldoffset = offsetof(CPUARMState, cp15.dbgbvr[i]),
97
.writefn = dbgbvr_write, .raw_writefn = raw_write
98
},
99
{ .name = dbgbcr_el1_name, .state = ARM_CP_STATE_BOTH,
100
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 5,
101
.access = PL1_RW, .accessfn = access_tda,
102
+ .fgt = FGT_DBGBCRN_EL1,
103
.fieldoffset = offsetof(CPUARMState, cp15.dbgbcr[i]),
104
.writefn = dbgbcr_write, .raw_writefn = raw_write
105
},
106
@@ -XXX,XX +XXX,XX @@ void define_debug_regs(ARMCPU *cpu)
107
{ .name = dbgwvr_el1_name, .state = ARM_CP_STATE_BOTH,
108
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 6,
109
.access = PL1_RW, .accessfn = access_tda,
110
+ .fgt = FGT_DBGWVRN_EL1,
111
.fieldoffset = offsetof(CPUARMState, cp15.dbgwvr[i]),
112
.writefn = dbgwvr_write, .raw_writefn = raw_write
113
},
114
{ .name = dbgwcr_el1_name, .state = ARM_CP_STATE_BOTH,
115
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 7,
116
.access = PL1_RW, .accessfn = access_tda,
117
+ .fgt = FGT_DBGWCRN_EL1,
118
.fieldoffset = offsetof(CPUARMState, cp15.dbgwcr[i]),
119
.writefn = dbgwcr_write, .raw_writefn = raw_write
120
},
121
--
122
2.34.1
Mark up the sysreg definitions for the registers trapped
by HDFGRTR/HDFGWTR bits 12..x.

Bits 12..22 and bit 58 are for PMU registers.

The remaining bits in HDFGRTR/HDFGWTR are for traps on
registers that are part of features we don't implement:

Bits 23..32 and 63 : FEAT_SPE
Bits 33..48 : FEAT_ETE
Bits 50..56 : FEAT_TRBE
Bits 59..61 : FEAT_BRBE
Bit 62 : FEAT_SPEv1p2.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-16-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-16-peter.maydell@linaro.org
---
target/arm/cpregs.h | 12 ++++++++++++
target/arm/helper.c | 37 +++++++++++++++++++++++++++++++++++++
2 files changed, 49 insertions(+)
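
A pattern visible in the PMU hunks below: the trap bits are defined per
facility rather than per set/clear register, so the set and clear views of
a register (in both their AArch32 and AArch64 forms) carry the same FGT_*
marking. Condensed illustration, with most fields elided:

    /* Condensed from the hunks below; set and clear views share a bit. */
    { .name = "PMCNTENSET_EL0", .fgt = FGT_PMCNTEN, /* ... */ },
    { .name = "PMCNTENCLR_EL0", .fgt = FGT_PMCNTEN, /* ... */ },
    { .name = "PMINTENSET_EL1", .fgt = FGT_PMINTEN, /* ... */ },
    { .name = "PMINTENCLR_EL1", .fgt = FGT_PMINTEN, /* ... */ },
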
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/cpregs.h
28
+++ b/target/arm/cpregs.h
29
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
30
DO_BIT(HDFGRTR, OSLSR_EL1),
31
DO_BIT(HDFGRTR, OSECCR_EL1),
32
DO_BIT(HDFGRTR, OSDLR_EL1),
33
+ DO_BIT(HDFGRTR, PMEVCNTRN_EL0),
34
+ DO_BIT(HDFGRTR, PMEVTYPERN_EL0),
35
+ DO_BIT(HDFGRTR, PMCCFILTR_EL0),
36
+ DO_BIT(HDFGRTR, PMCCNTR_EL0),
37
+ DO_BIT(HDFGRTR, PMCNTEN),
38
+ DO_BIT(HDFGRTR, PMINTEN),
39
+ DO_BIT(HDFGRTR, PMOVS),
40
+ DO_BIT(HDFGRTR, PMSELR_EL0),
41
+ DO_BIT(HDFGWTR, PMSWINC_EL0),
42
+ DO_BIT(HDFGWTR, PMCR_EL0),
43
+ DO_BIT(HDFGRTR, PMMIR_EL1),
44
+ DO_BIT(HDFGRTR, PMCEIDN_EL0),
45
} FGTBit;
46
47
#undef DO_BIT
48
diff --git a/target/arm/helper.c b/target/arm/helper.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/helper.c
51
+++ b/target/arm/helper.c
52
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
53
.fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmcnten),
54
.writefn = pmcntenset_write,
55
.accessfn = pmreg_access,
56
+ .fgt = FGT_PMCNTEN,
57
.raw_writefn = raw_write },
58
{ .name = "PMCNTENSET_EL0", .state = ARM_CP_STATE_AA64, .type = ARM_CP_IO,
59
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 1,
60
.access = PL0_RW, .accessfn = pmreg_access,
61
+ .fgt = FGT_PMCNTEN,
62
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmcnten), .resetvalue = 0,
63
.writefn = pmcntenset_write, .raw_writefn = raw_write },
64
{ .name = "PMCNTENCLR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 2,
65
.access = PL0_RW,
66
.fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmcnten),
67
.accessfn = pmreg_access,
68
+ .fgt = FGT_PMCNTEN,
69
.writefn = pmcntenclr_write,
70
.type = ARM_CP_ALIAS | ARM_CP_IO },
71
{ .name = "PMCNTENCLR_EL0", .state = ARM_CP_STATE_AA64,
72
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 2,
73
.access = PL0_RW, .accessfn = pmreg_access,
74
+ .fgt = FGT_PMCNTEN,
75
.type = ARM_CP_ALIAS | ARM_CP_IO,
76
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmcnten),
77
.writefn = pmcntenclr_write },
78
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
79
.access = PL0_RW, .type = ARM_CP_IO,
80
.fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmovsr),
81
.accessfn = pmreg_access,
82
+ .fgt = FGT_PMOVS,
83
.writefn = pmovsr_write,
84
.raw_writefn = raw_write },
85
{ .name = "PMOVSCLR_EL0", .state = ARM_CP_STATE_AA64,
86
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 3,
87
.access = PL0_RW, .accessfn = pmreg_access,
88
+ .fgt = FGT_PMOVS,
89
.type = ARM_CP_ALIAS | ARM_CP_IO,
90
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmovsr),
91
.writefn = pmovsr_write,
92
.raw_writefn = raw_write },
93
{ .name = "PMSWINC", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 4,
94
.access = PL0_W, .accessfn = pmreg_access_swinc,
95
+ .fgt = FGT_PMSWINC_EL0,
96
.type = ARM_CP_NO_RAW | ARM_CP_IO,
97
.writefn = pmswinc_write },
98
{ .name = "PMSWINC_EL0", .state = ARM_CP_STATE_AA64,
99
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 4,
100
.access = PL0_W, .accessfn = pmreg_access_swinc,
101
+ .fgt = FGT_PMSWINC_EL0,
102
.type = ARM_CP_NO_RAW | ARM_CP_IO,
103
.writefn = pmswinc_write },
104
{ .name = "PMSELR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 5,
105
.access = PL0_RW, .type = ARM_CP_ALIAS,
106
+ .fgt = FGT_PMSELR_EL0,
107
.fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmselr),
108
.accessfn = pmreg_access_selr, .writefn = pmselr_write,
109
.raw_writefn = raw_write},
110
{ .name = "PMSELR_EL0", .state = ARM_CP_STATE_AA64,
111
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 5,
112
.access = PL0_RW, .accessfn = pmreg_access_selr,
113
+ .fgt = FGT_PMSELR_EL0,
114
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmselr),
115
.writefn = pmselr_write, .raw_writefn = raw_write, },
116
{ .name = "PMCCNTR", .cp = 15, .crn = 9, .crm = 13, .opc1 = 0, .opc2 = 0,
117
.access = PL0_RW, .resetvalue = 0, .type = ARM_CP_ALIAS | ARM_CP_IO,
118
+ .fgt = FGT_PMCCNTR_EL0,
119
.readfn = pmccntr_read, .writefn = pmccntr_write32,
120
.accessfn = pmreg_access_ccntr },
121
{ .name = "PMCCNTR_EL0", .state = ARM_CP_STATE_AA64,
122
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 13, .opc2 = 0,
123
.access = PL0_RW, .accessfn = pmreg_access_ccntr,
124
+ .fgt = FGT_PMCCNTR_EL0,
125
.type = ARM_CP_IO,
126
.fieldoffset = offsetof(CPUARMState, cp15.c15_ccnt),
127
.readfn = pmccntr_read, .writefn = pmccntr_write,
128
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
129
{ .name = "PMCCFILTR", .cp = 15, .opc1 = 0, .crn = 14, .crm = 15, .opc2 = 7,
130
.writefn = pmccfiltr_write_a32, .readfn = pmccfiltr_read_a32,
131
.access = PL0_RW, .accessfn = pmreg_access,
132
+ .fgt = FGT_PMCCFILTR_EL0,
133
.type = ARM_CP_ALIAS | ARM_CP_IO,
134
.resetvalue = 0, },
135
{ .name = "PMCCFILTR_EL0", .state = ARM_CP_STATE_AA64,
136
.opc0 = 3, .opc1 = 3, .crn = 14, .crm = 15, .opc2 = 7,
137
.writefn = pmccfiltr_write, .raw_writefn = raw_write,
138
.access = PL0_RW, .accessfn = pmreg_access,
139
+ .fgt = FGT_PMCCFILTR_EL0,
140
.type = ARM_CP_IO,
141
.fieldoffset = offsetof(CPUARMState, cp15.pmccfiltr_el0),
142
.resetvalue = 0, },
143
{ .name = "PMXEVTYPER", .cp = 15, .crn = 9, .crm = 13, .opc1 = 0, .opc2 = 1,
144
.access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO,
145
.accessfn = pmreg_access,
146
+ .fgt = FGT_PMEVTYPERN_EL0,
147
.writefn = pmxevtyper_write, .readfn = pmxevtyper_read },
148
{ .name = "PMXEVTYPER_EL0", .state = ARM_CP_STATE_AA64,
149
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 13, .opc2 = 1,
150
.access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO,
151
.accessfn = pmreg_access,
152
+ .fgt = FGT_PMEVTYPERN_EL0,
153
.writefn = pmxevtyper_write, .readfn = pmxevtyper_read },
154
{ .name = "PMXEVCNTR", .cp = 15, .crn = 9, .crm = 13, .opc1 = 0, .opc2 = 2,
155
.access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO,
156
.accessfn = pmreg_access_xevcntr,
157
+ .fgt = FGT_PMEVCNTRN_EL0,
158
.writefn = pmxevcntr_write, .readfn = pmxevcntr_read },
159
{ .name = "PMXEVCNTR_EL0", .state = ARM_CP_STATE_AA64,
160
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 13, .opc2 = 2,
161
.access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO,
162
.accessfn = pmreg_access_xevcntr,
163
+ .fgt = FGT_PMEVCNTRN_EL0,
164
.writefn = pmxevcntr_write, .readfn = pmxevcntr_read },
165
{ .name = "PMUSERENR", .cp = 15, .crn = 9, .crm = 14, .opc1 = 0, .opc2 = 0,
166
.access = PL0_R | PL1_RW, .accessfn = access_tpm,
167
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
168
.writefn = pmuserenr_write, .raw_writefn = raw_write },
169
{ .name = "PMINTENSET", .cp = 15, .crn = 9, .crm = 14, .opc1 = 0, .opc2 = 1,
170
.access = PL1_RW, .accessfn = access_tpm,
171
+ .fgt = FGT_PMINTEN,
172
.type = ARM_CP_ALIAS | ARM_CP_IO,
173
.fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pminten),
174
.resetvalue = 0,
175
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
176
{ .name = "PMINTENSET_EL1", .state = ARM_CP_STATE_AA64,
177
.opc0 = 3, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 1,
178
.access = PL1_RW, .accessfn = access_tpm,
179
+ .fgt = FGT_PMINTEN,
180
.type = ARM_CP_IO,
181
.fieldoffset = offsetof(CPUARMState, cp15.c9_pminten),
182
.writefn = pmintenset_write, .raw_writefn = raw_write,
183
.resetvalue = 0x0 },
184
{ .name = "PMINTENCLR", .cp = 15, .crn = 9, .crm = 14, .opc1 = 0, .opc2 = 2,
185
.access = PL1_RW, .accessfn = access_tpm,
186
+ .fgt = FGT_PMINTEN,
187
.type = ARM_CP_ALIAS | ARM_CP_IO | ARM_CP_NO_RAW,
188
.fieldoffset = offsetof(CPUARMState, cp15.c9_pminten),
189
.writefn = pmintenclr_write, },
190
{ .name = "PMINTENCLR_EL1", .state = ARM_CP_STATE_AA64,
191
.opc0 = 3, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 2,
192
.access = PL1_RW, .accessfn = access_tpm,
193
+ .fgt = FGT_PMINTEN,
194
.type = ARM_CP_ALIAS | ARM_CP_IO | ARM_CP_NO_RAW,
195
.fieldoffset = offsetof(CPUARMState, cp15.c9_pminten),
196
.writefn = pmintenclr_write },
197
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmovsset_cp_reginfo[] = {
198
/* PMOVSSET is not implemented in v7 before v7ve */
199
{ .name = "PMOVSSET", .cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 3,
200
.access = PL0_RW, .accessfn = pmreg_access,
201
+ .fgt = FGT_PMOVS,
202
.type = ARM_CP_ALIAS | ARM_CP_IO,
203
.fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmovsr),
204
.writefn = pmovsset_write,
205
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmovsset_cp_reginfo[] = {
206
{ .name = "PMOVSSET_EL0", .state = ARM_CP_STATE_AA64,
207
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 14, .opc2 = 3,
208
.access = PL0_RW, .accessfn = pmreg_access,
209
+ .fgt = FGT_PMOVS,
210
.type = ARM_CP_ALIAS | ARM_CP_IO,
211
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmovsr),
212
.writefn = pmovsset_write,
213
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
214
ARMCPRegInfo pmcr = {
215
.name = "PMCR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 0,
216
.access = PL0_RW,
217
+ .fgt = FGT_PMCR_EL0,
218
.type = ARM_CP_IO | ARM_CP_ALIAS,
219
.fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmcr),
220
.accessfn = pmreg_access, .writefn = pmcr_write,
221
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
222
.name = "PMCR_EL0", .state = ARM_CP_STATE_AA64,
223
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 0,
224
.access = PL0_RW, .accessfn = pmreg_access,
225
+ .fgt = FGT_PMCR_EL0,
226
.type = ARM_CP_IO,
227
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmcr),
228
.resetvalue = cpu->isar.reset_pmcr_el0,
229
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
230
{ .name = pmevcntr_name, .cp = 15, .crn = 14,
231
.crm = 8 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7,
232
.access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS,
233
+ .fgt = FGT_PMEVCNTRN_EL0,
234
.readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
235
.accessfn = pmreg_access_xevcntr },
236
{ .name = pmevcntr_el0_name, .state = ARM_CP_STATE_AA64,
237
.opc0 = 3, .opc1 = 3, .crn = 14, .crm = 8 | (3 & (i >> 3)),
238
.opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access_xevcntr,
239
.type = ARM_CP_IO,
240
+ .fgt = FGT_PMEVCNTRN_EL0,
241
.readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
242
.raw_readfn = pmevcntr_rawread,
243
.raw_writefn = pmevcntr_rawwrite },
244
{ .name = pmevtyper_name, .cp = 15, .crn = 14,
245
.crm = 12 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7,
246
.access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS,
247
+ .fgt = FGT_PMEVTYPERN_EL0,
248
.readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn,
249
.accessfn = pmreg_access },
250
{ .name = pmevtyper_el0_name, .state = ARM_CP_STATE_AA64,
251
.opc0 = 3, .opc1 = 3, .crn = 14, .crm = 12 | (3 & (i >> 3)),
252
.opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access,
253
+ .fgt = FGT_PMEVTYPERN_EL0,
254
.type = ARM_CP_IO,
255
.readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn,
256
.raw_writefn = pmevtyper_rawwrite },
257
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
258
{ .name = "PMCEID2", .state = ARM_CP_STATE_AA32,
259
.cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 4,
260
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
261
+ .fgt = FGT_PMCEIDN_EL0,
262
.resetvalue = extract64(cpu->pmceid0, 32, 32) },
263
{ .name = "PMCEID3", .state = ARM_CP_STATE_AA32,
264
.cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 5,
265
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
266
+ .fgt = FGT_PMCEIDN_EL0,
267
.resetvalue = extract64(cpu->pmceid1, 32, 32) },
268
};
269
define_arm_cp_regs(cpu, v81_pmu_regs);
270
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
271
.name = "PMMIR_EL1", .state = ARM_CP_STATE_BOTH,
272
.opc0 = 3, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 6,
273
.access = PL1_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
274
+ .fgt = FGT_PMMIR_EL1,
275
.resetvalue = 0
276
};
277
define_one_arm_cp_reg(cpu, &v84_pmmir);
278
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
279
{ .name = "PMCEID0", .state = ARM_CP_STATE_AA32,
280
.cp = 15, .opc1 = 0, .crn = 9, .crm = 12, .opc2 = 6,
281
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
282
+ .fgt = FGT_PMCEIDN_EL0,
283
.resetvalue = extract64(cpu->pmceid0, 0, 32) },
284
{ .name = "PMCEID0_EL0", .state = ARM_CP_STATE_AA64,
285
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 6,
286
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
287
+ .fgt = FGT_PMCEIDN_EL0,
288
.resetvalue = cpu->pmceid0 },
289
{ .name = "PMCEID1", .state = ARM_CP_STATE_AA32,
290
.cp = 15, .opc1 = 0, .crn = 9, .crm = 12, .opc2 = 7,
291
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
292
+ .fgt = FGT_PMCEIDN_EL0,
293
.resetvalue = extract64(cpu->pmceid1, 0, 32) },
294
{ .name = "PMCEID1_EL0", .state = ARM_CP_STATE_AA64,
295
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 7,
296
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
297
+ .fgt = FGT_PMCEIDN_EL0,
298
.resetvalue = cpu->pmceid1 },
299
};
300
#ifdef CONFIG_USER_ONLY
301
--
302
2.34.1
Mark up the sysreg definitions for the system instructions
trapped by HFGITR bits 0..11. These bits cover various
cache maintenance operations.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-17-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-17-peter.maydell@linaro.org
---
target/arm/cpregs.h | 14 ++++++++++++++
target/arm/helper.c | 28 ++++++++++++++++++++++++++++
2 files changed, 42 insertions(+)
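
The hunks below also show that the MTE (allocation-tag) variants of these
cache maintenance operations reuse the trap bit of the corresponding
untagged operation, e.g. DC IGVAC and DC IGDVAC get the same marking as
DC IVAC. Condensed illustration, with most fields elided:

    /* Condensed from the mte_reginfo hunks below; tagged variants
     * reuse the untagged operation's trap bit. */
    { .name = "DC_IVAC",   .fgt = FGT_DCIVAC, /* ... */ },
    { .name = "DC_IGVAC",  .fgt = FGT_DCIVAC, /* ... */ },
    { .name = "DC_IGDVAC", .fgt = FGT_DCIVAC, /* ... */ },
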
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpregs.h
18
+++ b/target/arm/cpregs.h
19
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
20
DO_BIT(HDFGWTR, PMCR_EL0),
21
DO_BIT(HDFGRTR, PMMIR_EL1),
22
DO_BIT(HDFGRTR, PMCEIDN_EL0),
23
+
24
+ /* Trap bits in HFGITR_EL2, starting from bit 0 */
25
+ DO_BIT(HFGITR, ICIALLUIS),
26
+ DO_BIT(HFGITR, ICIALLU),
27
+ DO_BIT(HFGITR, ICIVAU),
28
+ DO_BIT(HFGITR, DCIVAC),
29
+ DO_BIT(HFGITR, DCISW),
30
+ DO_BIT(HFGITR, DCCSW),
31
+ DO_BIT(HFGITR, DCCISW),
32
+ DO_BIT(HFGITR, DCCVAU),
33
+ DO_BIT(HFGITR, DCCVAP),
34
+ DO_BIT(HFGITR, DCCVADP),
35
+ DO_BIT(HFGITR, DCCIVAC),
36
+ DO_BIT(HFGITR, DCZVA),
37
} FGTBit;
38
39
#undef DO_BIT
40
diff --git a/target/arm/helper.c b/target/arm/helper.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/helper.c
43
+++ b/target/arm/helper.c
44
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
45
#ifndef CONFIG_USER_ONLY
46
/* Avoid overhead of an access check that always passes in user-mode */
47
.accessfn = aa64_zva_access,
48
+ .fgt = FGT_DCZVA,
49
#endif
50
},
51
{ .name = "CURRENTEL", .state = ARM_CP_STATE_AA64,
52
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
53
{ .name = "IC_IALLUIS", .state = ARM_CP_STATE_AA64,
54
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0,
55
.access = PL1_W, .type = ARM_CP_NOP,
56
+ .fgt = FGT_ICIALLUIS,
57
.accessfn = access_ticab },
58
{ .name = "IC_IALLU", .state = ARM_CP_STATE_AA64,
59
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 0,
60
.access = PL1_W, .type = ARM_CP_NOP,
61
+ .fgt = FGT_ICIALLU,
62
.accessfn = access_tocu },
63
{ .name = "IC_IVAU", .state = ARM_CP_STATE_AA64,
64
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 5, .opc2 = 1,
65
.access = PL0_W, .type = ARM_CP_NOP,
66
+ .fgt = FGT_ICIVAU,
67
.accessfn = access_tocu },
68
{ .name = "DC_IVAC", .state = ARM_CP_STATE_AA64,
69
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 1,
70
.access = PL1_W, .accessfn = aa64_cacheop_poc_access,
71
+ .fgt = FGT_DCIVAC,
72
.type = ARM_CP_NOP },
73
{ .name = "DC_ISW", .state = ARM_CP_STATE_AA64,
74
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 2,
75
+ .fgt = FGT_DCISW,
76
.access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP },
77
{ .name = "DC_CVAC", .state = ARM_CP_STATE_AA64,
78
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 10, .opc2 = 1,
79
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
80
.accessfn = aa64_cacheop_poc_access },
81
{ .name = "DC_CSW", .state = ARM_CP_STATE_AA64,
82
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 2,
83
+ .fgt = FGT_DCCSW,
84
.access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP },
85
{ .name = "DC_CVAU", .state = ARM_CP_STATE_AA64,
86
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 11, .opc2 = 1,
87
.access = PL0_W, .type = ARM_CP_NOP,
88
+ .fgt = FGT_DCCVAU,
89
.accessfn = access_tocu },
90
{ .name = "DC_CIVAC", .state = ARM_CP_STATE_AA64,
91
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 14, .opc2 = 1,
92
.access = PL0_W, .type = ARM_CP_NOP,
93
+ .fgt = FGT_DCCIVAC,
94
.accessfn = aa64_cacheop_poc_access },
95
{ .name = "DC_CISW", .state = ARM_CP_STATE_AA64,
96
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 2,
97
+ .fgt = FGT_DCCISW,
98
.access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP },
99
/* TLBI operations */
100
{ .name = "TLBI_VMALLE1IS", .state = ARM_CP_STATE_AA64,
101
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dcpop_reg[] = {
102
{ .name = "DC_CVAP", .state = ARM_CP_STATE_AA64,
103
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 1,
104
.access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END,
105
+ .fgt = FGT_DCCVAP,
106
.accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn },
107
};
108
109
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dcpodp_reg[] = {
110
{ .name = "DC_CVADP", .state = ARM_CP_STATE_AA64,
111
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 13, .opc2 = 1,
112
.access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END,
113
+ .fgt = FGT_DCCVADP,
114
.accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn },
115
};
116
#endif /*CONFIG_USER_ONLY*/
117
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_reginfo[] = {
118
{ .name = "DC_IGVAC", .state = ARM_CP_STATE_AA64,
119
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 3,
120
.type = ARM_CP_NOP, .access = PL1_W,
121
+ .fgt = FGT_DCIVAC,
122
.accessfn = aa64_cacheop_poc_access },
123
{ .name = "DC_IGSW", .state = ARM_CP_STATE_AA64,
124
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 4,
125
+ .fgt = FGT_DCISW,
126
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
127
{ .name = "DC_IGDVAC", .state = ARM_CP_STATE_AA64,
128
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 5,
129
.type = ARM_CP_NOP, .access = PL1_W,
130
+ .fgt = FGT_DCIVAC,
131
.accessfn = aa64_cacheop_poc_access },
132
{ .name = "DC_IGDSW", .state = ARM_CP_STATE_AA64,
133
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 6,
134
+ .fgt = FGT_DCISW,
135
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
136
{ .name = "DC_CGSW", .state = ARM_CP_STATE_AA64,
137
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 4,
138
+ .fgt = FGT_DCCSW,
139
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
140
{ .name = "DC_CGDSW", .state = ARM_CP_STATE_AA64,
141
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 6,
142
+ .fgt = FGT_DCCSW,
143
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
144
{ .name = "DC_CIGSW", .state = ARM_CP_STATE_AA64,
145
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 4,
146
+ .fgt = FGT_DCCISW,
147
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
148
{ .name = "DC_CIGDSW", .state = ARM_CP_STATE_AA64,
149
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 6,
150
+ .fgt = FGT_DCCISW,
151
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
152
};
153
154
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
155
{ .name = "DC_CGVAP", .state = ARM_CP_STATE_AA64,
156
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 3,
157
.type = ARM_CP_NOP, .access = PL0_W,
158
+ .fgt = FGT_DCCVAP,
159
.accessfn = aa64_cacheop_poc_access },
160
{ .name = "DC_CGDVAP", .state = ARM_CP_STATE_AA64,
161
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 5,
162
.type = ARM_CP_NOP, .access = PL0_W,
163
+ .fgt = FGT_DCCVAP,
164
.accessfn = aa64_cacheop_poc_access },
165
{ .name = "DC_CGVADP", .state = ARM_CP_STATE_AA64,
166
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 13, .opc2 = 3,
167
.type = ARM_CP_NOP, .access = PL0_W,
168
+ .fgt = FGT_DCCVADP,
169
.accessfn = aa64_cacheop_poc_access },
170
{ .name = "DC_CGDVADP", .state = ARM_CP_STATE_AA64,
171
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 13, .opc2 = 5,
172
.type = ARM_CP_NOP, .access = PL0_W,
173
+ .fgt = FGT_DCCVADP,
174
.accessfn = aa64_cacheop_poc_access },
175
{ .name = "DC_CIGVAC", .state = ARM_CP_STATE_AA64,
176
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 14, .opc2 = 3,
177
.type = ARM_CP_NOP, .access = PL0_W,
178
+ .fgt = FGT_DCCIVAC,
179
.accessfn = aa64_cacheop_poc_access },
180
{ .name = "DC_CIGDVAC", .state = ARM_CP_STATE_AA64,
181
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 14, .opc2 = 5,
182
.type = ARM_CP_NOP, .access = PL0_W,
183
+ .fgt = FGT_DCCIVAC,
184
.accessfn = aa64_cacheop_poc_access },
185
{ .name = "DC_GVA", .state = ARM_CP_STATE_AA64,
186
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 4, .opc2 = 3,
187
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
188
#ifndef CONFIG_USER_ONLY
189
/* Avoid overhead of an access check that always passes in user-mode */
190
.accessfn = aa64_zva_access,
191
+ .fgt = FGT_DCZVA,
192
#endif
193
},
194
{ .name = "DC_GZVA", .state = ARM_CP_STATE_AA64,
195
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
196
#ifndef CONFIG_USER_ONLY
197
/* Avoid overhead of an access check that always passes in user-mode */
198
.accessfn = aa64_zva_access,
199
+ .fgt = FGT_DCZVA,
200
#endif
201
},
202
};
203
--
204
2.34.1
The VECTPENDING field in the ICSR is 9 bits wide, in bits [20:12] of
the register. We were incorrectly masking it to 8 bits, so it would
report the wrong value if the pending exception was greater than 256.
Fix the bug.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-6-peter.maydell@linaro.org
---
hw/intc/armv7m_nvic.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

Mark up the sysreg definitions for the system instructions
trapped by HFGITR bits 12..17. These bits cover AT address
translation instructions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-18-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-18-peter.maydell@linaro.org
---
target/arm/cpregs.h | 6 ++++++
target/arm/helper.c | 6 ++++++
2 files changed, 12 insertions(+)
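
A quick worked example of the ICSR.VECTPENDING fix described above: with a
pending exception number of 300 (0x12c), the old 8-bit mask reported 0x2c
in the field, while the 9-bit mask reports the full value before it is
shifted into bits [20:12]. The check below is plain arithmetic and does not
depend on any QEMU internals:

    #include <assert.h>

    int main(void)
    {
        unsigned vectpending = 300;              /* 0x12c, greater than 256 */
        assert((vectpending & 0xff)  == 0x2c);   /* old mask: truncated */
        assert((vectpending & 0x1ff) == 0x12c);  /* new mask: correct */
        return 0;
    }
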
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
15
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/intc/armv7m_nvic.c
17
--- a/target/arm/cpregs.h
16
+++ b/hw/intc/armv7m_nvic.c
18
+++ b/target/arm/cpregs.h
17
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
19
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
18
/* VECTACTIVE */
20
DO_BIT(HFGITR, DCCVADP),
19
val = cpu->env.v7m.exception;
21
DO_BIT(HFGITR, DCCIVAC),
20
/* VECTPENDING */
22
DO_BIT(HFGITR, DCZVA),
21
- val |= (s->vectpending & 0xff) << 12;
23
+ DO_BIT(HFGITR, ATS1E1R),
22
+ val |= (s->vectpending & 0x1ff) << 12;
24
+ DO_BIT(HFGITR, ATS1E1W),
23
/* ISRPENDING - set if any external IRQ is pending */
25
+ DO_BIT(HFGITR, ATS1E0R),
24
if (nvic_isrpending(s)) {
26
+ DO_BIT(HFGITR, ATS1E0W),
25
val |= (1 << 22);
27
+ DO_BIT(HFGITR, ATS1E1RP),
28
+ DO_BIT(HFGITR, ATS1E1WP),
29
} FGTBit;
30
31
#undef DO_BIT
32
diff --git a/target/arm/helper.c b/target/arm/helper.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/helper.c
35
+++ b/target/arm/helper.c
36
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
37
{ .name = "AT_S1E1R", .state = ARM_CP_STATE_AA64,
38
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 0,
39
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
40
+ .fgt = FGT_ATS1E1R,
41
.writefn = ats_write64 },
42
{ .name = "AT_S1E1W", .state = ARM_CP_STATE_AA64,
43
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 1,
44
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
45
+ .fgt = FGT_ATS1E1W,
46
.writefn = ats_write64 },
47
{ .name = "AT_S1E0R", .state = ARM_CP_STATE_AA64,
48
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 2,
49
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
50
+ .fgt = FGT_ATS1E0R,
51
.writefn = ats_write64 },
52
{ .name = "AT_S1E0W", .state = ARM_CP_STATE_AA64,
53
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 3,
54
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
55
+ .fgt = FGT_ATS1E0W,
56
.writefn = ats_write64 },
57
{ .name = "AT_S12E1R", .state = ARM_CP_STATE_AA64,
58
.opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 4,
59
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ats1e1_reginfo[] = {
60
{ .name = "AT_S1E1RP", .state = ARM_CP_STATE_AA64,
61
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 0,
62
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
63
+ .fgt = FGT_ATS1E1RP,
64
.writefn = ats_write64 },
65
{ .name = "AT_S1E1WP", .state = ARM_CP_STATE_AA64,
66
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
67
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
68
+ .fgt = FGT_ATS1E1WP,
69
.writefn = ats_write64 },
70
};
71
26
--
72
--
27
2.20.1
73
2.34.1
28
29
The ISCR.ISRPENDING bit is set when an external interrupt is pending.
This is true whether that external interrupt is enabled or not.
This means that we can't use 's->vectpending == 0' as a shortcut to
"ISRPENDING is zero", because s->vectpending indicates only the
highest priority pending enabled interrupt.

Remove the incorrect optimization so that if there is no pending
enabled interrupt we fall through to scanning through the whole
interrupt array.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-5-peter.maydell@linaro.org
---
hw/intc/armv7m_nvic.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)

Mark up the sysreg definitions for the system instructions
trapped by HFGITR bits 18..47. These bits cover TLBI
TLB maintenance instructions.

(If we implemented FEAT_XS we would need to trap some of the
instructions added by that feature using these bits; but we don't
yet, so will need to add the .fgt markup when we do.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-19-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-19-peter.maydell@linaro.org
---
target/arm/cpregs.h | 30 ++++++++++++++++++++++++++++++
target/arm/helper.c | 30 ++++++++++++++++++++++++++++++
2 files changed, 60 insertions(+)
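
For context on the ISCR.ISRPENDING change described above, the resulting
shape of nvic_isrpending() is roughly as follows. This is reconstructed
from the hunk below rather than copied verbatim, so treat the details as
approximate:

    static bool nvic_isrpending(NVICState *s)
    {
        int irq;

        /*
         * We can shortcut only when the highest priority pending
         * exception is already known to be an external interrupt;
         * otherwise we must scan the whole vectors[] array, even
         * when vectpending is 0.
         */
        if (s->vectpending > NVIC_FIRST_IRQ) {
            return true;
        }

        for (irq = NVIC_FIRST_IRQ; irq < s->num_irq; irq++) {
            if (s->vectors[irq].pending) {
                return true;
            }
        }
        return false;
    }
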
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
19
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
19
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/intc/armv7m_nvic.c
21
--- a/target/arm/cpregs.h
21
+++ b/hw/intc/armv7m_nvic.c
22
+++ b/target/arm/cpregs.h
22
@@ -XXX,XX +XXX,XX @@ static bool nvic_isrpending(NVICState *s)
23
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
23
{
24
DO_BIT(HFGITR, ATS1E0W),
24
int irq;
25
DO_BIT(HFGITR, ATS1E1RP),
25
26
DO_BIT(HFGITR, ATS1E1WP),
26
- /* We can shortcut if the highest priority pending interrupt
27
+ DO_BIT(HFGITR, TLBIVMALLE1OS),
27
- * happens to be external or if there is nothing pending.
28
+ DO_BIT(HFGITR, TLBIVAE1OS),
28
+ /*
29
+ DO_BIT(HFGITR, TLBIASIDE1OS),
29
+ * We can shortcut if the highest priority pending interrupt
30
+ DO_BIT(HFGITR, TLBIVAAE1OS),
30
+ * happens to be external; if not we need to check the whole
31
+ DO_BIT(HFGITR, TLBIVALE1OS),
31
+ * vectors[] array.
32
+ DO_BIT(HFGITR, TLBIVAALE1OS),
32
*/
33
+ DO_BIT(HFGITR, TLBIRVAE1OS),
33
if (s->vectpending > NVIC_FIRST_IRQ) {
34
+ DO_BIT(HFGITR, TLBIRVAAE1OS),
34
return true;
35
+ DO_BIT(HFGITR, TLBIRVALE1OS),
35
}
36
+ DO_BIT(HFGITR, TLBIRVAALE1OS),
36
- if (s->vectpending == 0) {
37
+ DO_BIT(HFGITR, TLBIVMALLE1IS),
37
- return false;
38
+ DO_BIT(HFGITR, TLBIVAE1IS),
38
- }
39
+ DO_BIT(HFGITR, TLBIASIDE1IS),
39
40
+ DO_BIT(HFGITR, TLBIVAAE1IS),
40
for (irq = NVIC_FIRST_IRQ; irq < s->num_irq; irq++) {
41
+ DO_BIT(HFGITR, TLBIVALE1IS),
41
if (s->vectors[irq].pending) {
42
+ DO_BIT(HFGITR, TLBIVAALE1IS),
43
+ DO_BIT(HFGITR, TLBIRVAE1IS),
44
+ DO_BIT(HFGITR, TLBIRVAAE1IS),
45
+ DO_BIT(HFGITR, TLBIRVALE1IS),
46
+ DO_BIT(HFGITR, TLBIRVAALE1IS),
47
+ DO_BIT(HFGITR, TLBIRVAE1),
48
+ DO_BIT(HFGITR, TLBIRVAAE1),
49
+ DO_BIT(HFGITR, TLBIRVALE1),
50
+ DO_BIT(HFGITR, TLBIRVAALE1),
51
+ DO_BIT(HFGITR, TLBIVMALLE1),
52
+ DO_BIT(HFGITR, TLBIVAE1),
53
+ DO_BIT(HFGITR, TLBIASIDE1),
54
+ DO_BIT(HFGITR, TLBIVAAE1),
55
+ DO_BIT(HFGITR, TLBIVALE1),
56
+ DO_BIT(HFGITR, TLBIVAALE1),
57
} FGTBit;
58
59
#undef DO_BIT
60
diff --git a/target/arm/helper.c b/target/arm/helper.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/helper.c
63
+++ b/target/arm/helper.c
64
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
65
{ .name = "TLBI_VMALLE1IS", .state = ARM_CP_STATE_AA64,
66
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
67
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
68
+ .fgt = FGT_TLBIVMALLE1IS,
69
.writefn = tlbi_aa64_vmalle1is_write },
70
{ .name = "TLBI_VAE1IS", .state = ARM_CP_STATE_AA64,
71
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1,
72
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
73
+ .fgt = FGT_TLBIVAE1IS,
74
.writefn = tlbi_aa64_vae1is_write },
75
{ .name = "TLBI_ASIDE1IS", .state = ARM_CP_STATE_AA64,
76
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2,
77
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
78
+ .fgt = FGT_TLBIASIDE1IS,
79
.writefn = tlbi_aa64_vmalle1is_write },
80
{ .name = "TLBI_VAAE1IS", .state = ARM_CP_STATE_AA64,
81
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
82
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
83
+ .fgt = FGT_TLBIVAAE1IS,
84
.writefn = tlbi_aa64_vae1is_write },
85
{ .name = "TLBI_VALE1IS", .state = ARM_CP_STATE_AA64,
86
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5,
87
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
88
+ .fgt = FGT_TLBIVALE1IS,
89
.writefn = tlbi_aa64_vae1is_write },
90
{ .name = "TLBI_VAALE1IS", .state = ARM_CP_STATE_AA64,
91
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7,
92
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
93
+ .fgt = FGT_TLBIVAALE1IS,
94
.writefn = tlbi_aa64_vae1is_write },
95
{ .name = "TLBI_VMALLE1", .state = ARM_CP_STATE_AA64,
96
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0,
97
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
98
+ .fgt = FGT_TLBIVMALLE1,
99
.writefn = tlbi_aa64_vmalle1_write },
100
{ .name = "TLBI_VAE1", .state = ARM_CP_STATE_AA64,
101
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1,
102
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
103
+ .fgt = FGT_TLBIVAE1,
104
.writefn = tlbi_aa64_vae1_write },
105
{ .name = "TLBI_ASIDE1", .state = ARM_CP_STATE_AA64,
106
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2,
107
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
108
+ .fgt = FGT_TLBIASIDE1,
109
.writefn = tlbi_aa64_vmalle1_write },
110
{ .name = "TLBI_VAAE1", .state = ARM_CP_STATE_AA64,
111
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
112
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
113
+ .fgt = FGT_TLBIVAAE1,
114
.writefn = tlbi_aa64_vae1_write },
115
{ .name = "TLBI_VALE1", .state = ARM_CP_STATE_AA64,
116
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5,
117
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
118
+ .fgt = FGT_TLBIVALE1,
119
.writefn = tlbi_aa64_vae1_write },
120
{ .name = "TLBI_VAALE1", .state = ARM_CP_STATE_AA64,
121
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7,
122
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIVAALE1,
.writefn = tlbi_aa64_vae1_write },
{ .name = "TLBI_IPAS2E1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
{ .name = "TLBI_RVAE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 1,
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIRVAE1IS,
.writefn = tlbi_aa64_rvae1is_write },
{ .name = "TLBI_RVAAE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 3,
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIRVAAE1IS,
.writefn = tlbi_aa64_rvae1is_write },
{ .name = "TLBI_RVALE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 5,
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIRVALE1IS,
.writefn = tlbi_aa64_rvae1is_write },
{ .name = "TLBI_RVAALE1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 7,
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIRVAALE1IS,
.writefn = tlbi_aa64_rvae1is_write },
{ .name = "TLBI_RVAE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIRVAE1OS,
.writefn = tlbi_aa64_rvae1is_write },
{ .name = "TLBI_RVAAE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 3,
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIRVAAE1OS,
.writefn = tlbi_aa64_rvae1is_write },
{ .name = "TLBI_RVALE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 5,
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIRVALE1OS,
.writefn = tlbi_aa64_rvae1is_write },
{ .name = "TLBI_RVAALE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 7,
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIRVAALE1OS,
.writefn = tlbi_aa64_rvae1is_write },
{ .name = "TLBI_RVAE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIRVAE1,
.writefn = tlbi_aa64_rvae1_write },
{ .name = "TLBI_RVAAE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 3,
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIRVAAE1,
.writefn = tlbi_aa64_rvae1_write },
{ .name = "TLBI_RVALE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 5,
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIRVALE1,
.writefn = tlbi_aa64_rvae1_write },
{ .name = "TLBI_RVAALE1", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 7,
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIRVAALE1,
.writefn = tlbi_aa64_rvae1_write },
{ .name = "TLBI_RIPAS2E1IS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 2,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbios_reginfo[] = {
{ .name = "TLBI_VMALLE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 0,
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIVMALLE1OS,
.writefn = tlbi_aa64_vmalle1is_write },
{ .name = "TLBI_VAE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 1,
+ .fgt = FGT_TLBIVAE1OS,
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
.writefn = tlbi_aa64_vae1is_write },
{ .name = "TLBI_ASIDE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 2,
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIASIDE1OS,
.writefn = tlbi_aa64_vmalle1is_write },
{ .name = "TLBI_VAAE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 3,
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIVAAE1OS,
.writefn = tlbi_aa64_vae1is_write },
{ .name = "TLBI_VALE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 5,
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIVALE1OS,
.writefn = tlbi_aa64_vae1is_write },
{ .name = "TLBI_VAALE1OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 7,
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
+ .fgt = FGT_TLBIVAALE1OS,
.writefn = tlbi_aa64_vae1is_write },
{ .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0,
--
2.20.1
--
2.34.1
diff view generated by jsdifflib
For M-profile, we weren't reporting alignment faults triggered by the
generic TCG code correctly to the guest. These get passed into
arm_v7m_cpu_do_interrupt() as an EXCP_DATA_ABORT with an A-profile
style exception.fsr value of 1. We didn't check for this, and so
they fell through into the default of "assume this is an MPU fault"
and were reported to the guest as a data access violation MPU fault.

Report these alignment faults as UsageFaults which set the UNALIGNED
bit in the UFSR.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-4-peter.maydell@linaro.org
---
target/arm/m_helper.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
break;
case EXCP_UNALIGNED:
+ /* Unaligned faults reported by M-profile aware code */
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
break;
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
}
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
break;
+ case 0x1: /* Alignment fault reported by generic code */
+ qemu_log_mask(CPU_LOG_INT,
+ "...really UsageFault with UFSR.UNALIGNED\n");
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+ env->v7m.secure);
+ break;
default:
/*
* All other FSR values are either MPU faults or "can't happen
--
2.20.1

Mark up the sysreg definitions for the system instructions
trapped by HFGITR bits 48..63.

Some of these bits are for trapping instructions which are
not in the system instruction encoding (i.e. which are
not handled by the ARMCPRegInfo mechanism):
* ERET, ERETAA, ERETAB
* SVC

We will have to handle those separately and manually.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-20-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-20-peter.maydell@linaro.org
---
target/arm/cpregs.h | 4 ++++
target/arm/helper.c | 9 +++++++++
2 files changed, 13 insertions(+)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
DO_BIT(HFGITR, TLBIVAAE1),
DO_BIT(HFGITR, TLBIVALE1),
DO_BIT(HFGITR, TLBIVAALE1),
+ DO_BIT(HFGITR, CFPRCTX),
+ DO_BIT(HFGITR, DVPRCTX),
+ DO_BIT(HFGITR, CPPRCTX),
+ DO_BIT(HFGITR, DCCVAC),
} FGTBit;

#undef DO_BIT
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
{ .name = "DC_CVAC", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 10, .opc2 = 1,
.access = PL0_W, .type = ARM_CP_NOP,
+ .fgt = FGT_DCCVAC,
.accessfn = aa64_cacheop_poc_access },
{ .name = "DC_CSW", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 2,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
{ .name = "DC_CGVAC", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 10, .opc2 = 3,
.type = ARM_CP_NOP, .access = PL0_W,
+ .fgt = FGT_DCCVAC,
.accessfn = aa64_cacheop_poc_access },
{ .name = "DC_CGDVAC", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 10, .opc2 = 5,
.type = ARM_CP_NOP, .access = PL0_W,
+ .fgt = FGT_DCCVAC,
.accessfn = aa64_cacheop_poc_access },
{ .name = "DC_CGVAP", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 3,
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_predinv(CPUARMState *env, const ARMCPRegInfo *ri,
static const ARMCPRegInfo predinv_reginfo[] = {
{ .name = "CFP_RCTX", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 3, .opc2 = 4,
+ .fgt = FGT_CFPRCTX,
.type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
{ .name = "DVP_RCTX", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 3, .opc2 = 5,
+ .fgt = FGT_DVPRCTX,
.type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
{ .name = "CPP_RCTX", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 3, .opc2 = 7,
+ .fgt = FGT_CPPRCTX,
.type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
/*
* Note the AArch32 opcodes have a different OPC1.
*/
{ .name = "CFPRCTX", .state = ARM_CP_STATE_AA32,
.cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 4,
+ .fgt = FGT_CFPRCTX,
.type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
{ .name = "DVPRCTX", .state = ARM_CP_STATE_AA32,
.cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 5,
+ .fgt = FGT_DVPRCTX,
.type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
{ .name = "CPPRCTX", .state = ARM_CP_STATE_AA32,
.cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 7,
+ .fgt = FGT_CPPRCTX,
.type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
};

--
2.34.1
diff view generated by jsdifflib
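As a rough standalone illustration of the mechanism the .fgt markup above relies on (this is not QEMU code; the enum names, the hfgitr_bit[] mapping and the bit positions are invented for the example), each trapped instruction maps to one enable bit in HFGITR_EL2, and execution at EL0/EL1 is redirected to EL2 when that bit is set while fine-grained traps are in effect:

/*
 * Illustrative sketch only: decide whether a fine-grained-trappable
 * instruction should trap, given the hypervisor's HFGITR_EL2 value.
 * The bit numbers here are placeholders, not the real HFGITR layout.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum fgt_instr { EX_CFPRCTX, EX_DVPRCTX, EX_CPPRCTX, EX_DCCVAC };

static const int hfgitr_bit[] = { 10, 11, 12, 13 };  /* hypothetical */

static bool fgt_traps(uint64_t hfgitr_el2, bool fgt_active, enum fgt_instr i)
{
    /* Trap only when FGT is in effect and the per-instruction bit is set */
    return fgt_active && ((hfgitr_el2 >> hfgitr_bit[i]) & 1);
}

int main(void)
{
    uint64_t hfgitr = 1ULL << 12;   /* hypervisor enabled one trap bit */
    printf("CPPRCTX traps: %d\n", fgt_traps(hfgitr, true, EX_CPPRCTX));
    printf("DCCVAC traps:  %d\n", fgt_traps(hfgitr, true, EX_DCCVAC));
    return 0;
}

The real series keeps this table-driven: the .fgt field on each ARMCPRegInfo entry names the bit, and the generic access-check path does the lookup, so individual register definitions stay one line longer at most.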
From: Richard Henderson <richard.henderson@linaro.org>

Mirror the behaviour of /proc/sys/abi/sve_default_vector_length
under the real linux kernel. We have no way of passing along
a real default across exec like the kernel can, but this is a
decent way of adjusting the startup vector length of a process.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/482
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-4-richard.henderson@linaro.org
[PMM: tweaked docs formatting, document -1 special-case,
added fixup patch from RTH mentioning QEMU's maximum veclen.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/cpu-features.rst | 15 ++++++++
target/arm/cpu.h | 5 +++
target/arm/cpu.c | 14 ++++++--
target/arm/cpu64.c | 60 ++++++++++++++++++++++++++++++++
4 files changed, 92 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/cpu-features.rst b/docs/system/arm/cpu-features.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/cpu-features.rst
+++ b/docs/system/arm/cpu-features.rst
@@ -XXX,XX +XXX,XX @@ verbose command lines. However, the recommended way to select vector
lengths is to explicitly enable each desired length. Therefore only
example's (1), (4), and (6) exhibit recommended uses of the properties.

+SVE User-mode Default Vector Length Property
+--------------------------------------------
+
+For qemu-aarch64, the cpu property ``sve-default-vector-length=N`` is
+defined to mirror the Linux kernel parameter file
+``/proc/sys/abi/sve_default_vector_length``. The default length, ``N``,
+is in units of bytes and must be between 16 and 8192.
+If not specified, the default vector length is 64.
+
+If the default length is larger than the maximum vector length enabled,
+the actual vector length will be reduced. Note that the maximum vector
+length supported by QEMU is 256.
+
+If this property is set to ``-1`` then the default vector length
+is set to the maximum possible length.
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
/* Used to set the maximum vector length the cpu will support. */
uint32_t sve_max_vq;

+#ifdef CONFIG_USER_ONLY
+ /* Used to set the default vector length at process start. */
+ uint32_t sve_default_vq;
+#endif
+
/*
* In sve_vq_map each set bit is a supported vector length of
* (bit-number + 1) * 16 bytes, i.e. each bit number + 1 is the vector
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 16, 2, 3);
/* with reasonable vector length */
if (cpu_isar_feature(aa64_sve, cpu)) {
- env->vfp.zcr_el[1] = MIN(cpu->sve_max_vq - 1, 3);
+ env->vfp.zcr_el[1] =
+ aarch64_sve_zcr_get_valid_len(cpu, cpu->sve_default_vq - 1);
}
/*
* Enable TBI0 but not TBI1.
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
QLIST_INIT(&cpu->pre_el_change_hooks);
QLIST_INIT(&cpu->el_change_hooks);

-#ifndef CONFIG_USER_ONLY
+#ifdef CONFIG_USER_ONLY
+# ifdef TARGET_AARCH64
+ /*
+ * The linux kernel defaults to 512-bit vectors, when sve is supported.
+ * See documentation for /proc/sys/abi/sve_default_vector_length, and
+ * our corresponding sve-default-vector-length cpu property.
+ */
+ cpu->sve_default_vq = 4;
+# endif
+#else
/* Our inbound IRQ and FIQ lines */
if (kvm_enabled()) {
/* VIRQ and VFIQ are unused with KVM but we add them to maintain
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, bool value, Error **errp)
cpu->isar.id_aa64pfr0 = t;
}

+#ifdef CONFIG_USER_ONLY
+/* Mirror linux /proc/sys/abi/sve_default_vector_length. */
+static void cpu_arm_set_sve_default_vec_len(Object *obj, Visitor *v,
+ const char *name, void *opaque,
+ Error **errp)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+ int32_t default_len, default_vq, remainder;
+
+ if (!visit_type_int32(v, name, &default_len, errp)) {
+ return;
+ }
+
+ /* Undocumented, but the kernel allows -1 to indicate "maximum". */
+ if (default_len == -1) {
+ cpu->sve_default_vq = ARM_MAX_VQ;
+ return;
+ }
+
+ default_vq = default_len / 16;
+ remainder = default_len % 16;
+
+ /*
+ * Note that the 512 max comes from include/uapi/asm/sve_context.h
+ * and is the maximum architectural width of ZCR_ELx.LEN.
+ */
+ if (remainder || default_vq < 1 || default_vq > 512) {
+ error_setg(errp, "cannot set sve-default-vector-length");
+ if (remainder) {
+ error_append_hint(errp, "Vector length not a multiple of 16\n");
+ } else if (default_vq < 1) {
+ error_append_hint(errp, "Vector length smaller than 16\n");
+ } else {
+ error_append_hint(errp, "Vector length larger than %d\n",
+ 512 * 16);
+ }
+ return;
+ }
+
+ cpu->sve_default_vq = default_vq;
+}
+
+static void cpu_arm_get_sve_default_vec_len(Object *obj, Visitor *v,
+ const char *name, void *opaque,
+ Error **errp)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+ int32_t value = cpu->sve_default_vq * 16;
+
+ visit_type_int32(v, name, &value, errp);
+}
+#endif
+
void aarch64_add_sve_properties(Object *obj)
{
uint32_t vq;
@@ -XXX,XX +XXX,XX @@ void aarch64_add_sve_properties(Object *obj)
object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
cpu_arm_set_sve_vq, NULL, NULL);
}
+
+#ifdef CONFIG_USER_ONLY
+ /* Mirror linux /proc/sys/abi/sve_default_vector_length. */
+ object_property_add(obj, "sve-default-vector-length", "int32",
+ cpu_arm_get_sve_default_vec_len,
+ cpu_arm_set_sve_default_vec_len, NULL, NULL);
+#endif
}

void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp)
--
2.20.1

Implement the HFGITR_EL2.ERET fine-grained trap. This traps
execution from AArch64 EL1 of ERET, ERETAA and ERETAB. The trap is
reported with a syndrome value of 0x1a.

The trap must take precedence over a possible pointer-authentication
trap for ERETAA and ERETAB.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-21-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-21-peter.maydell@linaro.org
---
target/arm/cpu.h | 1 +
target/arm/syndrome.h | 10 ++++++++++
target/arm/translate.h | 2 ++
target/arm/helper.c | 3 +++
target/arm/translate-a64.c | 10 ++++++++++
5 files changed, 26 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, PSTATE_ZA, 23, 1)
FIELD(TBFLAG_A64, SVL, 24, 4)
/* Indicates that SME Streaming mode is active, and SMCR_ELx.FA64 is not. */
FIELD(TBFLAG_A64, SME_TRAP_NONSTREAMING, 28, 1)
+FIELD(TBFLAG_A64, FGT_ERET, 29, 1)

/*
* Helpers for using the above.
diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/syndrome.h
+++ b/target/arm/syndrome.h
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
EC_AA64_SMC = 0x17,
EC_SYSTEMREGISTERTRAP = 0x18,
EC_SVEACCESSTRAP = 0x19,
+ EC_ERETTRAP = 0x1a,
EC_SMETRAP = 0x1d,
EC_INSNABORT = 0x20,
EC_INSNABORT_SAME_EL = 0x21,
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_sve_access_trap(void)
return EC_SVEACCESSTRAP << ARM_EL_EC_SHIFT;
}

+/*
+ * eret_op is bits [1:0] of the ERET instruction, so:
+ * 0 for ERET, 2 for ERETAA, 3 for ERETAB.
+ */
+static inline uint32_t syn_erettrap(int eret_op)
+{
+ return (EC_ERETTRAP << ARM_EL_EC_SHIFT) | ARM_EL_IL | eret_op;
+}
+
static inline uint32_t syn_smetrap(SMEExceptionType etype, bool is_16bit)
{
return (EC_SMETRAP << ARM_EL_EC_SHIFT)
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
bool mve_no_pred;
/* True if fine-grained traps are active */
bool fgt_active;
+ /* True if fine-grained trap on ERET is enabled */
+ bool fgt_eret;
/*
* >= 0, a copy of PSTATE.BTYPE, which will be 0 without v8.5-BTI.
* < 0, set by the current instruction.
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,

if (arm_fgt_active(env, el)) {
DP_TBFLAG_ANY(flags, FGT_ACTIVE, 1);
+ if (FIELD_EX64(env->cp15.fgt_exec[FGTREG_HFGITR], HFGITR_EL2, ERET)) {
+ DP_TBFLAG_A64(flags, FGT_ERET, 1);
+ }
}

if (cpu_isar_feature(aa64_mte, env_archcpu(env))) {
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
if (op4 != 0) {
goto do_unallocated;
}
+ if (s->fgt_eret) {
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syn_erettrap(op3), 2);
+ return;
+ }
dst = tcg_temp_new_i64();
tcg_gen_ld_i64(dst, cpu_env,
offsetof(CPUARMState, elr_el[s->current_el]));
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
if (rn != 0x1f || op4 != 0x1f) {
goto do_unallocated;
}
+ /* The FGT trap takes precedence over an auth trap. */
+ if (s->fgt_eret) {
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syn_erettrap(op3), 2);
+ return;
+ }
dst = tcg_temp_new_i64();
tcg_gen_ld_i64(dst, cpu_env,
offsetof(CPUARMState, elr_el[s->current_el]));
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
dc->fgt_active = EX_TBFLAG_ANY(tb_flags, FGT_ACTIVE);
+ dc->fgt_eret = EX_TBFLAG_A64(tb_flags, FGT_ERET);
dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
dc->sme_excp_el = EX_TBFLAG_A64(tb_flags, SMEEXC_EL);
dc->vl = (EX_TBFLAG_A64(tb_flags, VL) + 1) * 16;
--
2.34.1
diff view generated by jsdifflib
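The length-to-vq mapping used by the sve-default-vector-length property setter above is easy to check in isolation. The following is a standalone restatement of that logic (it is not the QEMU function itself; the helper name and the printed examples are mine), showing the -1 "maximum" shorthand and the same bounds the setter enforces:

/*
 * Sketch: map a requested default SVE vector length in bytes to a vq
 * value (units of 16 bytes), mirroring the property-setter rules above.
 */
#include <stdio.h>

#define ARM_MAX_VQ 16   /* QEMU's maximum: 16 * 16 bytes = 256 bytes */

static int sve_default_len_to_vq(int len)
{
    if (len == -1) {
        return ARM_MAX_VQ;              /* kernel-style "maximum" shorthand */
    }
    if (len % 16 || len < 16 || len > 512 * 16) {
        return -1;                      /* rejected, as in the setter */
    }
    return len / 16;
}

int main(void)
{
    printf("%d\n", sve_default_len_to_vq(256));  /* 16, i.e. 256-byte vectors */
    printf("%d\n", sve_default_len_to_vq(-1));   /* ARM_MAX_VQ */
    printf("%d\n", sve_default_len_to_vq(100));  /* -1: not a multiple of 16 */
    return 0;
}

On the command line this corresponds to an invocation along the lines of "qemu-aarch64 -cpu max,sve-default-vector-length=256 ./prog" (an illustrative example, not taken from the patch).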
For M-profile, unlike A-profile, the low 2 bits of SP are defined to be
RES0H, which is to say that they must be hardwired to zero so that
guest attempts to write non-zero values to them are ignored.

Implement this behaviour by masking out the low bits:
* for writes to r13 by the gdbstub
* for writes to any of the various flavours of SP via MSR
* for writes to r13 via store_reg() in generated code

Note that all the direct uses of cpu_R[] in translate.c are in places
where the register is definitely not r13 (usually because that has
been checked for as an UNDEFINED or UNPREDICTABLE case and handled as
UNDEF).

All the other writes to regs[13] in C code are either:
* A-profile only code
* writes of values we can guarantee to be aligned, such as
- writes of previous-SP-value plus or minus a 4-aligned constant
- writes of the value in an SP limit register (which we already
enforce to be aligned)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-2-peter.maydell@linaro.org
---
target/arm/gdbstub.c | 4 ++++
target/arm/m_helper.c | 14 ++++++++------
target/arm/translate.c | 3 +++
3 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/gdbstub.c
+++ b/target/arm/gdbstub.c
@@ -XXX,XX +XXX,XX @@ int arm_cpu_gdb_write_register(CPUState *cs, uint8_t *mem_buf, int n)

if (n < 16) {
/* Core integer register. */
+ if (n == 13 && arm_feature(env, ARM_FEATURE_M)) {
+ /* M profile SP low bits are always 0 */
+ tmp &= ~3;
+ }
env->regs[n] = tmp;
return 4;
}
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
if (!env->v7m.secure) {
return;
}
- env->v7m.other_ss_msp = val;
+ env->v7m.other_ss_msp = val & ~3;
return;
case 0x89: /* PSP_NS */
if (!env->v7m.secure) {
return;
}
- env->v7m.other_ss_psp = val;
+ env->v7m.other_ss_psp = val & ~3;
return;
case 0x8a: /* MSPLIM_NS */
if (!env->v7m.secure) {
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)

limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false];

+ val &= ~0x3;
+
if (val < limit) {
raise_exception_ra(env, EXCP_STKOF, 0, 1, GETPC());
}
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
break;
case 8: /* MSP */
if (v7m_using_psp(env)) {
- env->v7m.other_sp = val;
+ env->v7m.other_sp = val & ~3;
} else {
- env->regs[13] = val;
+ env->regs[13] = val & ~3;
}
break;
case 9: /* PSP */
if (v7m_using_psp(env)) {
- env->regs[13] = val;
+ env->regs[13] = val & ~3;
} else {
- env->v7m.other_sp = val;
+ env->v7m.other_sp = val & ~3;
}
break;
case 10: /* MSPLIM */
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ void store_reg(DisasContext *s, int reg, TCGv_i32 var)
*/
tcg_gen_andi_i32(var, var, s->thumb ? ~1 : ~3);
s->base.is_jmp = DISAS_JUMP;
+ } else if (reg == 13 && arm_dc_feature(s, ARM_FEATURE_M)) {
+ /* For M-profile SP bits [1:0] are always zero */
+ tcg_gen_andi_i32(var, var, ~3);
}
tcg_gen_mov_i32(cpu_R[reg], var);
tcg_temp_free_i32(var);
--
2.20.1

Implement the HFGITR_EL2.SVC_EL0 and SVC_EL1 fine-grained traps.
These trap execution of the SVC instruction from AArch32 and AArch64.
(As usual, AArch32 can only trap from EL0, as fine grained traps are
disabled with an AArch32 EL1.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-22-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-22-peter.maydell@linaro.org
---
target/arm/cpu.h | 1 +
target/arm/translate.h | 2 ++
target/arm/helper.c | 20 ++++++++++++++++++++
target/arm/translate-a64.c | 9 ++++++++-
target/arm/translate.c | 12 +++++++++---
5 files changed, 40 insertions(+), 4 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, FPEXC_EL, 8, 2)
FIELD(TBFLAG_ANY, ALIGN_MEM, 10, 1)
FIELD(TBFLAG_ANY, PSTATE__IL, 11, 1)
FIELD(TBFLAG_ANY, FGT_ACTIVE, 12, 1)
+FIELD(TBFLAG_ANY, FGT_SVC, 13, 1)

/*
* Bit usage when in AArch32 state, both A- and M-profile.
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
bool fgt_active;
/* True if fine-grained trap on ERET is enabled */
bool fgt_eret;
+ /* True if fine-grained trap on SVC is enabled */
+ bool fgt_svc;
/*
* >= 0, a copy of PSTATE.BTYPE, which will be 0 without v8.5-BTI.
* < 0, set by the current instruction.
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx(CPUARMState *env)
return arm_mmu_idx_el(env, arm_current_el(env));
}

+static inline bool fgt_svc(CPUARMState *env, int el)
+{
+ /*
+ * Assuming fine-grained-traps are active, return true if we
+ * should be trapping on SVC instructions. Only AArch64 can
+ * trap on an SVC at EL1, but we don't need to special-case this
+ * because if this is AArch32 EL1 then arm_fgt_active() is false.
+ * We also know el is 0 or 1.
+ */
+ return el == 0 ?
+ FIELD_EX64(env->cp15.fgt_exec[FGTREG_HFGITR], HFGITR_EL2, SVC_EL0) :
+ FIELD_EX64(env->cp15.fgt_exec[FGTREG_HFGITR], HFGITR_EL2, SVC_EL1);
+}
+
static CPUARMTBFlags rebuild_hflags_common(CPUARMState *env, int fp_el,
ARMMMUIdx mmu_idx,
CPUARMTBFlags flags)
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a32(CPUARMState *env, int fp_el,

if (arm_fgt_active(env, el)) {
DP_TBFLAG_ANY(flags, FGT_ACTIVE, 1);
+ if (fgt_svc(env, el)) {
+ DP_TBFLAG_ANY(flags, FGT_SVC, 1);
+ }
}

if (env->uncached_cpsr & CPSR_IL) {
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
if (FIELD_EX64(env->cp15.fgt_exec[FGTREG_HFGITR], HFGITR_EL2, ERET)) {
DP_TBFLAG_A64(flags, FGT_ERET, 1);
}
+ if (fgt_svc(env, el)) {
+ DP_TBFLAG_ANY(flags, FGT_SVC, 1);
+ }
}

if (cpu_isar_feature(aa64_mte, env_archcpu(env))) {
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
int opc = extract32(insn, 21, 3);
int op2_ll = extract32(insn, 0, 5);
int imm16 = extract32(insn, 5, 16);
+ uint32_t syndrome;

switch (opc) {
case 0:
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
*/
switch (op2_ll) {
case 1: /* SVC */
+ syndrome = syn_aa64_svc(imm16);
+ if (s->fgt_svc) {
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syndrome, 2);
+ break;
+ }
gen_ss_advance(s);
- gen_exception_insn(s, 4, EXCP_SWI, syn_aa64_svc(imm16));
+ gen_exception_insn(s, 4, EXCP_SWI, syndrome);
break;
case 2: /* HVC */
if (s->current_el == 0) {
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
dc->fgt_active = EX_TBFLAG_ANY(tb_flags, FGT_ACTIVE);
+ dc->fgt_svc = EX_TBFLAG_ANY(tb_flags, FGT_SVC);
dc->fgt_eret = EX_TBFLAG_A64(tb_flags, FGT_ERET);
dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
dc->sme_excp_el = EX_TBFLAG_A64(tb_flags, SMEEXC_EL);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool trans_SVC(DisasContext *s, arg_SVC *a)
(a->imm == semihost_imm)) {
gen_exception_internal_insn(s, EXCP_SEMIHOST);
} else {
- gen_update_pc(s, curr_insn_len(s));
- s->svc_imm = a->imm;
- s->base.is_jmp = DISAS_SWI;
+ if (s->fgt_svc) {
+ uint32_t syndrome = syn_aa32_svc(a->imm, s->thumb);
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syndrome, 2);
+ } else {
+ gen_update_pc(s, curr_insn_len(s));
+ s->svc_imm = a->imm;
+ s->base.is_jmp = DISAS_SWI;
+ }
}
return true;
}
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
dc->fgt_active = EX_TBFLAG_ANY(tb_flags, FGT_ACTIVE);
+ dc->fgt_svc = EX_TBFLAG_ANY(tb_flags, FGT_SVC);

if (arm_feature(env, ARM_FEATURE_M)) {
dc->vfp_enabled = 1;
--
2.34.1
diff view generated by jsdifflib
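The M-profile SP rule above boils down to one masking operation applied on every write path. A quick standalone illustration (not QEMU code; the function name and the example value are mine) of what the guest observes after an unaligned write:

/*
 * Sketch of the "SP bits [1:0] are always zero" behaviour: every write
 * path masks the low two bits, so an unaligned value reads back rounded
 * down to a 4-byte boundary.
 */
#include <stdint.h>
#include <stdio.h>

static uint32_t m_profile_sp_write(uint32_t val)
{
    return val & ~3u;   /* same masking as the gdbstub/MSR/store_reg paths */
}

int main(void)
{
    printf("0x%08x\n", m_profile_sp_write(0x20001007));  /* prints 0x20001004 */
    return 0;
}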
In Arm v8.1M the VECTPENDING field in the ICSR has new behaviour: if
the register is accessed NonSecure and the highest priority pending
enabled exception (that would be returned in the VECTPENDING field)
targets Secure, then the VECTPENDING field must read 1 rather than
the exception number of the pending exception. Implement this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-7-peter.maydell@linaro.org
---
hw/intc/armv7m_nvic.c | 31 ++++++++++++++++++++++++-------
1 file changed, 24 insertions(+), 7 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque)
nvic_irq_update(s);
}

+static bool vectpending_targets_secure(NVICState *s)
+{
+ /* Return true if s->vectpending targets Secure state */
+ if (s->vectpending_is_s_banked) {
+ return true;
+ }
+ return !exc_is_banked(s->vectpending) &&
+ exc_targets_secure(s, s->vectpending);
+}
+
void armv7m_nvic_get_pending_irq_info(void *opaque,
int *pirq, bool *ptargets_secure)
{
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_get_pending_irq_info(void *opaque,

assert(pending > ARMV7M_EXCP_RESET && pending < s->num_irq);

- if (s->vectpending_is_s_banked) {
- targets_secure = true;
- } else {
- targets_secure = !exc_is_banked(pending) &&
- exc_targets_secure(s, pending);
- }
+ targets_secure = vectpending_targets_secure(s);

trace_nvic_get_pending_irq_info(pending, targets_secure);

@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
/* VECTACTIVE */
val = cpu->env.v7m.exception;
/* VECTPENDING */
- val |= (s->vectpending & 0x1ff) << 12;
+ if (s->vectpending) {
+ /*
+ * From v8.1M VECTPENDING must read as 1 if accessed as
+ * NonSecure and the highest priority pending and enabled
+ * exception targets Secure.
+ */
+ int vp = s->vectpending;
+ if (!attrs.secure && arm_feature(&cpu->env, ARM_FEATURE_V8_1M) &&
+ vectpending_targets_secure(s)) {
+ vp = 1;
+ }
+ val |= (vp & 0x1ff) << 12;
+ }
/* ISRPENDING - set if any external IRQ is pending */
if (nvic_isrpending(s)) {
val |= (1 << 22);
--
2.20.1

FEAT_FGT also implements an extra trap bit in the MDCR_EL2 and
MDCR_EL3 registers: bit TDCC enables trapping of use of the Debug
Comms Channel registers OSDTRRX_EL1, OSDTRTX_EL1, MDCCSR_EL0,
MDCCINT_EL1, DBGDTR_EL0, DBGDTRRX_EL0 and DBGDTRTX_EL0 (and their
AArch32 equivalents). This trapping is independent of whether
fine-grained traps are enabled or not.

Implement these extra traps. (We don't implement DBGDTR_EL0,
DBGDTRRX_EL0 and DBGDTRTX_EL0.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-23-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-23-peter.maydell@linaro.org
---
target/arm/debug_helper.c | 35 +++++++++++++++++++++++++++++++----
1 file changed, 31 insertions(+), 4 deletions(-)

diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/debug_helper.c
+++ b/target/arm/debug_helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri,
return CP_ACCESS_OK;
}

+/*
+ * Check for traps to Debug Comms Channel registers. If FEAT_FGT
+ * is implemented then these are controlled by MDCR_EL2.TDCC for
+ * EL2 and MDCR_EL3.TDCC for EL3. They are also controlled by
+ * the general debug access trap bits MDCR_EL2.TDA and MDCR_EL3.TDA.
+ */
+static CPAccessResult access_tdcc(CPUARMState *env, const ARMCPRegInfo *ri,
+ bool isread)
+{
+ int el = arm_current_el(env);
+ uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
+ bool mdcr_el2_tda = (mdcr_el2 & MDCR_TDA) || (mdcr_el2 & MDCR_TDE) ||
+ (arm_hcr_el2_eff(env) & HCR_TGE);
+ bool mdcr_el2_tdcc = cpu_isar_feature(aa64_fgt, env_archcpu(env)) &&
+ (mdcr_el2 & MDCR_TDCC);
+ bool mdcr_el3_tdcc = cpu_isar_feature(aa64_fgt, env_archcpu(env)) &&
+ (env->cp15.mdcr_el3 & MDCR_TDCC);
+
+ if (el < 2 && (mdcr_el2_tda || mdcr_el2_tdcc)) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ if (el < 3 && ((env->cp15.mdcr_el3 & MDCR_TDA) || mdcr_el3_tdcc)) {
+ return CP_ACCESS_TRAP_EL3;
+ }
+ return CP_ACCESS_OK;
+}
+
static void oslar_write(CPUARMState *env, const ARMCPRegInfo *ri,
uint64_t value)
{
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
*/
{ .name = "MDCCSR_EL0", .state = ARM_CP_STATE_AA64,
.opc0 = 2, .opc1 = 3, .crn = 0, .crm = 1, .opc2 = 0,
- .access = PL0_R, .accessfn = access_tda,
+ .access = PL0_R, .accessfn = access_tdcc,
.type = ARM_CP_CONST, .resetvalue = 0 },
/*
* OSDTRRX_EL1/OSDTRTX_EL1 are used for save and restore of DBGDTRRX_EL0.
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
*/
{ .name = "OSDTRRX_EL1", .state = ARM_CP_STATE_BOTH, .cp = 14,
.opc0 = 2, .opc1 = 0, .crn = 0, .crm = 0, .opc2 = 2,
- .access = PL1_RW, .accessfn = access_tda,
+ .access = PL1_RW, .accessfn = access_tdcc,
.type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "OSDTRTX_EL1", .state = ARM_CP_STATE_BOTH, .cp = 14,
.opc0 = 2, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 2,
- .access = PL1_RW, .accessfn = access_tda,
+ .access = PL1_RW, .accessfn = access_tdcc,
.type = ARM_CP_CONST, .resetvalue = 0 },
/*
* OSECCR_EL1 provides a mechanism for an operating system
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
*/
{ .name = "MDCCINT_EL1", .state = ARM_CP_STATE_BOTH,
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
- .access = PL1_RW, .accessfn = access_tda,
+ .access = PL1_RW, .accessfn = access_tdcc,
.type = ARM_CP_NOP },
/*
* Dummy DBGCLAIM registers.
--
2.34.1
diff view generated by jsdifflib
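The access_tdcc() decision above is a pure function of a handful of control bits, so it is easy to restate outside QEMU. The following is such a restatement (not the QEMU function; the helper name, parameter list and the return encoding 0/2/3 are mine), useful for sanity-checking where a Debug Comms Channel access from EL0/EL1 lands:

/*
 * Sketch of the DCC trap decision: 0 = access OK, 2 = trap to EL2,
 * 3 = trap to EL3, following the same priority as access_tdcc().
 */
#include <stdbool.h>
#include <stdio.h>

static int dcc_access(int el, bool have_fgt,
                      bool el2_tda, bool el2_tde, bool hcr_tge, bool el2_tdcc,
                      bool el3_tda, bool el3_tdcc)
{
    bool to_el2 = el2_tda || el2_tde || hcr_tge || (have_fgt && el2_tdcc);
    bool to_el3 = el3_tda || (have_fgt && el3_tdcc);

    if (el < 2 && to_el2) {
        return 2;
    }
    if (el < 3 && to_el3) {
        return 3;
    }
    return 0;
}

int main(void)
{
    /* EL1 access, only MDCR_EL3.TDCC set, FEAT_FGT present: trap to EL3 */
    printf("%d\n", dcc_access(1, true, false, false, false, false, false, true));
    return 0;
}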
In do_v7m_exception_exit(), we perform various checks as part of
performing the exception return. If one of these checks fails, the
architecture requires that we take an appropriate exception on the
existing stackframe. We implement this by calling
v7m_exception_taken() to set up to take the new exception, and then
immediately returning from do_v7m_exception_exit() without proceeding
any further with the unstack-and-exception-return process.

In a couple of checks that are new in v8.1M, we forgot the "return"
statement, with the effect that if bad code in the guest tripped over
these checks we would set up to take a UsageFault exception but then
blunder on trying to also unstack and return from the original
exception, with the probable result that the guest would crash.

Add the missing return statements.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-3-peter.maydell@linaro.org
---
target/arm/m_helper.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
"stackframe: NSACR prevents clearing FPU registers\n");
v7m_exception_taken(cpu, excret, true, false);
+ return;
} else if (!cpacr_pass) {
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
exc_secure);
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
"stackframe: CPACR prevents clearing FPU registers\n");
v7m_exception_taken(cpu, excret, true, false);
+ return;
}
}
/* Clear s0..s15, FPSCR and VPR */
--
2.20.1

Update the ID registers for TCG's '-cpu max' to report the
presence of FEAT_FGT Fine-Grained Traps support.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-24-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-24-peter.maydell@linaro.org
---
docs/system/arm/emulation.rst | 1 +
target/arm/cpu64.c | 1 +
2 files changed, 2 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
- FEAT_ETS (Enhanced Translation Synchronization)
- FEAT_EVT (Enhanced Virtualization Traps)
- FEAT_FCMA (Floating-point complex number instructions)
+- FEAT_FGT (Fine-Grained Traps)
- FEAT_FHM (Floating-point half-precision multiplication instructions)
- FEAT_FP16 (Half-precision floating-point data processing)
- FEAT_FRINTTS (Floating-point to integer instructions)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
t = FIELD_DP64(t, ID_AA64MMFR0, TGRAN16_2, 2); /* 16k stage2 supported */
t = FIELD_DP64(t, ID_AA64MMFR0, TGRAN64_2, 2); /* 64k stage2 supported */
t = FIELD_DP64(t, ID_AA64MMFR0, TGRAN4_2, 2); /* 4k stage2 supported */
+ t = FIELD_DP64(t, ID_AA64MMFR0, FGT, 1); /* FEAT_FGT */
cpu->isar.id_aa64mmfr0 = t;

t = cpu->isar.id_aa64mmfr1;
--
2.34.1
diff view generated by jsdifflib
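With the ID_AA64MMFR0 change above, guest software can discover the feature the usual way, by reading the ID register. A hedged guest-side probe is sketched below (my own code, not part of the series): ID_AA64MMFR0_EL1.FGT is the 4-bit field at bits [59:56], and a non-zero value means fine-grained traps are implemented. Note this is an assumption about where you run it: the MRS needs EL1 or above (for instance a small guest kernel module); from Linux userspace the kernel's ID-register emulation may report this field as zero even when the CPU has it.

/* Guest-side sketch: check ID_AA64MMFR0_EL1.FGT (AArch64 only). */
#include <stdint.h>
#include <stdio.h>

static inline uint64_t read_id_aa64mmfr0(void)
{
    uint64_t v;
    __asm__ volatile("mrs %0, id_aa64mmfr0_el1" : "=r"(v));
    return v;
}

int main(void)
{
    uint64_t mmfr0 = read_id_aa64mmfr0();
    unsigned fgt = (mmfr0 >> 56) & 0xf;   /* FGT field, bits [59:56] */
    printf("FEAT_FGT %s (ID_AA64MMFR0_EL1.FGT = %u)\n",
           fgt ? "implemented" : "not implemented", fgt);
    return 0;
}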