target-arm queue: the big stuff here is the final part of
rth's patches for Cortex-A76 and Neoverse-N1 support;
also present are Gavin's NUMA series and a few other things.

thanks
-- PMM

The following changes since commit 554623226f800acf48a2ed568900c1c968ec9a8b:

  Merge tag 'qemu-sparc-20220508' of https://github.com/mcayland/qemu into staging (2022-05-08 17:03:26 -0500)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220509

for you to fetch changes up to ae9141d4a3265553503bf07d3574b40f84615a34:

  hw/acpi/aml-build: Use existing CPU topology to build PPTT table (2022-05-09 11:47:55 +0100)

----------------------------------------------------------------
target-arm queue:
 * MAINTAINERS/.mailmap: update email for Leif Lindholm
 * hw/arm: add version information to sbsa-ref machine DT
 * Enable new features for -cpu max:
   FEAT_Debugv8p2, FEAT_Debugv8p4, FEAT_RAS (minimal version only),
   FEAT_IESB, FEAT_CSV2, FEAT_CSV2_2, FEAT_CSV3, FEAT_DGH
 * Emulate Cortex-A76
 * Emulate Neoverse-N1
 * Fix the virt board default NUMA topology

----------------------------------------------------------------
Gavin Shan (6):
      qapi/machine.json: Add cluster-id
      qtest/numa-test: Specify CPU topology in aarch64_numa_cpu()
      hw/arm/virt: Consider SMP configuration in CPU topology
      qtest/numa-test: Correct CPU and NUMA association in aarch64_numa_cpu()
      hw/arm/virt: Fix CPU's default NUMA node ID
      hw/acpi/aml-build: Use existing CPU topology to build PPTT table

Leif Lindholm (2):
      MAINTAINERS/.mailmap: update email for Leif Lindholm
      hw/arm: add versioning to sbsa-ref machine DT

Richard Henderson (24):
      target/arm: Handle cpreg registration for missing EL
      target/arm: Drop EL3 no EL2 fallbacks
      target/arm: Merge zcr reginfo
      target/arm: Adjust definition of CONTEXTIDR_EL2
      target/arm: Move cortex impdef sysregs to cpu_tcg.c
      target/arm: Update qemu-system-arm -cpu max to cortex-a57
      target/arm: Set ID_DFR0.PerfMon for qemu-system-arm -cpu max
      target/arm: Split out aa32_max_features
      target/arm: Annotate arm_max_initfn with FEAT identifiers
      target/arm: Use field names for manipulating EL2 and EL3 modes
      target/arm: Enable FEAT_Debugv8p2 for -cpu max
      target/arm: Enable FEAT_Debugv8p4 for -cpu max
      target/arm: Add minimal RAS registers
      target/arm: Enable SCR and HCR bits for RAS
      target/arm: Implement virtual SError exceptions
      target/arm: Implement ESB instruction
      target/arm: Enable FEAT_RAS for -cpu max
      target/arm: Enable FEAT_IESB for -cpu max
      target/arm: Enable FEAT_CSV2 for -cpu max
      target/arm: Enable FEAT_CSV2_2 for -cpu max
      target/arm: Enable FEAT_CSV3 for -cpu max
      target/arm: Enable FEAT_DGH for -cpu max
      target/arm: Define cortex-a76
      target/arm: Define neoverse-n1

 docs/system/arm/emulation.rst | 10 +
 docs/system/arm/virt.rst | 2 +
 qapi/machine.json | 6 +-
 target/arm/cpregs.h | 11 +
 target/arm/cpu.h | 23 ++
 target/arm/helper.h | 1 +
 target/arm/internals.h | 16 ++
 target/arm/syndrome.h | 5 +
 target/arm/a32.decode | 16 +-
 target/arm/t32.decode | 18 +-
 hw/acpi/aml-build.c | 111 ++++----
 hw/arm/sbsa-ref.c | 16 ++
 hw/arm/virt.c | 21 +-
 hw/core/machine-hmp-cmds.c | 4 +
 hw/core/machine.c | 16 ++
 target/arm/cpu.c | 66 ++++-
 target/arm/cpu64.c | 353 ++++++++++++++-----------
 target/arm/cpu_tcg.c | 227 +++++++++++-----
 target/arm/helper.c | 600 +++++++++++++++++++++++++-----------------
 target/arm/op_helper.c | 43 +++
 target/arm/translate-a64.c | 18 ++
 target/arm/translate.c | 23 ++
 tests/qtest/numa-test.c | 19 +-
 .mailmap | 3 +-
 MAINTAINERS | 2 +-
 25 files changed, 1068 insertions(+), 562 deletions(-)

The following changes since commit 003ba52a8b327180e284630b289c6ece5a3e08b9:

  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging (2023-02-16 11:16:39 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230216

for you to fetch changes up to caf01d6a435d9f4a95aeae2f9fc6cb8b889b1fb8:

  tests/qtest: Restrict tpm-tis-devices-{swtpm}-test to CONFIG_TCG (2023-02-16 16:28:53 +0000)

----------------------------------------------------------------
target-arm queue:
 * Some mostly M-profile-related code cleanups
 * avocado: Retire the boot_linux.py AArch64 TCG tests
 * hw/arm/smmuv3: Add GBPA register
 * arm/virt: don't try to spell out the accelerator
 * hw/arm: Attach PSPI module to NPCM7XX SoC
 * Some cleanup/refactoring patches aiming towards
   allowing building Arm targets without CONFIG_TCG

----------------------------------------------------------------
Alex Bennée (1):
      tests/avocado: retire the Aarch64 TCG tests from boot_linux.py

Claudio Fontana (3):
      target/arm: rename handle_semihosting to tcg_handle_semihosting
      target/arm: wrap psci call with tcg_enabled
      target/arm: wrap call to aarch64_sve_change_el in tcg_enabled()

Cornelia Huck (1):
      arm/virt: don't try to spell out the accelerator

Fabiano Rosas (7):
      target/arm: Move PC alignment check
      target/arm: Move cpregs code out of cpu.h
      tests/avocado: Skip tests that require a missing accelerator
      tests/avocado: Tag TCG tests with accel:tcg
      target/arm: Use "max" as default cpu for the virt machine with KVM
      tests/qtest: arm-cpu-features: Match tests to required accelerators
      tests/qtest: Restrict tpm-tis-devices-{swtpm}-test to CONFIG_TCG

Hao Wu (3):
      MAINTAINERS: Add myself to maintainers and remove Havard
      hw/ssi: Add Nuvoton PSPI Module
      hw/arm: Attach PSPI module to NPCM7XX SoC

Jean-Philippe Brucker (2):
      hw/arm/smmu-common: Support 64-bit addresses
      hw/arm/smmu-common: Fix TTB1 handling

Mostafa Saleh (1):
      hw/arm/smmuv3: Add GBPA register

Philippe Mathieu-Daudé (12):
      hw/intc/armv7m_nvic: Use OBJECT_DECLARE_SIMPLE_TYPE() macro
      target/arm: Simplify arm_v7m_mmu_idx_for_secstate() for user emulation
      target/arm: Reduce arm_v7m_mmu_idx_[all/for_secstate_and_priv]() scope
      target/arm: Constify ID_PFR1 on user emulation
      target/arm: Convert CPUARMState::eabi to boolean
      target/arm: Avoid resetting CPUARMState::eabi field
      target/arm: Restrict CPUARMState::gicv3state to sysemu
      target/arm: Restrict CPUARMState::arm_boot_info to sysemu
      target/arm: Restrict CPUARMState::nvic to sysemu
      target/arm: Store CPUARMState::nvic as NVICState*
      target/arm: Declare CPU <-> NVIC helpers in 'hw/intc/armv7m_nvic.h'
      hw/arm: Add missing XLNX_ZYNQMP_ARM -> USB_DWC3 Kconfig dependency

 MAINTAINERS | 8 +-
 docs/system/arm/nuvoton.rst | 2 +-
 hw/arm/smmuv3-internal.h | 7 +
 include/hw/arm/npcm7xx.h | 2 +
 include/hw/arm/smmu-common.h | 2 -
 include/hw/arm/smmuv3.h | 1 +
 include/hw/intc/armv7m_nvic.h | 128 +++++++++++++++++-
 include/hw/ssi/npcm_pspi.h | 53 ++++++++
 linux-user/user-internals.h | 2 +-
 target/arm/cpregs.h | 98 ++++++++++++++
 target/arm/cpu.h | 228 ++-------------------------------
 target/arm/internals.h | 14 --
 hw/arm/npcm7xx.c | 25 +++-
 hw/arm/smmu-common.c | 4 +-
 hw/arm/smmuv3.c | 43 ++++++-
 hw/arm/virt.c | 10 +-
 hw/intc/armv7m_nvic.c | 38 ++----
 hw/ssi/npcm_pspi.c | 221 ++++++++++++++++++++++++++++++++
 linux-user/arm/cpu_loop.c | 4 +-
 target/arm/cpu.c | 5 +-
 target/arm/cpu_tcg.c | 3 +
 target/arm/helper.c | 31 +++--
 target/arm/m_helper.c | 86 +++++++------
 target/arm/machine.c | 18 +--
 tests/qtest/arm-cpu-features.c | 28 ++--
 hw/arm/Kconfig | 1 +
 hw/ssi/meson.build | 2 +-
 hw/ssi/trace-events | 5 +
 tests/avocado/avocado_qemu/__init__.py | 4 +
 tests/avocado/boot_linux.py | 48 ++-----
 tests/avocado/boot_linux_console.py | 1 +
 tests/avocado/machine_aarch64_virt.py | 63 ++++++++-
 tests/avocado/reverse_debugging.py | 8 ++
 tests/qtest/meson.build | 4 +-
 34 files changed, 798 insertions(+), 399 deletions(-)
 create mode 100644 include/hw/ssi/npcm_pspi.h
 create mode 100644 hw/ssi/npcm_pspi.c

From: Philippe Mathieu-Daudé <philmd@linaro.org>

Manually convert to OBJECT_DECLARE_SIMPLE_TYPE() macro,
similarly to automatic conversion from commit 8063396bf3
("Use OBJECT_DECLARE_SIMPLE_TYPE when possible").

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-2-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/intc/armv7m_nvic.h | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

From: Gavin Shan <gshan@redhat.com>

In aarch64_numa_cpu(), the CPU and NUMA association is something
like below. Two threads in the same core/cluster/socket are
associated with two individual NUMA nodes, which is unrealistic,
as Igor Mammedov pointed out. We don't expect the association to
cross the NUMA-to-socket boundary, which matches the real world.

  NUMA-node  socket  cluster  core  thread
  -----------------------------------------
      0         0       0      0      0
      1         0       0      0      1

This corrects the topology for CPUs and their association with
NUMA nodes. After this patch is applied, the CPU and NUMA
association becomes something like below, which looks realistic.
Besides, socket/cluster/core/thread IDs are all checked when
the NUMA node IDs are verified. It helps to check whether the CPU
topology is properly populated.

  NUMA-node  socket  cluster  core  thread
  -----------------------------------------
      0         1       0      0      0
      1         0       0      0      0

Suggested-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Gavin Shan <gshan@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20220503140304.855514-5-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/numa-test.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

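For readers picking through the interleaved hunks below, this is what the test
ends up asserting after Gavin's change, condensed from the added lines (the new
qdict reads of socket-id/cluster-id/core-id that feed the comparison are
omitted here for brevity):

    cli = make_cli(data, "-machine "
                   "smp.cpus=2,smp.sockets=2,smp.clusters=1,smp.cores=1,smp.threads=1 "
                   "-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
                   "-numa cpu,node-id=0,socket-id=1,cluster-id=0,core-id=0,thread-id=0 "
                   "-numa cpu,node-id=1,socket-id=0,cluster-id=0,core-id=0,thread-id=0");
    ...
    if (socket == 0 && cluster == 0 && core == 0 && thread == 0) {
        g_assert_cmpint(node, ==, 1);
    } else if (socket == 1 && cluster == 0 && core == 0 && thread == 0) {
        g_assert_cmpint(node, ==, 0);
    } else {
        g_assert(false);
    }
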
diff --git a/tests/qtest/numa-test.c b/tests/qtest/numa-test.c
15
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
36
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
37
--- a/tests/qtest/numa-test.c
17
--- a/include/hw/intc/armv7m_nvic.h
38
+++ b/tests/qtest/numa-test.c
18
+++ b/include/hw/intc/armv7m_nvic.h
39
@@ -XXX,XX +XXX,XX @@ static void aarch64_numa_cpu(const void *data)
19
@@ -XXX,XX +XXX,XX @@
40
g_autofree char *cli = NULL;
20
#include "qom/object.h"
41
21
42
cli = make_cli(data, "-machine "
22
#define TYPE_NVIC "armv7m_nvic"
43
- "smp.cpus=2,smp.sockets=1,smp.clusters=1,smp.cores=1,smp.threads=2 "
23
-
44
+ "smp.cpus=2,smp.sockets=2,smp.clusters=1,smp.cores=1,smp.threads=1 "
24
-typedef struct NVICState NVICState;
45
"-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
25
-DECLARE_INSTANCE_CHECKER(NVICState, NVIC,
46
- "-numa cpu,node-id=1,thread-id=0 "
26
- TYPE_NVIC)
47
- "-numa cpu,node-id=0,thread-id=1");
27
+OBJECT_DECLARE_SIMPLE_TYPE(NVICState, NVIC)
48
+ "-numa cpu,node-id=0,socket-id=1,cluster-id=0,core-id=0,thread-id=0 "
28
49
+ "-numa cpu,node-id=1,socket-id=0,cluster-id=0,core-id=0,thread-id=0");
29
/* Highest permitted number of exceptions (architectural limit) */
50
qts = qtest_init(cli);
30
#define NVIC_MAX_VECTORS 512
51
cpus = get_cpus(qts, &resp);
52
g_assert(cpus);
53
54
while ((e = qlist_pop(cpus))) {
55
QDict *cpu, *props;
56
- int64_t thread, node;
57
+ int64_t socket, cluster, core, thread, node;
58
59
cpu = qobject_to(QDict, e);
60
g_assert(qdict_haskey(cpu, "props"));
61
@@ -XXX,XX +XXX,XX @@ static void aarch64_numa_cpu(const void *data)
62
63
g_assert(qdict_haskey(props, "node-id"));
64
node = qdict_get_int(props, "node-id");
65
+ g_assert(qdict_haskey(props, "socket-id"));
66
+ socket = qdict_get_int(props, "socket-id");
67
+ g_assert(qdict_haskey(props, "cluster-id"));
68
+ cluster = qdict_get_int(props, "cluster-id");
69
+ g_assert(qdict_haskey(props, "core-id"));
70
+ core = qdict_get_int(props, "core-id");
71
g_assert(qdict_haskey(props, "thread-id"));
72
thread = qdict_get_int(props, "thread-id");
73
74
- if (thread == 0) {
75
+ if (socket == 0 && cluster == 0 && core == 0 && thread == 0) {
76
g_assert_cmpint(node, ==, 1);
77
- } else if (thread == 1) {
78
+ } else if (socket == 1 && cluster == 0 && core == 0 && thread == 0) {
79
g_assert_cmpint(node, ==, 0);
80
} else {
81
g_assert(false);
82
--
31
--
83
2.25.1
32
2.34.1
33
34
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-3-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/m_helper.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Enable the n1 for virt and sbsa board use.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/virt.rst | 1 +
 hw/arm/sbsa-ref.c | 1 +
 hw/arm/virt.c | 1 +
 target/arm/cpu64.c | 66 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 69 insertions(+)

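A quick orientation for the m_helper.c hunks below: after Philippe's patch the
user-emulation build gets a trivial definition of arm_v7m_mmu_idx_for_secstate(),
reproduced here from the added lines (the enclosing CONFIG_USER_ONLY conditional
is pre-existing code that the hunk does not show):

    ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
    {
        return ARMMMUIdx_MUser;
    }
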
diff --git a/docs/system/arm/virt.rst b/docs/system/arm/virt.rst
12
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
17
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
18
--- a/docs/system/arm/virt.rst
14
--- a/target/arm/m_helper.c
19
+++ b/docs/system/arm/virt.rst
15
+++ b/target/arm/m_helper.c
20
@@ -XXX,XX +XXX,XX @@ Supported guest CPU types:
16
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
21
- ``cortex-a76`` (64-bit)
17
return 0;
22
- ``a64fx`` (64-bit)
23
- ``host`` (with KVM only)
24
+- ``neoverse-n1`` (64-bit)
25
- ``max`` (same as ``host`` for KVM; best possible emulation with TCG)
26
27
Note that the default is ``cortex-a15``, so for an AArch64 guest you must
28
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/hw/arm/sbsa-ref.c
31
+++ b/hw/arm/sbsa-ref.c
32
@@ -XXX,XX +XXX,XX @@ static const char * const valid_cpus[] = {
33
ARM_CPU_TYPE_NAME("cortex-a57"),
34
ARM_CPU_TYPE_NAME("cortex-a72"),
35
ARM_CPU_TYPE_NAME("cortex-a76"),
36
+ ARM_CPU_TYPE_NAME("neoverse-n1"),
37
ARM_CPU_TYPE_NAME("max"),
38
};
39
40
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/hw/arm/virt.c
43
+++ b/hw/arm/virt.c
44
@@ -XXX,XX +XXX,XX @@ static const char *valid_cpus[] = {
45
ARM_CPU_TYPE_NAME("cortex-a72"),
46
ARM_CPU_TYPE_NAME("cortex-a76"),
47
ARM_CPU_TYPE_NAME("a64fx"),
48
+ ARM_CPU_TYPE_NAME("neoverse-n1"),
49
ARM_CPU_TYPE_NAME("host"),
50
ARM_CPU_TYPE_NAME("max"),
51
};
52
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/cpu64.c
55
+++ b/target/arm/cpu64.c
56
@@ -XXX,XX +XXX,XX @@ static void aarch64_a76_initfn(Object *obj)
57
cpu->isar.mvfr2 = 0x00000043;
58
}
18
}
59
19
60
+static void aarch64_neoverse_n1_initfn(Object *obj)
20
-#else
21
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
61
+{
22
+{
62
+ ARMCPU *cpu = ARM_CPU(obj);
23
+ return ARMMMUIdx_MUser;
63
+
64
+ cpu->dtb_compatible = "arm,neoverse-n1";
65
+ set_feature(&cpu->env, ARM_FEATURE_V8);
66
+ set_feature(&cpu->env, ARM_FEATURE_NEON);
67
+ set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
68
+ set_feature(&cpu->env, ARM_FEATURE_AARCH64);
69
+ set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
70
+ set_feature(&cpu->env, ARM_FEATURE_EL2);
71
+ set_feature(&cpu->env, ARM_FEATURE_EL3);
72
+ set_feature(&cpu->env, ARM_FEATURE_PMU);
73
+
74
+ /* Ordered by B2.4 AArch64 registers by functional group */
75
+ cpu->clidr = 0x82000023;
76
+ cpu->ctr = 0x8444c004;
77
+ cpu->dcz_blocksize = 4;
78
+ cpu->isar.id_aa64dfr0 = 0x0000000110305408ull;
79
+ cpu->isar.id_aa64isar0 = 0x0000100010211120ull;
80
+ cpu->isar.id_aa64isar1 = 0x0000000000100001ull;
81
+ cpu->isar.id_aa64mmfr0 = 0x0000000000101125ull;
82
+ cpu->isar.id_aa64mmfr1 = 0x0000000010212122ull;
83
+ cpu->isar.id_aa64mmfr2 = 0x0000000000001011ull;
84
+ cpu->isar.id_aa64pfr0 = 0x1100000010111112ull; /* GIC filled in later */
85
+ cpu->isar.id_aa64pfr1 = 0x0000000000000020ull;
86
+ cpu->id_afr0 = 0x00000000;
87
+ cpu->isar.id_dfr0 = 0x04010088;
88
+ cpu->isar.id_isar0 = 0x02101110;
89
+ cpu->isar.id_isar1 = 0x13112111;
90
+ cpu->isar.id_isar2 = 0x21232042;
91
+ cpu->isar.id_isar3 = 0x01112131;
92
+ cpu->isar.id_isar4 = 0x00010142;
93
+ cpu->isar.id_isar5 = 0x01011121;
94
+ cpu->isar.id_isar6 = 0x00000010;
95
+ cpu->isar.id_mmfr0 = 0x10201105;
96
+ cpu->isar.id_mmfr1 = 0x40000000;
97
+ cpu->isar.id_mmfr2 = 0x01260000;
98
+ cpu->isar.id_mmfr3 = 0x02122211;
99
+ cpu->isar.id_mmfr4 = 0x00021110;
100
+ cpu->isar.id_pfr0 = 0x10010131;
101
+ cpu->isar.id_pfr1 = 0x00010000; /* GIC filled in later */
102
+ cpu->isar.id_pfr2 = 0x00000011;
103
+ cpu->midr = 0x414fd0c1; /* r4p1 */
104
+ cpu->revidr = 0;
105
+
106
+ /* From B2.23 CCSIDR_EL1 */
107
+ cpu->ccsidr[0] = 0x701fe01a; /* 64KB L1 dcache */
108
+ cpu->ccsidr[1] = 0x201fe01a; /* 64KB L1 icache */
109
+ cpu->ccsidr[2] = 0x70ffe03a; /* 1MB L2 cache */
110
+
111
+ /* From B2.98 SCTLR_EL3 */
112
+ cpu->reset_sctlr = 0x30c50838;
113
+
114
+ /* From B4.23 ICH_VTR_EL2 */
115
+ cpu->gic_num_lrs = 4;
116
+ cpu->gic_vpribits = 5;
117
+ cpu->gic_vprebits = 5;
118
+
119
+ /* From B5.1 AdvSIMD AArch64 register summary */
120
+ cpu->isar.mvfr0 = 0x10110222;
121
+ cpu->isar.mvfr1 = 0x13211111;
122
+ cpu->isar.mvfr2 = 0x00000043;
123
+}
24
+}
124
+
25
+
125
void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
26
+#else /* !CONFIG_USER_ONLY */
27
28
/*
29
* What kind of stack write are we doing? This affects how exceptions
30
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
31
return tt_resp;
32
}
33
34
-#endif /* !CONFIG_USER_ONLY */
35
-
36
ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
37
bool secstate, bool priv, bool negpri)
126
{
38
{
127
/*
39
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
128
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo aarch64_cpus[] = {
40
129
{ .name = "cortex-a72", .initfn = aarch64_a72_initfn },
41
return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
130
{ .name = "cortex-a76", .initfn = aarch64_a76_initfn },
42
}
131
{ .name = "a64fx", .initfn = aarch64_a64fx_initfn },
43
+
132
+ { .name = "neoverse-n1", .initfn = aarch64_neoverse_n1_initfn },
44
+#endif /* !CONFIG_USER_ONLY */
133
{ .name = "max", .initfn = aarch64_max_initfn },
134
#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
135
{ .name = "host", .initfn = aarch64_host_initfn },
136
--
45
--
137
2.25.1
46
2.34.1
47
48
From: Philippe Mathieu-Daudé <philmd@linaro.org>

arm_v7m_mmu_idx_all() and arm_v7m_mmu_idx_for_secstate_and_priv()
are only used for system emulation in m_helper.c.
Move the definitions there to avoid prototype forward declarations.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-4-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 14 --------
 target/arm/m_helper.c | 74 +++++++++++++++++++++---------------------
 2 files changed, 37 insertions(+), 51 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Share the code to set AArch32 max features so that we no
longer have code drift between qemu{-system,}-{arm,aarch64}.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 2 +
 target/arm/cpu64.c | 50 +-----------------
 target/arm/cpu_tcg.c | 114 ++++++++++++++++++++++-------------------
 3 files changed, 65 insertions(+), 101 deletions(-)

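Condensed from the hunks below, the shape of Richard's refactor: a single
aa32_max_features() helper now holds the AArch32 'max' ID-register settings,
and both the AArch32 and AArch64 'max' CPU init functions call it instead of
carrying diverging copies. Roughly:

    /* target/arm/internals.h */
    void aa32_max_features(ARMCPU *cpu);

    /* target/arm/cpu64.c, aarch64_max_initfn() */
    /* Replicate the same data to the 32-bit id registers. */
    aa32_max_features(cpu);

    /* target/arm/cpu_tcg.c, arm_max_initfn() */
    aa32_max_features(cpu);
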
diff --git a/target/arm/internals.h b/target/arm/internals.h
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
17
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/internals.h
18
--- a/target/arm/internals.h
19
+++ b/target/arm/internals.h
19
+++ b/target/arm/internals.h
20
@@ -XXX,XX +XXX,XX @@ static inline void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu) { }
20
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx core_to_aa64_mmu_idx(int mmu_idx)
21
void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu);
21
22
#endif
22
int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx);
23
23
24
+void aa32_max_features(ARMCPU *cpu);
24
-/*
25
- * Return the MMU index for a v7M CPU with all relevant information
26
- * manually specified.
27
- */
28
-ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
29
- bool secstate, bool priv, bool negpri);
30
-
31
-/*
32
- * Return the MMU index for a v7M CPU in the specified security and
33
- * privilege state.
34
- */
35
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
36
- bool secstate, bool priv);
37
-
38
/* Return the MMU index for a v7M CPU in the specified security state */
39
ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate);
40
41
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/m_helper.c
44
+++ b/target/arm/m_helper.c
45
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
46
47
#else /* !CONFIG_USER_ONLY */
48
49
+static ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
50
+ bool secstate, bool priv, bool negpri)
51
+{
52
+ ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
25
+
53
+
26
#endif
54
+ if (priv) {
27
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
55
+ mmu_idx |= ARM_MMU_IDX_M_PRIV;
28
index XXXXXXX..XXXXXXX 100644
56
+ }
29
--- a/target/arm/cpu64.c
30
+++ b/target/arm/cpu64.c
31
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
32
{
33
ARMCPU *cpu = ARM_CPU(obj);
34
uint64_t t;
35
- uint32_t u;
36
37
if (kvm_enabled() || hvf_enabled()) {
38
/* With KVM or HVF, '-cpu max' is identical to '-cpu host' */
39
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
40
t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1);
41
cpu->isar.id_aa64zfr0 = t;
42
43
- /* Replicate the same data to the 32-bit id registers. */
44
- u = cpu->isar.id_isar5;
45
- u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
46
- u = FIELD_DP32(u, ID_ISAR5, SHA1, 1);
47
- u = FIELD_DP32(u, ID_ISAR5, SHA2, 1);
48
- u = FIELD_DP32(u, ID_ISAR5, CRC32, 1);
49
- u = FIELD_DP32(u, ID_ISAR5, RDM, 1);
50
- u = FIELD_DP32(u, ID_ISAR5, VCMA, 1);
51
- cpu->isar.id_isar5 = u;
52
-
53
- u = cpu->isar.id_isar6;
54
- u = FIELD_DP32(u, ID_ISAR6, JSCVT, 1);
55
- u = FIELD_DP32(u, ID_ISAR6, DP, 1);
56
- u = FIELD_DP32(u, ID_ISAR6, FHM, 1);
57
- u = FIELD_DP32(u, ID_ISAR6, SB, 1);
58
- u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
59
- u = FIELD_DP32(u, ID_ISAR6, BF16, 1);
60
- u = FIELD_DP32(u, ID_ISAR6, I8MM, 1);
61
- cpu->isar.id_isar6 = u;
62
-
63
- u = cpu->isar.id_pfr0;
64
- u = FIELD_DP32(u, ID_PFR0, DIT, 1);
65
- cpu->isar.id_pfr0 = u;
66
-
67
- u = cpu->isar.id_pfr2;
68
- u = FIELD_DP32(u, ID_PFR2, SSBS, 1);
69
- cpu->isar.id_pfr2 = u;
70
-
71
- u = cpu->isar.id_mmfr3;
72
- u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */
73
- cpu->isar.id_mmfr3 = u;
74
-
75
- u = cpu->isar.id_mmfr4;
76
- u = FIELD_DP32(u, ID_MMFR4, HPDS, 1); /* AA32HPD */
77
- u = FIELD_DP32(u, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
78
- u = FIELD_DP32(u, ID_MMFR4, CNP, 1); /* TTCNP */
79
- u = FIELD_DP32(u, ID_MMFR4, XNX, 1); /* TTS2UXN */
80
- cpu->isar.id_mmfr4 = u;
81
-
82
t = cpu->isar.id_aa64dfr0;
83
t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */
84
cpu->isar.id_aa64dfr0 = t;
85
86
- u = cpu->isar.id_dfr0;
87
- u = FIELD_DP32(u, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
88
- cpu->isar.id_dfr0 = u;
89
-
90
- u = cpu->isar.mvfr1;
91
- u = FIELD_DP32(u, MVFR1, FPHP, 3); /* v8.2-FP16 */
92
- u = FIELD_DP32(u, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
93
- cpu->isar.mvfr1 = u;
94
+ /* Replicate the same data to the 32-bit id registers. */
95
+ aa32_max_features(cpu);
96
97
#ifdef CONFIG_USER_ONLY
98
/*
99
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
100
index XXXXXXX..XXXXXXX 100644
101
--- a/target/arm/cpu_tcg.c
102
+++ b/target/arm/cpu_tcg.c
103
@@ -XXX,XX +XXX,XX @@
104
#endif
105
#include "cpregs.h"
106
107
+
57
+
108
+/* Share AArch32 -cpu max features with AArch64. */
58
+ if (negpri) {
109
+void aa32_max_features(ARMCPU *cpu)
59
+ mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
110
+{
60
+ }
111
+ uint32_t t;
112
+
61
+
113
+ /* Add additional features supported by QEMU */
62
+ if (secstate) {
114
+ t = cpu->isar.id_isar5;
63
+ mmu_idx |= ARM_MMU_IDX_M_S;
115
+ t = FIELD_DP32(t, ID_ISAR5, AES, 2);
64
+ }
116
+ t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
117
+ t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
118
+ t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
119
+ t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
120
+ t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
121
+ cpu->isar.id_isar5 = t;
122
+
65
+
123
+ t = cpu->isar.id_isar6;
66
+ return mmu_idx;
124
+ t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
125
+ t = FIELD_DP32(t, ID_ISAR6, DP, 1);
126
+ t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
127
+ t = FIELD_DP32(t, ID_ISAR6, SB, 1);
128
+ t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
129
+ t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
130
+ t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
131
+ cpu->isar.id_isar6 = t;
132
+
133
+ t = cpu->isar.mvfr1;
134
+ t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
135
+ t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
136
+ cpu->isar.mvfr1 = t;
137
+
138
+ t = cpu->isar.mvfr2;
139
+ t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
140
+ t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
141
+ cpu->isar.mvfr2 = t;
142
+
143
+ t = cpu->isar.id_mmfr3;
144
+ t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
145
+ cpu->isar.id_mmfr3 = t;
146
+
147
+ t = cpu->isar.id_mmfr4;
148
+ t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
149
+ t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
150
+ t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
151
+ t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
152
+ cpu->isar.id_mmfr4 = t;
153
+
154
+ t = cpu->isar.id_pfr0;
155
+ t = FIELD_DP32(t, ID_PFR0, DIT, 1);
156
+ cpu->isar.id_pfr0 = t;
157
+
158
+ t = cpu->isar.id_pfr2;
159
+ t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
160
+ cpu->isar.id_pfr2 = t;
161
+
162
+ t = cpu->isar.id_dfr0;
163
+ t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
164
+ cpu->isar.id_dfr0 = t;
165
+}
67
+}
166
+
68
+
167
#ifndef CONFIG_USER_ONLY
69
+static ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
168
static uint64_t l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
70
+ bool secstate, bool priv)
169
{
71
+{
170
@@ -XXX,XX +XXX,XX @@ static void arm_v7m_class_init(ObjectClass *oc, void *data)
72
+ bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
171
static void arm_max_initfn(Object *obj)
73
+
172
{
74
+ return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
173
ARMCPU *cpu = ARM_CPU(obj);
75
+}
174
- uint32_t t;
76
+
175
77
+/* Return the MMU index for a v7M CPU in the specified security state */
176
/* aarch64_a57_initfn, advertising none of the aarch64 features */
78
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
177
cpu->dtb_compatible = "arm,cortex-a57";
79
+{
178
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
80
+ bool priv = arm_v7m_is_handler_mode(env) ||
179
cpu->ccsidr[2] = 0x70ffe07a; /* 2048KB L2 cache */
81
+ !(env->v7m.control[secstate] & 1);
180
define_cortex_a72_a57_a53_cp_reginfo(cpu);
82
+
181
83
+ return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
182
- /* Add additional features supported by QEMU */
84
+}
183
- t = cpu->isar.id_isar5;
85
+
184
- t = FIELD_DP32(t, ID_ISAR5, AES, 2);
86
/*
185
- t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
87
* What kind of stack write are we doing? This affects how exceptions
186
- t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
88
* generated during the stacking are treated.
187
- t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
89
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
188
- t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
90
return tt_resp;
189
- t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
91
}
190
- cpu->isar.id_isar5 = t;
92
93
-ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
94
- bool secstate, bool priv, bool negpri)
95
-{
96
- ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
191
-
97
-
192
- t = cpu->isar.id_isar6;
98
- if (priv) {
193
- t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
99
- mmu_idx |= ARM_MMU_IDX_M_PRIV;
194
- t = FIELD_DP32(t, ID_ISAR6, DP, 1);
100
- }
195
- t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
196
- t = FIELD_DP32(t, ID_ISAR6, SB, 1);
197
- t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
198
- t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
199
- t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
200
- cpu->isar.id_isar6 = t;
201
-
101
-
202
- t = cpu->isar.mvfr1;
102
- if (negpri) {
203
- t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
103
- mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
204
- t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
104
- }
205
- cpu->isar.mvfr1 = t;
206
-
105
-
207
- t = cpu->isar.mvfr2;
106
- if (secstate) {
208
- t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
107
- mmu_idx |= ARM_MMU_IDX_M_S;
209
- t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
108
- }
210
- cpu->isar.mvfr2 = t;
211
-
109
-
212
- t = cpu->isar.id_mmfr3;
110
- return mmu_idx;
213
- t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
111
-}
214
- cpu->isar.id_mmfr3 = t;
215
-
112
-
216
- t = cpu->isar.id_mmfr4;
113
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
217
- t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
114
- bool secstate, bool priv)
218
- t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
115
-{
219
- t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
116
- bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
220
- t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
221
- cpu->isar.id_mmfr4 = t;
222
-
117
-
223
- t = cpu->isar.id_pfr0;
118
- return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
224
- t = FIELD_DP32(t, ID_PFR0, DIT, 1);
119
-}
225
- cpu->isar.id_pfr0 = t;
226
-
120
-
227
- t = cpu->isar.id_pfr2;
121
-/* Return the MMU index for a v7M CPU in the specified security state */
228
- t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
122
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
229
- cpu->isar.id_pfr2 = t;
123
-{
124
- bool priv = arm_v7m_is_handler_mode(env) ||
125
- !(env->v7m.control[secstate] & 1);
230
-
126
-
231
- t = cpu->isar.id_dfr0;
127
- return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
232
- t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
128
-}
233
- cpu->isar.id_dfr0 = t;
129
-
234
+ aa32_max_features(cpu);
130
#endif /* !CONFIG_USER_ONLY */
235
236
#ifdef CONFIG_USER_ONLY
237
/*
238
--
131
--
239
2.25.1
132
2.34.1
133
134
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-5-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Drop el3_no_el2_cp_reginfo, el3_no_el2_v8_cp_reginfo, and the local
vpidr_regs definition, and rely on the squashing to ARM_CP_CONST
while registering for v8.

This is a behavior change for v7 cpus with Security Extensions and
without Virtualization Extensions, in that the virtualization cpregs
are now correctly not present. This would be a migration compatibility
break, except that we have an existing bug in which migration of 32-bit
cpus with Security Extensions enabled does not work.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 158 ++++----------------------------------------
 1 file changed, 13 insertions(+), 145 deletions(-)

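The heart of Richard's change, reproduced from the helper.c hunk below for
readability: the base EL2 cpregs are now registered either when EL2 exists or,
from v8 on, when EL3 exists without EL2 (relying on the ARM_CP_CONST squashing
mentioned above), replacing the old else-branch that registered the no_el2
reginfo lists:

    /*
     * Register the base EL2 cpregs.
     * Pre v8, these registers are implemented only as part of the
     * Virtualization Extensions (EL2 present).  Beginning with v8,
     * if EL2 is missing but EL3 is enabled, mostly these become
     * RES0 from EL3, with some specific exceptions.
     */
    if (arm_feature(env, ARM_FEATURE_EL2)
        || (arm_feature(env, ARM_FEATURE_EL3)
            && arm_feature(env, ARM_FEATURE_V8))) {
        uint64_t vmpidr_def = mpidr_read_val(env);
        ...
    }
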
diff --git a/target/arm/helper.c b/target/arm/helper.c
11
diff --git a/target/arm/helper.c b/target/arm/helper.c
22
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/helper.c
13
--- a/target/arm/helper.c
24
+++ b/target/arm/helper.c
14
+++ b/target/arm/helper.c
25
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
15
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
26
.fieldoffset = offsetoflow32(CPUARMState, cp15.mdcr_el3) },
16
}
27
};
17
}
28
18
29
-/* Used to describe the behaviour of EL2 regs when EL2 does not exist. */
19
+#ifndef CONFIG_USER_ONLY
30
-static const ARMCPRegInfo el3_no_el2_cp_reginfo[] = {
20
/*
31
- { .name = "VBAR_EL2", .state = ARM_CP_STATE_BOTH,
21
* We don't know until after realize whether there's a GICv3
32
- .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 0,
22
* attached, and that is what registers the gicv3 sysregs.
33
- .access = PL2_RW,
23
@@ -XXX,XX +XXX,XX @@ static uint64_t id_pfr1_read(CPUARMState *env, const ARMCPRegInfo *ri)
34
- .readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore },
24
return pfr1;
35
- { .name = "HCR_EL2", .state = ARM_CP_STATE_BOTH,
25
}
36
- .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
26
37
- .access = PL2_RW,
27
-#ifndef CONFIG_USER_ONLY
38
- .type = ARM_CP_CONST, .resetvalue = 0 },
28
static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
39
- { .name = "HACR_EL2", .state = ARM_CP_STATE_BOTH,
40
- .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 7,
41
- .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
42
- { .name = "ESR_EL2", .state = ARM_CP_STATE_BOTH,
43
- .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 2, .opc2 = 0,
44
- .access = PL2_RW,
45
- .type = ARM_CP_CONST, .resetvalue = 0 },
46
- { .name = "CPTR_EL2", .state = ARM_CP_STATE_BOTH,
47
- .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 2,
48
- .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
49
- { .name = "MAIR_EL2", .state = ARM_CP_STATE_BOTH,
50
- .opc0 = 3, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 0,
51
- .access = PL2_RW, .type = ARM_CP_CONST,
52
- .resetvalue = 0 },
53
- { .name = "HMAIR1", .state = ARM_CP_STATE_AA32,
54
- .cp = 15, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 1,
55
- .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
56
- { .name = "AMAIR_EL2", .state = ARM_CP_STATE_BOTH,
57
- .opc0 = 3, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 0,
58
- .access = PL2_RW, .type = ARM_CP_CONST,
59
- .resetvalue = 0 },
60
- { .name = "HAMAIR1", .state = ARM_CP_STATE_AA32,
61
- .cp = 15, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 1,
62
- .access = PL2_RW, .type = ARM_CP_CONST,
63
- .resetvalue = 0 },
64
- { .name = "AFSR0_EL2", .state = ARM_CP_STATE_BOTH,
65
- .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 0,
66
- .access = PL2_RW, .type = ARM_CP_CONST,
67
- .resetvalue = 0 },
68
- { .name = "AFSR1_EL2", .state = ARM_CP_STATE_BOTH,
69
- .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 1,
70
- .access = PL2_RW, .type = ARM_CP_CONST,
71
- .resetvalue = 0 },
72
- { .name = "TCR_EL2", .state = ARM_CP_STATE_BOTH,
73
- .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 2,
74
- .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
75
- { .name = "VTCR_EL2", .state = ARM_CP_STATE_BOTH,
76
- .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2,
77
- .access = PL2_RW, .accessfn = access_el3_aa32ns,
78
- .type = ARM_CP_CONST, .resetvalue = 0 },
79
- { .name = "VTTBR", .state = ARM_CP_STATE_AA32,
80
- .cp = 15, .opc1 = 6, .crm = 2,
81
- .access = PL2_RW, .accessfn = access_el3_aa32ns,
82
- .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
83
- { .name = "VTTBR_EL2", .state = ARM_CP_STATE_AA64,
84
- .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 0,
85
- .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
86
- { .name = "SCTLR_EL2", .state = ARM_CP_STATE_BOTH,
87
- .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 0,
88
- .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
89
- { .name = "TPIDR_EL2", .state = ARM_CP_STATE_BOTH,
90
- .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 2,
91
- .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
92
- { .name = "TTBR0_EL2", .state = ARM_CP_STATE_AA64,
93
- .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 0,
94
- .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
95
- { .name = "HTTBR", .cp = 15, .opc1 = 4, .crm = 2,
96
- .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST,
97
- .resetvalue = 0 },
98
- { .name = "CNTHCTL_EL2", .state = ARM_CP_STATE_BOTH,
99
- .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 1, .opc2 = 0,
100
- .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
101
- { .name = "CNTVOFF_EL2", .state = ARM_CP_STATE_AA64,
102
- .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 0, .opc2 = 3,
103
- .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
104
- { .name = "CNTVOFF", .cp = 15, .opc1 = 4, .crm = 14,
105
- .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST,
106
- .resetvalue = 0 },
107
- { .name = "CNTHP_CVAL_EL2", .state = ARM_CP_STATE_AA64,
108
- .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 2,
109
- .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
110
- { .name = "CNTHP_CVAL", .cp = 15, .opc1 = 6, .crm = 14,
111
- .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST,
112
- .resetvalue = 0 },
113
- { .name = "CNTHP_TVAL_EL2", .state = ARM_CP_STATE_BOTH,
114
- .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 0,
115
- .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
116
- { .name = "CNTHP_CTL_EL2", .state = ARM_CP_STATE_BOTH,
117
- .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 1,
118
- .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
119
- { .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH,
120
- .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 1,
121
- .access = PL2_RW, .accessfn = access_tda,
122
- .type = ARM_CP_CONST, .resetvalue = 0 },
123
- { .name = "HPFAR_EL2", .state = ARM_CP_STATE_BOTH,
124
- .opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 4,
125
- .access = PL2_RW, .accessfn = access_el3_aa32ns,
126
- .type = ARM_CP_CONST, .resetvalue = 0 },
127
- { .name = "HSTR_EL2", .state = ARM_CP_STATE_BOTH,
128
- .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 3,
129
- .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
130
- { .name = "FAR_EL2", .state = ARM_CP_STATE_BOTH,
131
- .opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 0,
132
- .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
133
- { .name = "HIFAR", .state = ARM_CP_STATE_AA32,
134
- .type = ARM_CP_CONST,
135
- .cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 2,
136
- .access = PL2_RW, .resetvalue = 0 },
137
-};
138
-
139
-/* Ditto, but for registers which exist in ARMv8 but not v7 */
140
-static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
141
- { .name = "HCR2", .state = ARM_CP_STATE_AA32,
142
- .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
143
- .access = PL2_RW,
144
- .type = ARM_CP_CONST, .resetvalue = 0 },
145
-};
146
-
147
static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
148
{
29
{
149
ARMCPU *cpu = env_archcpu(env);
30
ARMCPU *cpu = env_archcpu(env);
150
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
31
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
151
define_arm_cp_regs(cpu, v8_idregs);
32
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 1,
152
define_arm_cp_regs(cpu, v8_cp_reginfo);
33
.access = PL1_R, .type = ARM_CP_NO_RAW,
153
}
34
.accessfn = access_aa32_tid3,
154
- if (arm_feature(env, ARM_FEATURE_EL2)) {
35
+#ifdef CONFIG_USER_ONLY
155
+
36
+ .type = ARM_CP_CONST,
156
+ /*
37
+ .resetvalue = cpu->isar.id_pfr1,
157
+ * Register the base EL2 cpregs.
38
+#else
158
+ * Pre v8, these registers are implemented only as part of the
39
+ .type = ARM_CP_NO_RAW,
159
+ * Virtualization Extensions (EL2 present). Beginning with v8,
40
+ .accessfn = access_aa32_tid3,
160
+ * if EL2 is missing but EL3 is enabled, mostly these become
41
.readfn = id_pfr1_read,
161
+ * RES0 from EL3, with some specific exceptions.
42
- .writefn = arm_cp_write_ignore },
162
+ */
43
+ .writefn = arm_cp_write_ignore
163
+ if (arm_feature(env, ARM_FEATURE_EL2)
44
+#endif
164
+ || (arm_feature(env, ARM_FEATURE_EL3)
45
+ },
165
+ && arm_feature(env, ARM_FEATURE_V8))) {
46
{ .name = "ID_DFR0", .state = ARM_CP_STATE_BOTH,
166
uint64_t vmpidr_def = mpidr_read_val(env);
47
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 2,
167
ARMCPRegInfo vpidr_regs[] = {
48
.access = PL1_R, .type = ARM_CP_CONST,
168
{ .name = "VPIDR", .state = ARM_CP_STATE_AA32,
169
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
170
};
171
define_one_arm_cp_reg(cpu, &rvbar);
172
}
173
- } else {
174
- /* If EL2 is missing but higher ELs are enabled, we need to
175
- * register the no_el2 reginfos.
176
- */
177
- if (arm_feature(env, ARM_FEATURE_EL3)) {
178
- /* When EL3 exists but not EL2, VPIDR and VMPIDR take the value
179
- * of MIDR_EL1 and MPIDR_EL1.
180
- */
181
- ARMCPRegInfo vpidr_regs[] = {
182
- { .name = "VPIDR_EL2", .state = ARM_CP_STATE_BOTH,
183
- .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0,
184
- .access = PL2_RW, .accessfn = access_el3_aa32ns,
185
- .type = ARM_CP_CONST, .resetvalue = cpu->midr,
186
- .fieldoffset = offsetof(CPUARMState, cp15.vpidr_el2) },
187
- { .name = "VMPIDR_EL2", .state = ARM_CP_STATE_BOTH,
188
- .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5,
189
- .access = PL2_RW, .accessfn = access_el3_aa32ns,
190
- .type = ARM_CP_NO_RAW,
191
- .writefn = arm_cp_write_ignore, .readfn = mpidr_read },
192
- };
193
- define_arm_cp_regs(cpu, vpidr_regs);
194
- define_arm_cp_regs(cpu, el3_no_el2_cp_reginfo);
195
- if (arm_feature(env, ARM_FEATURE_V8)) {
196
- define_arm_cp_regs(cpu, el3_no_el2_v8_cp_reginfo);
197
- }
198
- }
199
}
200
+
201
+ /* Register the base EL3 cpregs. */
202
if (arm_feature(env, ARM_FEATURE_EL3)) {
203
define_arm_cp_regs(cpu, el3_cp_reginfo);
204
ARMCPRegInfo el3_regs[] = {
205
--
49
--
206
2.25.1
50
2.34.1
51
52
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-6-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-user/user-internals.h | 2 +-
 target/arm/cpu.h | 2 +-
 linux-user/arm/cpu_loop.c | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

This extension concerns cache speculation, which TCG does
not implement. Thus we can trivially enable this feature.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c | 1 +
 target/arm/cpu_tcg.c | 1 +
 3 files changed, 3 insertions(+)

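For orientation in the interleaved hunks below, advertising FEAT_CSV3 on
'-cpu max' amounts to two ID-register field updates, taken from the added
lines (t already holds the register value being edited in each function):

    /* target/arm/cpu64.c, aarch64_max_initfn() */
    t = FIELD_DP64(t, ID_AA64PFR0, CSV3, 1);   /* FEAT_CSV3 */

    /* target/arm/cpu_tcg.c, aa32_max_features() */
    t = FIELD_DP32(t, ID_PFR2, CSV3, 1);       /* FEAT_CSV3 */
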
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
14
diff --git a/linux-user/user-internals.h b/linux-user/user-internals.h
17
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
18
--- a/docs/system/arm/emulation.rst
16
--- a/linux-user/user-internals.h
19
+++ b/docs/system/arm/emulation.rst
17
+++ b/linux-user/user-internals.h
20
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
18
@@ -XXX,XX +XXX,XX @@ void print_termios(void *arg);
21
- FEAT_CSV2_1p1 (Cache speculation variant 2, version 1.1)
19
#ifdef TARGET_ARM
22
- FEAT_CSV2_1p2 (Cache speculation variant 2, version 1.2)
20
static inline int regpairs_aligned(CPUArchState *cpu_env, int num)
23
- FEAT_CSV2_2 (Cache speculation variant 2, version 2)
21
{
24
+- FEAT_CSV3 (Cache speculation variant 3)
22
- return cpu_env->eabi == 1;
25
- FEAT_DIT (Data Independent Timing instructions)
23
+ return cpu_env->eabi;
26
- FEAT_DPB (DC CVAP instruction)
24
}
27
- FEAT_Debugv8p2 (Debug changes for v8.2)
25
#elif defined(TARGET_MIPS) && defined(TARGET_ABI_MIPSO32)
28
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
26
static inline int regpairs_aligned(CPUArchState *cpu_env, int num) { return 1; }
27
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
29
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/cpu64.c
29
--- a/target/arm/cpu.h
31
+++ b/target/arm/cpu64.c
30
+++ b/target/arm/cpu.h
32
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
31
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
33
t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); /* FEAT_SEL2 */
32
34
t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); /* FEAT_DIT */
33
#if defined(CONFIG_USER_ONLY)
35
t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 2); /* FEAT_CSV2_2 */
34
/* For usermode syscall translation. */
36
+ t = FIELD_DP64(t, ID_AA64PFR0, CSV3, 1); /* FEAT_CSV3 */
35
- int eabi;
37
cpu->isar.id_aa64pfr0 = t;
36
+ bool eabi;
38
37
#endif
39
t = cpu->isar.id_aa64pfr1;
38
40
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
39
struct CPUBreakpoint *cpu_breakpoint[16];
40
diff --git a/linux-user/arm/cpu_loop.c b/linux-user/arm/cpu_loop.c
41
index XXXXXXX..XXXXXXX 100644
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/cpu_tcg.c
42
--- a/linux-user/arm/cpu_loop.c
43
+++ b/target/arm/cpu_tcg.c
43
+++ b/linux-user/arm/cpu_loop.c
44
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
44
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
45
cpu->isar.id_pfr0 = t;
45
break;
46
46
case EXCP_SWI:
47
t = cpu->isar.id_pfr2;
47
{
48
+ t = FIELD_DP32(t, ID_PFR2, CSV3, 1); /* FEAT_CSV3 */
48
- env->eabi = 1;
49
t = FIELD_DP32(t, ID_PFR2, SSBS, 1); /* FEAT_SSBS */
49
+ env->eabi = true;
50
cpu->isar.id_pfr2 = t;
50
/* system call */
51
if (env->thumb) {
52
/* Thumb is always EABI style with syscall number in r7 */
53
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
54
* > 0xfffff and are handled below as out-of-range.
55
*/
56
n ^= ARM_SYSCALL_BASE;
57
- env->eabi = 0;
58
+ env->eabi = false;
59
}
60
}
51
61
52
--
62
--
53
2.25.1
63
2.34.1
64
65
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Although the 'eabi' field is only used in user emulation, where
CPU reset doesn't occur, it doesn't belong in the area that gets reset.
Move it after 'end_reset_fields' for consistency.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-7-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

There is no branch prediction in TCG, therefore there is no
need to actually include the context number into the predictor.
All we need to do, then, is add the state for SCXTNUM_ELx.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 3 ++
 target/arm/cpu.h | 16 +++++++++
 target/arm/cpu.c | 5 +++
 target/arm/cpu64.c | 3 +-
 target/arm/helper.c | 61 ++++++++++++++++++++++++++++++++++-
 5 files changed, 86 insertions(+), 2 deletions(-)

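The feature test that gates all of the new SCXTNUM_ELx state is easy to lose
in the interleaved diff, so it is reproduced here from the cpu.h hunk below:
CSV2 >= 2 means FEAT_CSV2_2, CSV2 == 1 with CSV2_FRAC >= 2 means FEAT_CSV2_1p2,
and either one implies the SCXTNUM registers exist.

    static inline bool isar_feature_aa64_scxtnum(const ARMISARegisters *id)
    {
        int key = FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, CSV2);
        if (key >= 2) {
            return true;  /* FEAT_CSV2_2 */
        }
        if (key == 1) {
            key = FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, CSV2_FRAC);
            return key >= 2;  /* FEAT_CSV2_1p2 */
        }
        return false;
    }
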
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
20
index XXXXXXX..XXXXXXX 100644
21
--- a/docs/system/arm/emulation.rst
22
+++ b/docs/system/arm/emulation.rst
23
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
24
- FEAT_BF16 (AArch64 BFloat16 instructions)
25
- FEAT_BTI (Branch Target Identification)
26
- FEAT_CSV2 (Cache speculation variant 2)
27
+- FEAT_CSV2_1p1 (Cache speculation variant 2, version 1.1)
28
+- FEAT_CSV2_1p2 (Cache speculation variant 2, version 1.2)
29
+- FEAT_CSV2_2 (Cache speculation variant 2, version 2)
30
- FEAT_DIT (Data Independent Timing instructions)
31
- FEAT_DPB (DC CVAP instruction)
32
- FEAT_Debugv8p2 (Debug changes for v8.2)
33
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
34
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/cpu.h
17
--- a/target/arm/cpu.h
36
+++ b/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
37
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
19
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
38
ARMPACKey apdb;
20
ARMVectorReg zarray[ARM_MAX_VQ * 16];
39
ARMPACKey apga;
40
} keys;
41
+
42
+ uint64_t scxtnum_el[4];
43
#endif
21
#endif
44
22
45
#if defined(CONFIG_USER_ONLY)
23
-#if defined(CONFIG_USER_ONLY)
46
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
24
- /* For usermode syscall translation. */
47
#define SCTLR_WXN (1U << 19)
25
- bool eabi;
48
#define SCTLR_ST (1U << 20) /* up to ??, RAZ in v6 */
49
#define SCTLR_UWXN (1U << 20) /* v7 onward, AArch32 only */
50
+#define SCTLR_TSCXT (1U << 20) /* FEAT_CSV2_1p2, AArch64 only */
51
#define SCTLR_FI (1U << 21) /* up to v7, v8 RES0 */
52
#define SCTLR_IESB (1U << 21) /* v8.2-IESB, AArch64 only */
53
#define SCTLR_U (1U << 22) /* up to v6, RAO in v7 */
54
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_dit(const ARMISARegisters *id)
55
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, DIT) != 0;
56
}
57
58
+static inline bool isar_feature_aa64_scxtnum(const ARMISARegisters *id)
59
+{
60
+ int key = FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, CSV2);
61
+ if (key >= 2) {
62
+ return true; /* FEAT_CSV2_2 */
63
+ }
64
+ if (key == 1) {
65
+ key = FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, CSV2_FRAC);
66
+ return key >= 2; /* FEAT_CSV2_1p2 */
67
+ }
68
+ return false;
69
+}
70
+
71
static inline bool isar_feature_aa64_ssbs(const ARMISARegisters *id)
72
{
73
return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SSBS) != 0;
74
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
75
index XXXXXXX..XXXXXXX 100644
76
--- a/target/arm/cpu.c
77
+++ b/target/arm/cpu.c
78
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
79
*/
80
env->cp15.gcr_el1 = 0x1ffff;
81
}
82
+ /*
83
+ * Disable access to SCXTNUM_EL0 from CSV2_1p2.
84
+ * This is not yet exposed from the Linux kernel in any way.
85
+ */
86
+ env->cp15.sctlr_el[1] |= SCTLR_TSCXT;
87
#else
88
/* Reset into the highest available EL */
89
if (arm_feature(env, ARM_FEATURE_EL3)) {
90
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
91
index XXXXXXX..XXXXXXX 100644
92
--- a/target/arm/cpu64.c
93
+++ b/target/arm/cpu64.c
94
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
95
t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
96
t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); /* FEAT_SEL2 */
97
t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); /* FEAT_DIT */
98
- t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 1); /* FEAT_CSV2 */
99
+ t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 2); /* FEAT_CSV2_2 */
100
cpu->isar.id_aa64pfr0 = t;
101
102
t = cpu->isar.id_aa64pfr1;
103
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
104
* we do for EL2 with the virtualization=on property.
105
*/
106
t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3); /* FEAT_MTE3 */
107
+ t = FIELD_DP64(t, ID_AA64PFR1, CSV2_FRAC, 0); /* FEAT_CSV2_2 */
108
cpu->isar.id_aa64pfr1 = t;
109
110
t = cpu->isar.id_aa64mmfr0;
111
diff --git a/target/arm/helper.c b/target/arm/helper.c
112
index XXXXXXX..XXXXXXX 100644
113
--- a/target/arm/helper.c
114
+++ b/target/arm/helper.c
115
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
116
if (cpu_isar_feature(aa64_mte, cpu)) {
117
valid_mask |= SCR_ATA;
118
}
119
+ if (cpu_isar_feature(aa64_scxtnum, cpu)) {
120
+ valid_mask |= SCR_ENSCXT;
121
+ }
122
} else {
123
valid_mask &= ~(SCR_RW | SCR_ST);
124
if (cpu_isar_feature(aa32_ras, cpu)) {
125
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
126
if (cpu_isar_feature(aa64_mte, cpu)) {
127
valid_mask |= HCR_ATA | HCR_DCT | HCR_TID5;
128
}
129
+ if (cpu_isar_feature(aa64_scxtnum, cpu)) {
130
+ valid_mask |= HCR_ENSCXT;
131
+ }
132
}
133
134
/* Clear RES0 bits. */
135
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
136
{ K(3, 0, 5, 6, 0), K(3, 4, 5, 6, 0), K(3, 5, 5, 6, 0),
137
"TFSR_EL1", "TFSR_EL2", "TFSR_EL12", isar_feature_aa64_mte },
138
139
+ { K(3, 0, 13, 0, 7), K(3, 4, 13, 0, 7), K(3, 5, 13, 0, 7),
140
+ "SCXTNUM_EL1", "SCXTNUM_EL2", "SCXTNUM_EL12",
141
+ isar_feature_aa64_scxtnum },
142
+
143
/* TODO: ARMv8.2-SPE -- PMSCR_EL2 */
144
/* TODO: ARMv8.4-Trace -- TRFCR_EL2 */
145
};
146
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
147
},
148
};
149
150
-#endif
26
-#endif
151
+static CPAccessResult access_scxtnum(CPUARMState *env, const ARMCPRegInfo *ri,
27
-
152
+ bool isread)
28
struct CPUBreakpoint *cpu_breakpoint[16];
153
+{
29
struct CPUWatchpoint *cpu_watchpoint[16];
154
+ uint64_t hcr = arm_hcr_el2_eff(env);
30
155
+ int el = arm_current_el(env);
31
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
156
+
32
const struct arm_boot_info *boot_info;
157
+ if (el == 0 && !((hcr & HCR_E2H) && (hcr & HCR_TGE))) {
33
/* Store GICv3CPUState to access from this struct */
158
+ if (env->cp15.sctlr_el[1] & SCTLR_TSCXT) {
34
void *gicv3state;
159
+ if (hcr & HCR_TGE) {
35
+#if defined(CONFIG_USER_ONLY)
160
+ return CP_ACCESS_TRAP_EL2;
36
+ /* For usermode syscall translation. */
161
+ }
37
+ bool eabi;
162
+ return CP_ACCESS_TRAP;
38
+#endif /* CONFIG_USER_ONLY */
163
+ }
39
164
+ } else if (el < 2 && (env->cp15.sctlr_el[2] & SCTLR_TSCXT)) {
40
#ifdef TARGET_TAGGED_ADDRESSES
165
+ return CP_ACCESS_TRAP_EL2;
41
/* Linux syscall tagged address support */
166
+ }
167
+ if (el < 2 && arm_is_el2_enabled(env) && !(hcr & HCR_ENSCXT)) {
168
+ return CP_ACCESS_TRAP_EL2;
169
+ }
170
+ if (el < 3
171
+ && arm_feature(env, ARM_FEATURE_EL3)
172
+ && !(env->cp15.scr_el3 & SCR_ENSCXT)) {
173
+ return CP_ACCESS_TRAP_EL3;
174
+ }
175
+ return CP_ACCESS_OK;
176
+}
177
+
178
+static const ARMCPRegInfo scxtnum_reginfo[] = {
179
+ { .name = "SCXTNUM_EL0", .state = ARM_CP_STATE_AA64,
180
+ .opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 7,
181
+ .access = PL0_RW, .accessfn = access_scxtnum,
182
+ .fieldoffset = offsetof(CPUARMState, scxtnum_el[0]) },
183
+ { .name = "SCXTNUM_EL1", .state = ARM_CP_STATE_AA64,
184
+ .opc0 = 3, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 7,
185
+ .access = PL1_RW, .accessfn = access_scxtnum,
186
+ .fieldoffset = offsetof(CPUARMState, scxtnum_el[1]) },
187
+ { .name = "SCXTNUM_EL2", .state = ARM_CP_STATE_AA64,
188
+ .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 7,
189
+ .access = PL2_RW, .accessfn = access_scxtnum,
190
+ .fieldoffset = offsetof(CPUARMState, scxtnum_el[2]) },
191
+ { .name = "SCXTNUM_EL3", .state = ARM_CP_STATE_AA64,
192
+ .opc0 = 3, .opc1 = 6, .crn = 13, .crm = 0, .opc2 = 7,
193
+ .access = PL3_RW,
194
+ .fieldoffset = offsetof(CPUARMState, scxtnum_el[3]) },
195
+};
196
+#endif /* TARGET_AARCH64 */
197
198
static CPAccessResult access_predinv(CPUARMState *env, const ARMCPRegInfo *ri,
199
bool isread)
200
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
201
define_arm_cp_regs(cpu, mte_tco_ro_reginfo);
202
define_arm_cp_regs(cpu, mte_el0_cacheop_reginfo);
203
}
204
+
205
+ if (cpu_isar_feature(aa64_scxtnum, cpu)) {
206
+ define_arm_cp_regs(cpu, scxtnum_reginfo);
207
+ }
208
#endif
209
210
if (cpu_isar_feature(any_predinv, cpu)) {
211
--
42
--
212
2.25.1
43
2.34.1
44
45
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-8-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

From: Gavin Shan <gshan@redhat.com>

The CPU topology isn't enabled on the arm/virt machine yet, but we're
going to enable it in the next patch. Once that happens, "thread-id=1"
becomes invalid because the CPU core is preferred on the arm/virt
machine: the two CPUs then have 0/1 as their core IDs, while their
thread IDs are both 0. That triggers a test failure, as the following
message indicates:

  [14/21 qemu:qtest+qtest-aarch64 / qtest-aarch64/numa-test ERROR
  1.48s killed by signal 6 SIGABRT
  >>> G_TEST_DBUS_DAEMON=/home/gavin/sandbox/qemu.main/tests/dbus-vmstate-daemon.sh \
  QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon \
  QTEST_QEMU_BINARY=./qemu-system-aarch64 \
  QTEST_QEMU_IMG=./qemu-img MALLOC_PERTURB_=83 \
  /home/gavin/sandbox/qemu.main/build/tests/qtest/numa-test --tap -k
  ――――――――――――――――――――――――――――――――――――――――――――――
  stderr:
  qemu-system-aarch64: -numa cpu,node-id=0,thread-id=1: no match found

This fixes the issue by providing a comprehensive SMP configuration
in aarch64_numa_cpu(). The SMP configuration isn't used until the
CPU topology is enabled in the next patch.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Message-id: 20220503140304.855514-3-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/numa-test.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/qtest/numa-test.c b/tests/qtest/numa-test.c
11
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
34
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
35
--- a/tests/qtest/numa-test.c
13
--- a/target/arm/cpu.h
36
+++ b/tests/qtest/numa-test.c
14
+++ b/target/arm/cpu.h
37
@@ -XXX,XX +XXX,XX @@ static void aarch64_numa_cpu(const void *data)
15
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
38
QTestState *qts;
16
39
g_autofree char *cli = NULL;
17
void *nvic;
40
18
const struct arm_boot_info *boot_info;
41
- cli = make_cli(data, "-machine smp.cpus=2 "
19
+#if !defined(CONFIG_USER_ONLY)
42
+ cli = make_cli(data, "-machine "
20
/* Store GICv3CPUState to access from this struct */
43
+ "smp.cpus=2,smp.sockets=1,smp.clusters=1,smp.cores=1,smp.threads=2 "
21
void *gicv3state;
44
"-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
22
-#if defined(CONFIG_USER_ONLY)
45
"-numa cpu,node-id=1,thread-id=0 "
23
+#else /* CONFIG_USER_ONLY */
46
"-numa cpu,node-id=0,thread-id=1");
24
/* For usermode syscall translation. */
25
bool eabi;
26
#endif /* CONFIG_USER_ONLY */
47
--
27
--
48
2.25.1
28
2.34.1
49
29
50
30
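As a rough illustration of why the explicit topology matters (this is not part of the patch, just a sketch of the equivalent -smp setting): with

  -smp 2,sockets=1,clusters=1,cores=1,threads=2

the two vCPUs become threads 0 and 1 of a single core, so a selector such as "-numa cpu,node-id=0,thread-id=1" has something to match. With the previous bare "-smp 2", once the arm/virt topology change lands each vCPU is modelled as its own core and only thread-id 0 exists, which is exactly the "no match found" failure quoted above.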
From: Gavin Shan <gshan@redhat.com>

This adds cluster-id in CPU instance properties, which will be used
by arm/virt machine. Besides, the cluster-id is also verified or
dumped in various spots:

* hw/core/machine.c::machine_set_cpu_numa_node() to associate
CPU with its NUMA node.

* hw/core/machine.c::machine_numa_finish_cpu_init() to record
CPU slots with no NUMA mapping set.

* hw/core/machine-hmp-cmds.c::hmp_hotpluggable_cpus() to dump
cluster-id.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20220503140304.855514-2-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
qapi/machine.json | 6 ++++--
hw/core/machine-hmp-cmds.c | 4 ++++
hw/core/machine.c | 16 ++++++++++++++++
3 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/qapi/machine.json b/qapi/machine.json
index XXXXXXX..XXXXXXX 100644
--- a/qapi/machine.json
+++ b/qapi/machine.json
@@ -XXX,XX +XXX,XX @@
# @node-id: NUMA node ID the CPU belongs to
# @socket-id: socket number within node/board the CPU belongs to
# @die-id: die number within socket the CPU belongs to (since 4.1)
-# @core-id: core number within die the CPU belongs to
+# @cluster-id: cluster number within die the CPU belongs to (since 7.1)
+# @core-id: core number within cluster the CPU belongs to
# @thread-id: thread number within core the CPU belongs to
#
-# Note: currently there are 5 properties that could be present
+# Note: currently there are 6 properties that could be present
# but management should be prepared to pass through other
# properties with device_add command to allow for future
# interface extension. This also requires the filed names to be kept in
@@ -XXX,XX +XXX,XX @@
'data': { '*node-id': 'int',
'*socket-id': 'int',
'*die-id': 'int',
+ '*cluster-id': 'int',
'*core-id': 'int',
'*thread-id': 'int'
}
diff --git a/hw/core/machine-hmp-cmds.c b/hw/core/machine-hmp-cmds.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/core/machine-hmp-cmds.c
+++ b/hw/core/machine-hmp-cmds.c
@@ -XXX,XX +XXX,XX @@ void hmp_hotpluggable_cpus(Monitor *mon, const QDict *qdict)
if (c->has_die_id) {
monitor_printf(mon, " die-id: \"%" PRIu64 "\"\n", c->die_id);
}
+ if (c->has_cluster_id) {
+ monitor_printf(mon, " cluster-id: \"%" PRIu64 "\"\n",
+ c->cluster_id);
+ }
if (c->has_core_id) {
monitor_printf(mon, " core-id: \"%" PRIu64 "\"\n", c->core_id);
}
diff --git a/hw/core/machine.c b/hw/core/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -XXX,XX +XXX,XX @@ void machine_set_cpu_numa_node(MachineState *machine,
return;
}

+ if (props->has_cluster_id && !slot->props.has_cluster_id) {
+ error_setg(errp, "cluster-id is not supported");
+ return;
+ }
+
if (props->has_socket_id && !slot->props.has_socket_id) {
error_setg(errp, "socket-id is not supported");
return;
@@ -XXX,XX +XXX,XX @@ void machine_set_cpu_numa_node(MachineState *machine,
continue;
}

+ if (props->has_cluster_id &&
+ props->cluster_id != slot->props.cluster_id) {
+ continue;
+ }
+
if (props->has_die_id && props->die_id != slot->props.die_id) {
continue;
}
@@ -XXX,XX +XXX,XX @@ static char *cpu_slot_to_string(const CPUArchId *cpu)
}
g_string_append_printf(s, "die-id: %"PRId64, cpu->props.die_id);
}
+ if (cpu->props.has_cluster_id) {
+ if (s->len) {
+ g_string_append_printf(s, ", ");
+ }
+ g_string_append_printf(s, "cluster-id: %"PRId64, cpu->props.cluster_id);
+ }
if (cpu->props.has_core_id) {
if (s->len) {
g_string_append_printf(s, ", ");
--
2.25.1

From: Philippe Mathieu-Daudé <philmd@linaro.org>

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-9-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
} sau;

void *nvic;
- const struct arm_boot_info *boot_info;
#if !defined(CONFIG_USER_ONLY)
+ const struct arm_boot_info *boot_info;
/* Store GICv3CPUState to access from this struct */
void *gicv3state;
#else /* CONFIG_USER_ONLY */
--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Add only the system registers required to implement zero error
records. This means that all values for ERRSELR are out of range,
which means that it and all of the indexed error record registers
need not be implemented.

Add the EL2 registers required for injecting virtual SError.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 5 +++
target/arm/helper.c | 84 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 89 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
uint64_t tfsr_el[4]; /* tfsre0_el1 is index 0. */
uint64_t gcr_el1;
uint64_t rgsr_el1;
+
+ /* Minimal RAS registers */
+ uint64_t disr_el1;
+ uint64_t vdisr_el2;
+ uint64_t vsesr_el2;
} cp15;

struct {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
.access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 },
};

+/*
+ * Check for traps to RAS registers, which are controlled
+ * by HCR_EL2.TERR and SCR_EL3.TERR.
+ */
+static CPAccessResult access_terr(CPUARMState *env, const ARMCPRegInfo *ri,
+ bool isread)
+{
+ int el = arm_current_el(env);
+
+ if (el < 2 && (arm_hcr_el2_eff(env) & HCR_TERR)) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ if (el < 3 && (env->cp15.scr_el3 & SCR_TERR)) {
+ return CP_ACCESS_TRAP_EL3;
+ }
+ return CP_ACCESS_OK;
+}
+
+static uint64_t disr_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+ int el = arm_current_el(env);
+
+ if (el < 2 && (arm_hcr_el2_eff(env) & HCR_AMO)) {
+ return env->cp15.vdisr_el2;
+ }
+ if (el < 3 && (env->cp15.scr_el3 & SCR_EA)) {
+ return 0; /* RAZ/WI */
+ }
+ return env->cp15.disr_el1;
+}
+
+static void disr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t val)
+{
+ int el = arm_current_el(env);
+
+ if (el < 2 && (arm_hcr_el2_eff(env) & HCR_AMO)) {
+ env->cp15.vdisr_el2 = val;
+ return;
+ }
+ if (el < 3 && (env->cp15.scr_el3 & SCR_EA)) {
+ return; /* RAZ/WI */
+ }
+ env->cp15.disr_el1 = val;
+}
+
+/*
+ * Minimal RAS implementation with no Error Records.
+ * Which means that all of the Error Record registers:
+ * ERXADDR_EL1
+ * ERXCTLR_EL1
+ * ERXFR_EL1
+ * ERXMISC0_EL1
+ * ERXMISC1_EL1
+ * ERXMISC2_EL1
+ * ERXMISC3_EL1
+ * ERXPFGCDN_EL1 (RASv1p1)
+ * ERXPFGCTL_EL1 (RASv1p1)
+ * ERXPFGF_EL1 (RASv1p1)
+ * ERXSTATUS_EL1
+ * and
+ * ERRSELR_EL1
+ * may generate UNDEFINED, which is the effect we get by not
+ * listing them at all.
+ */
+static const ARMCPRegInfo minimal_ras_reginfo[] = {
+ { .name = "DISR_EL1", .state = ARM_CP_STATE_BOTH,
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 1, .opc2 = 1,
+ .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.disr_el1),
+ .readfn = disr_read, .writefn = disr_write, .raw_writefn = raw_write },
+ { .name = "ERRIDR_EL1", .state = ARM_CP_STATE_BOTH,
+ .opc0 = 3, .opc1 = 0, .crn = 5, .crm = 3, .opc2 = 0,
+ .access = PL1_R, .accessfn = access_terr,
+ .type = ARM_CP_CONST, .resetvalue = 0 },
+ { .name = "VDISR_EL2", .state = ARM_CP_STATE_BOTH,
+ .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 1, .opc2 = 1,
+ .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.vdisr_el2) },
+ { .name = "VSESR_EL2", .state = ARM_CP_STATE_BOTH,
+ .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 2, .opc2 = 3,
+ .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.vsesr_el2) },
+};
+
/* Return the exception level to which exceptions should be taken
* via SVEAccessTrap. If an exception should be routed through
* AArch64.AdvSIMDFPAccessTrap, return 0; fp_exception_el should
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
if (cpu_isar_feature(aa64_ssbs, cpu)) {
define_one_arm_cp_reg(cpu, &ssbs_reginfo);
}
+ if (cpu_isar_feature(any_ras, cpu)) {
+ define_arm_cp_regs(cpu, minimal_ras_reginfo);
+ }

if (cpu_isar_feature(aa64_vh, cpu) ||
cpu_isar_feature(aa64_debugv8p2, cpu)) {
--
2.25.1

From: Philippe Mathieu-Daudé <philmd@linaro.org>

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-10-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
uint32_t ctrl;
} sau;

- void *nvic;
#if !defined(CONFIG_USER_ONLY)
+ void *nvic;
const struct arm_boot_info *boot_info;
/* Store GICv3CPUState to access from this struct */
void *gicv3state;
--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Virtual SError exceptions are raised by setting HCR_EL2.VSE,
and are routed to EL1 just like other virtual exceptions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 2 ++
target/arm/internals.h | 8 ++++++++
target/arm/syndrome.h | 5 +++++
target/arm/cpu.c | 38 +++++++++++++++++++++++++++++++++++++-
target/arm/helper.c | 40 +++++++++++++++++++++++++++++++++++++++-
5 files changed, 91 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@
#define EXCP_LSERR 21 /* v8M LSERR SecureFault */
#define EXCP_UNALIGNED 22 /* v7M UNALIGNED UsageFault */
#define EXCP_DIVBYZERO 23 /* v7M DIVBYZERO UsageFault */
+#define EXCP_VSERR 24
/* NB: add new EXCP_ defines to the array in arm_log_exception() too */

#define ARMV7M_EXCP_RESET 1
@@ -XXX,XX +XXX,XX @@ enum {
#define CPU_INTERRUPT_FIQ CPU_INTERRUPT_TGT_EXT_1
#define CPU_INTERRUPT_VIRQ CPU_INTERRUPT_TGT_EXT_2
#define CPU_INTERRUPT_VFIQ CPU_INTERRUPT_TGT_EXT_3
+#define CPU_INTERRUPT_VSERR CPU_INTERRUPT_TGT_INT_0

/* The usual mapping for an AArch64 system register to its AArch32
* counterpart is for the 32 bit world to have access to the lower
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_virq(ARMCPU *cpu);
*/
void arm_cpu_update_vfiq(ARMCPU *cpu);

+/**
+ * arm_cpu_update_vserr: Update CPU_INTERRUPT_VSERR bit
+ *
+ * Update the CPU_INTERRUPT_VSERR bit in cs->interrupt_request,
+ * following a change to the HCR_EL2.VSE bit.
+ */
+void arm_cpu_update_vserr(ARMCPU *cpu);
+
/**
* arm_mmu_idx_el:
* @env: The cpu environment
diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/syndrome.h
+++ b/target/arm/syndrome.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_pcalignment(void)
return (EC_PCALIGNMENT << ARM_EL_EC_SHIFT) | ARM_EL_IL;
}

+static inline uint32_t syn_serror(uint32_t extra)
+{
+ return (EC_SERROR << ARM_EL_EC_SHIFT) | ARM_EL_IL | extra;
+}
+
#endif /* TARGET_ARM_SYNDROME_H */
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static bool arm_cpu_has_work(CPUState *cs)
return (cpu->power_state != PSCI_OFF)
&& cs->interrupt_request &
(CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
- | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ
+ | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VSERR
| CPU_INTERRUPT_EXITTB);
}

@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
return false;
}
return !(env->daif & PSTATE_I);
+ case EXCP_VSERR:
+ if (!(hcr_el2 & HCR_AMO) || (hcr_el2 & HCR_TGE)) {
+ /* VIRQs are only taken when hypervized. */
+ return false;
+ }
+ return !(env->daif & PSTATE_A);
default:
g_assert_not_reached();
}
@@ -XXX,XX +XXX,XX @@ static bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
goto found;
}
}
+ if (interrupt_request & CPU_INTERRUPT_VSERR) {
+ excp_idx = EXCP_VSERR;
+ target_el = 1;
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
+ cur_el, secure, hcr_el2)) {
+ /* Taking a virtual abort clears HCR_EL2.VSE */
+ env->cp15.hcr_el2 &= ~HCR_VSE;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERR);
+ goto found;
+ }
+ }
return false;

found:
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_vfiq(ARMCPU *cpu)
}
}

+void arm_cpu_update_vserr(ARMCPU *cpu)
+{
+ /*
+ * Update the interrupt level for VSERR, which is the HCR_EL2.VSE bit.
+ */
+ CPUARMState *env = &cpu->env;
+ CPUState *cs = CPU(cpu);
+
+ bool new_state = env->cp15.hcr_el2 & HCR_VSE;
+
+ if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VSERR) != 0)) {
+ if (new_state) {
+ cpu_interrupt(cs, CPU_INTERRUPT_VSERR);
+ } else {
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERR);
+ }
+ }
+}
+
#ifndef CONFIG_USER_ONLY
static void arm_cpu_set_irq(void *opaque, int irq, int level)
{
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
}
}

- /* External aborts are not possible in QEMU so A bit is always clear */
+ if (hcr_el2 & HCR_AMO) {
+ if (cs->interrupt_request & CPU_INTERRUPT_VSERR) {
+ ret |= CPSR_A;
+ }
+ }
+
return ret;
}

@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
g_assert(qemu_mutex_iothread_locked());
arm_cpu_update_virq(cpu);
arm_cpu_update_vfiq(cpu);
+ arm_cpu_update_vserr(cpu);
}

static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
@@ -XXX,XX +XXX,XX @@ void arm_log_exception(CPUState *cs)
[EXCP_LSERR] = "v8M LSERR UsageFault",
[EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault",
[EXCP_DIVBYZERO] = "v7M DIVBYZERO UsageFault",
+ [EXCP_VSERR] = "Virtual SERR",
};

if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
mask = CPSR_A | CPSR_I | CPSR_F;
offset = 4;
break;
+ case EXCP_VSERR:
+ {
+ /*
+ * Note that this is reported as a data abort, but the DFAR
+ * has an UNKNOWN value. Construct the SError syndrome from
+ * AET and ExT fields.
+ */
+ ARMMMUFaultInfo fi = { .type = ARMFault_AsyncExternal, };
+
+ if (extended_addresses_enabled(env)) {
+ env->exception.fsr = arm_fi_to_lfsc(&fi);
+ } else {
+ env->exception.fsr = arm_fi_to_sfsc(&fi);
+ }
+ env->exception.fsr |= env->cp15.vsesr_el2 & 0xd000;
+ A32_BANKED_CURRENT_REG_SET(env, dfsr, env->exception.fsr);
+ qemu_log_mask(CPU_LOG_INT, "...with IFSR 0x%x\n",
+ env->exception.fsr);
+
+ new_mode = ARM_CPU_MODE_ABT;
+ addr = 0x10;
+ mask = CPSR_A | CPSR_I;
+ offset = 8;
+ }
+ break;
case EXCP_SMC:
new_mode = ARM_CPU_MODE_MON;
addr = 0x08;
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
case EXCP_VFIQ:
addr += 0x100;
break;
+ case EXCP_VSERR:
+ addr += 0x180;
+ /* Construct the SError syndrome from IDS and ISS fields. */
+ env->exception.syndrome = syn_serror(env->cp15.vsesr_el2 & 0x1ffffff);
+ env->cp15.esr_el[new_el] = env->exception.syndrome;
+ break;
default:
cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
}
--
2.25.1

From: Philippe Mathieu-Daudé <philmd@linaro.org>

There is no point in using a void pointer to access the NVIC.
Use the real type to avoid casting it while debugging.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-11-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 46 ++++++++++++++++++++++---------------------
hw/intc/armv7m_nvic.c | 38 ++++++++++++-----------------------
target/arm/cpu.c | 1 +
target/arm/m_helper.c | 2 +-
4 files changed, 39 insertions(+), 48 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMTBFlags {

typedef struct ARMMMUFaultInfo ARMMMUFaultInfo;

+typedef struct NVICState NVICState;
+
typedef struct CPUArchState {
/* Regs for current mode. */
uint32_t regs[16];
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
} sau;

#if !defined(CONFIG_USER_ONLY)
- void *nvic;
+ NVICState *nvic;
const struct arm_boot_info *boot_info;
/* Store GICv3CPUState to access from this struct */
void *gicv3state;
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,

/* Interface between CPU and Interrupt controller. */
#ifndef CONFIG_USER_ONLY
-bool armv7m_nvic_can_take_pending_exception(void *opaque);
+bool armv7m_nvic_can_take_pending_exception(NVICState *s);
#else
-static inline bool armv7m_nvic_can_take_pending_exception(void *opaque)
+static inline bool armv7m_nvic_can_take_pending_exception(NVICState *s)
{
return true;
}
#endif
/**
* armv7m_nvic_set_pending: mark the specified exception as pending
- * @opaque: the NVIC
+ * @s: the NVIC
* @irq: the exception number to mark pending
* @secure: false for non-banked exceptions or for the nonsecure
* version of a banked exception, true for the secure version of a banked
@@ -XXX,XX +XXX,XX @@ static inline bool armv7m_nvic_can_take_pending_exception(void *opaque)
* if @secure is true and @irq does not specify one of the fixed set
* of architecturally banked exceptions.
*/
-void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
+void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure);
/**
* armv7m_nvic_set_pending_derived: mark this derived exception as pending
- * @opaque: the NVIC
+ * @s: the NVIC
* @irq: the exception number to mark pending
* @secure: false for non-banked exceptions or for the nonsecure
* version of a banked exception, true for the secure version of a banked
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
* exceptions (exceptions generated in the course of trying to take
* a different exception).
*/
-void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure);
+void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure);
/**
* armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
- * @opaque: the NVIC
+ * @s: the NVIC
* @irq: the exception number to mark pending
* @secure: false for non-banked exceptions or for the nonsecure
* version of a banked exception, true for the secure version of a banked
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure);
* Similar to armv7m_nvic_set_pending(), but specifically for exceptions
* generated in the course of lazy stacking of FP registers.
*/
-void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure);
+void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure);
/**
* armv7m_nvic_get_pending_irq_info: return highest priority pending
* exception, and whether it targets Secure state
- * @opaque: the NVIC
+ * @s: the NVIC
* @pirq: set to pending exception number
* @ptargets_secure: set to whether pending exception targets Secure
*
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure);
* to true if the current highest priority pending exception should
* be taken to Secure state, false for NS.
*/
-void armv7m_nvic_get_pending_irq_info(void *opaque, int *pirq,
+void armv7m_nvic_get_pending_irq_info(NVICState *s, int *pirq,
bool *ptargets_secure);
/**
* armv7m_nvic_acknowledge_irq: make highest priority pending exception active
- * @opaque: the NVIC
+ * @s: the NVIC
*
* Move the current highest priority pending exception from the pending
* state to the active state, and update v7m.exception to indicate that
* it is the exception currently being handled.
*/
-void armv7m_nvic_acknowledge_irq(void *opaque);
+void armv7m_nvic_acknowledge_irq(NVICState *s);
/**
* armv7m_nvic_complete_irq: complete specified interrupt or exception
- * @opaque: the NVIC
+ * @s: the NVIC
* @irq: the exception number to complete
* @secure: true if this exception was secure
*
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque);
* 0 if there is still an irq active after this one was completed
* (Ignoring -1, this is the same as the RETTOBASE value before completion.)
*/
-int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure);
+int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure);
/**
* armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
- * @opaque: the NVIC
+ * @s: the NVIC
* @irq: the exception number to mark pending
* @secure: false for non-banked exceptions or for the nonsecure
* version of a banked exception, true for the secure version of a banked
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure);
* interrupt the current execution priority. This controls whether the
* RDY bit for it in the FPCCR is set.
*/
-bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure);
+bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure);
/**
* armv7m_nvic_raw_execution_priority: return the raw execution priority
- * @opaque: the NVIC
+ * @s: the NVIC
*
* Returns: the raw execution priority as defined by the v8M architecture.
* This is the execution priority minus the effects of AIRCR.PRIS,
* and minus any PRIMASK/FAULTMASK/BASEPRI priority boosting.
* (v8M ARM ARM I_PKLD.)
*/
-int armv7m_nvic_raw_execution_priority(void *opaque);
+int armv7m_nvic_raw_execution_priority(NVICState *s);
/**
* armv7m_nvic_neg_prio_requested: return true if the requested execution
* priority is negative for the specified security state.
- * @opaque: the NVIC
+ * @s: the NVIC
* @secure: the security state to test
* This corresponds to the pseudocode IsReqExecPriNeg().
*/
#ifndef CONFIG_USER_ONLY
-bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure);
+bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure);
#else
-static inline bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
+static inline bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
{
return false;
}
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static inline int nvic_exec_prio(NVICState *s)
return MIN(running, s->exception_prio);
}

-bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
+bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
{
/* Return true if the requested execution priority is negative
* for the specified security state, ie that security state
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
* mean we don't allow FAULTMASK_NS to actually make the execution
* priority negative). Compare pseudocode IsReqExcPriNeg().
*/
- NVICState *s = opaque;
-
if (s->cpu->env.v7m.faultmask[secure]) {
return true;
}
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
return false;
}

-bool armv7m_nvic_can_take_pending_exception(void *opaque)
+bool armv7m_nvic_can_take_pending_exception(NVICState *s)
{
- NVICState *s = opaque;
-
return nvic_exec_prio(s) > nvic_pending_prio(s);
}

-int armv7m_nvic_raw_execution_priority(void *opaque)
+int armv7m_nvic_raw_execution_priority(NVICState *s)
{
- NVICState *s = opaque;
-
return s->exception_prio;
}

@@ -XXX,XX +XXX,XX @@ static void nvic_irq_update(NVICState *s)
* if @secure is true and @irq does not specify one of the fixed set
* of architecturally banked exceptions.
*/
-static void armv7m_nvic_clear_pending(void *opaque, int irq, bool secure)
+static void armv7m_nvic_clear_pending(NVICState *s, int irq, bool secure)
{
- NVICState *s = (NVICState *)opaque;
VecInfo *vec;

assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
@@ -XXX,XX +XXX,XX @@ static void do_armv7m_nvic_set_pending(void *opaque, int irq, bool secure,
}
}

-void armv7m_nvic_set_pending(void *opaque, int irq, bool secure)
+void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure)
{
- do_armv7m_nvic_set_pending(opaque, irq, secure, false);
+ do_armv7m_nvic_set_pending(s, irq, secure, false);
}

-void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure)
+void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure)
{
- do_armv7m_nvic_set_pending(opaque, irq, secure, true);
+ do_armv7m_nvic_set_pending(s, irq, secure, true);
}

-void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
+void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure)
{
/*
* Pend an exception during lazy FP stacking. This differs
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
* whether we should escalate depends on the saved context
* in the FPCCR register, not on the current state of the CPU/NVIC.
*/
- NVICState *s = (NVICState *)opaque;
bool banked = exc_is_banked(irq);
VecInfo *vec;
bool targets_secure;
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
}

/* Make pending IRQ active. */
-void armv7m_nvic_acknowledge_irq(void *opaque)
+void armv7m_nvic_acknowledge_irq(NVICState *s)
{
- NVICState *s = (NVICState *)opaque;
CPUARMState *env = &s->cpu->env;
const int pending = s->vectpending;
const int running = nvic_exec_prio(s);
@@ -XXX,XX +XXX,XX @@ static bool vectpending_targets_secure(NVICState *s)
exc_targets_secure(s, s->vectpending);
}

-void armv7m_nvic_get_pending_irq_info(void *opaque,
+void armv7m_nvic_get_pending_irq_info(NVICState *s,
int *pirq, bool *ptargets_secure)
{
- NVICState *s = (NVICState *)opaque;
const int pending = s->vectpending;
bool targets_secure;

@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_get_pending_irq_info(void *opaque,
*pirq = pending;
}

-int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
+int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure)
{
- NVICState *s = (NVICState *)opaque;
VecInfo *vec = NULL;
int ret = 0;

@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
return ret;
}

-bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
+bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure)
{
/*
* Return whether an exception is "ready", i.e. it is enabled and is
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
* for non-banked exceptions secure is always false; for banked exceptions
* it indicates which of the exceptions is required.
*/
- NVICState *s = (NVICState *)opaque;
bool banked = exc_is_banked(irq);
VecInfo *vec;
int running = nvic_exec_prio(s);
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@
#if !defined(CONFIG_USER_ONLY)
#include "hw/loader.h"
#include "hw/boards.h"
+#include "hw/intc/armv7m_nvic.h"
#endif
#include "sysemu/tcg.h"
#include "sysemu/qtest.h"
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
* that we will need later in order to do lazy FP reg stacking.
*/
bool is_secure = env->v7m.secure;
- void *nvic = env->nvic;
+ NVICState *nvic = env->nvic;
/*
* Some bits are unbanked and live always in fpccr[M_REG_S]; some bits
* are banked and we want to update the bit in the bank for the
--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Previously we were defining some of these in user-only mode,
but none of them are accessible from user-only, therefore
define them only in system mode.

This will shortly be used from cpu_tcg.c also.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/internals.h | 6 ++++
target/arm/cpu64.c | 64 +++---------------------------------------
target/arm/cpu_tcg.c | 59 ++++++++++++++++++++++++++++++++++++++
3 files changed, 69 insertions(+), 60 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ int aarch64_fpu_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg);
int aarch64_fpu_gdb_set_reg(CPUARMState *env, uint8_t *buf, int reg);
#endif

+#ifdef CONFIG_USER_ONLY
+static inline void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu) { }
+#else
+void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu);
+#endif
+
#endif
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@
#include "hvf_arm.h"
#include "qapi/visitor.h"
#include "hw/qdev-properties.h"
-#include "cpregs.h"
+#include "internals.h"


-#ifndef CONFIG_USER_ONLY
-static uint64_t a57_a53_l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
-{
- ARMCPU *cpu = env_archcpu(env);
-
- /* Number of cores is in [25:24]; otherwise we RAZ */
- return (cpu->core_count - 1) << 24;
-}
-#endif
-
-static const ARMCPRegInfo cortex_a72_a57_a53_cp_reginfo[] = {
-#ifndef CONFIG_USER_ONLY
- { .name = "L2CTLR_EL1", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 2,
- .access = PL1_RW, .readfn = a57_a53_l2ctlr_read,
- .writefn = arm_cp_write_ignore },
- { .name = "L2CTLR",
- .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 2,
- .access = PL1_RW, .readfn = a57_a53_l2ctlr_read,
- .writefn = arm_cp_write_ignore },
-#endif
- { .name = "L2ECTLR_EL1", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 3,
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
- { .name = "L2ECTLR",
- .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 3,
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
- { .name = "L2ACTLR", .state = ARM_CP_STATE_BOTH,
- .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 0, .opc2 = 0,
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
- { .name = "CPUACTLR_EL1", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 0,
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
- { .name = "CPUACTLR",
- .cp = 15, .opc1 = 0, .crm = 15,
- .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
- { .name = "CPUECTLR_EL1", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 1,
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
- { .name = "CPUECTLR",
- .cp = 15, .opc1 = 1, .crm = 15,
- .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
- { .name = "CPUMERRSR_EL1", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 2,
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
- { .name = "CPUMERRSR",
- .cp = 15, .opc1 = 2, .crm = 15,
- .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
- { .name = "L2MERRSR_EL1", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 3,
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
- { .name = "L2MERRSR",
- .cp = 15, .opc1 = 3, .crm = 15,
- .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-};
-
static void aarch64_a57_initfn(Object *obj)
{
ARMCPU *cpu = ARM_CPU(obj);
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
cpu->gic_num_lrs = 4;
cpu->gic_vpribits = 5;
cpu->gic_vprebits = 5;
- define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+ define_cortex_a72_a57_a53_cp_reginfo(cpu);
}

static void aarch64_a53_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
cpu->gic_num_lrs = 4;
cpu->gic_vpribits = 5;
cpu->gic_vprebits = 5;
- define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+ define_cortex_a72_a57_a53_cp_reginfo(cpu);
}

static void aarch64_a72_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
cpu->gic_num_lrs = 4;
cpu->gic_vpribits = 5;
cpu->gic_vprebits = 5;
- define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+ define_cortex_a72_a57_a53_cp_reginfo(cpu);
}

void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@
#endif
#include "cpregs.h"

+#ifndef CONFIG_USER_ONLY
+static uint64_t l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+ ARMCPU *cpu = env_archcpu(env);
+
+ /* Number of cores is in [25:24]; otherwise we RAZ */
+ return (cpu->core_count - 1) << 24;
+}
+
+static const ARMCPRegInfo cortex_a72_a57_a53_cp_reginfo[] = {
+ { .name = "L2CTLR_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 2,
+ .access = PL1_RW, .readfn = l2ctlr_read,
+ .writefn = arm_cp_write_ignore },
+ { .name = "L2CTLR",
+ .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 2,
+ .access = PL1_RW, .readfn = l2ctlr_read,
+ .writefn = arm_cp_write_ignore },
+ { .name = "L2ECTLR_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 3,
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+ { .name = "L2ECTLR",
+ .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 3,
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+ { .name = "L2ACTLR", .state = ARM_CP_STATE_BOTH,
+ .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 0, .opc2 = 0,
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+ { .name = "CPUACTLR_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 0,
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+ { .name = "CPUACTLR",
+ .cp = 15, .opc1 = 0, .crm = 15,
+ .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+ { .name = "CPUECTLR_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 1,
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+ { .name = "CPUECTLR",
+ .cp = 15, .opc1 = 1, .crm = 15,
+ .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+ { .name = "CPUMERRSR_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 2,
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+ { .name = "CPUMERRSR",
+ .cp = 15, .opc1 = 2, .crm = 15,
+ .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+ { .name = "L2MERRSR_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 3,
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+ { .name = "L2MERRSR",
+ .cp = 15, .opc1 = 3, .crm = 15,
+ .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+};
+
+void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu)
+{
+ define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+}
+#endif /* !CONFIG_USER_ONLY */
+
/* CPU models. These are not needed for the AArch64 linux-user build. */
#if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64)

--
2.25.1

From: Philippe Mathieu-Daudé <philmd@linaro.org>

While dozens of files include "cpu.h", only 3 files require
these NVIC helper declarations.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-12-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/intc/armv7m_nvic.h | 123 ++++++++++++++++++++++++++++++++++
target/arm/cpu.h | 123 ----------------------------------
target/arm/cpu.c | 4 +-
target/arm/cpu_tcg.c | 3 +
target/arm/m_helper.c | 3 +
5 files changed, 132 insertions(+), 124 deletions(-)

diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/armv7m_nvic.h
+++ b/include/hw/intc/armv7m_nvic.h
@@ -XXX,XX +XXX,XX @@ struct NVICState {
qemu_irq sysresetreq;
};

+/* Interface between CPU and Interrupt controller. */
+/**
+ * armv7m_nvic_set_pending: mark the specified exception as pending
+ * @s: the NVIC
+ * @irq: the exception number to mark pending
+ * @secure: false for non-banked exceptions or for the nonsecure
+ * version of a banked exception, true for the secure version of a banked
+ * exception.
+ *
+ * Marks the specified exception as pending. Note that we will assert()
+ * if @secure is true and @irq does not specify one of the fixed set
+ * of architecturally banked exceptions.
+ */
+void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure);
+/**
+ * armv7m_nvic_set_pending_derived: mark this derived exception as pending
+ * @s: the NVIC
+ * @irq: the exception number to mark pending
+ * @secure: false for non-banked exceptions or for the nonsecure
+ * version of a banked exception, true for the secure version of a banked
+ * exception.
+ *
+ * Similar to armv7m_nvic_set_pending(), but specifically for derived
+ * exceptions (exceptions generated in the course of trying to take
+ * a different exception).
+ */
+void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure);
+/**
+ * armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
+ * @s: the NVIC
+ * @irq: the exception number to mark pending
+ * @secure: false for non-banked exceptions or for the nonsecure
+ * version of a banked exception, true for the secure version of a banked
+ * exception.
+ *
+ * Similar to armv7m_nvic_set_pending(), but specifically for exceptions
+ * generated in the course of lazy stacking of FP registers.
+ */
+void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure);
+/**
+ * armv7m_nvic_get_pending_irq_info: return highest priority pending
+ * exception, and whether it targets Secure state
+ * @s: the NVIC
+ * @pirq: set to pending exception number
+ * @ptargets_secure: set to whether pending exception targets Secure
+ *
+ * This function writes the number of the highest priority pending
+ * exception (the one which would be made active by
+ * armv7m_nvic_acknowledge_irq()) to @pirq, and sets @ptargets_secure
+ * to true if the current highest priority pending exception should
+ * be taken to Secure state, false for NS.
+ */
+void armv7m_nvic_get_pending_irq_info(NVICState *s, int *pirq,
+ bool *ptargets_secure);
+/**
+ * armv7m_nvic_acknowledge_irq: make highest priority pending exception active
+ * @s: the NVIC
+ *
+ * Move the current highest priority pending exception from the pending
+ * state to the active state, and update v7m.exception to indicate that
+ * it is the exception currently being handled.
+ */
+void armv7m_nvic_acknowledge_irq(NVICState *s);
+/**
+ * armv7m_nvic_complete_irq: complete specified interrupt or exception
+ * @s: the NVIC
+ * @irq: the exception number to complete
+ * @secure: true if this exception was secure
+ *
+ * Returns: -1 if the irq was not active
+ * 1 if completing this irq brought us back to base (no active irqs)
+ * 0 if there is still an irq active after this one was completed
+ * (Ignoring -1, this is the same as the RETTOBASE value before completion.)
+ */
+int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure);
+/**
+ * armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
+ * @s: the NVIC
+ * @irq: the exception number to mark pending
+ * @secure: false for non-banked exceptions or for the nonsecure
+ * version of a banked exception, true for the secure version of a banked
+ * exception.
+ *
+ * Return whether an exception is "ready", i.e. whether the exception is
+ * enabled and is configured at a priority which would allow it to
+ * interrupt the current execution priority. This controls whether the
+ * RDY bit for it in the FPCCR is set.
+ */
+bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure);
+/**
+ * armv7m_nvic_raw_execution_priority: return the raw execution priority
+ * @s: the NVIC
+ *
+ * Returns: the raw execution priority as defined by the v8M architecture.
+ * This is the execution priority minus the effects of AIRCR.PRIS,
+ * and minus any PRIMASK/FAULTMASK/BASEPRI priority boosting.
+ * (v8M ARM ARM I_PKLD.)
+ */
+int armv7m_nvic_raw_execution_priority(NVICState *s);
+/**
+ * armv7m_nvic_neg_prio_requested: return true if the requested execution
+ * priority is negative for the specified security state.
+ * @s: the NVIC
+ * @secure: the security state to test
+ * This corresponds to the pseudocode IsReqExecPriNeg().
+ */
+#ifndef CONFIG_USER_ONLY
+bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure);
+#else
+static inline bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
+{
+ return false;
+}
+#endif
+#ifndef CONFIG_USER_ONLY
+bool armv7m_nvic_can_take_pending_exception(NVICState *s);
+#else
+static inline bool armv7m_nvic_can_take_pending_exception(NVICState *s)
+{
+ return true;
+}
+#endif
+
#endif
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void arm_cpu_list(void);
uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
uint32_t cur_el, bool secure);

-/* Interface between CPU and Interrupt controller. */
-#ifndef CONFIG_USER_ONLY
-bool armv7m_nvic_can_take_pending_exception(NVICState *s);
-#else
-static inline bool armv7m_nvic_can_take_pending_exception(NVICState *s)
-{
- return true;
-}
-#endif
-/**
- * armv7m_nvic_set_pending: mark the specified exception as pending
- * @s: the NVIC
- * @irq: the exception number to mark pending
- * @secure: false for non-banked exceptions or for the nonsecure
- * version of a banked exception, true for the secure version of a banked
- * exception.
- *
- * Marks the specified exception as pending. Note that we will assert()
- * if @secure is true and @irq does not specify one of the fixed set
- * of architecturally banked exceptions.
- */
-void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure);
-/**
- * armv7m_nvic_set_pending_derived: mark this derived exception as pending
- * @s: the NVIC
- * @irq: the exception number to mark pending
- * @secure: false for non-banked exceptions or for the nonsecure
- * version of a banked exception, true for the secure version of a banked
- * exception.
- *
- * Similar to armv7m_nvic_set_pending(), but specifically for derived
- * exceptions (exceptions generated in the course of trying to take
- * a different exception).
- */
-void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure);
-/**
- * armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
- * @s: the NVIC
- * @irq: the exception number to mark pending
- * @secure: false for non-banked exceptions or for the nonsecure
- * version of a banked exception, true for the secure version of a banked
- * exception.
- *
- * Similar to armv7m_nvic_set_pending(), but specifically for exceptions
- * generated in the course of lazy stacking of FP registers.
- */
-void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure);
-/**
- * armv7m_nvic_get_pending_irq_info: return highest priority pending
- * exception, and whether it targets Secure state
- * @s: the NVIC
- * @pirq: set to pending exception number
- * @ptargets_secure: set to whether pending exception targets Secure
- *
- * This function writes the number of the highest priority pending
- * exception (the one which would be made active by
- * armv7m_nvic_acknowledge_irq()) to @pirq, and sets @ptargets_secure
- * to true if the current highest priority pending exception should
- * be taken to Secure state, false for NS.
- */
-void armv7m_nvic_get_pending_irq_info(NVICState *s, int *pirq,
- bool *ptargets_secure);
-/**
- * armv7m_nvic_acknowledge_irq: make highest priority pending exception active
- * @s: the NVIC
- *
- * Move the current highest priority pending exception from the pending
- * state to the active state, and update v7m.exception to indicate that
- * it is the exception currently being handled.
- */
-void armv7m_nvic_acknowledge_irq(NVICState *s);
-/**
- * armv7m_nvic_complete_irq: complete specified interrupt or exception
- * @s: the NVIC
- * @irq: the exception number to complete
- * @secure: true if this exception was secure
- *
- * Returns: -1 if the irq was not active
- * 1 if completing this irq brought us back to base (no active irqs)
- * 0 if there is still an irq active after this one was completed
- * (Ignoring -1, this is the same as the RETTOBASE value before completion.)
- */
-int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure);
-/**
- * armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
- * @s: the NVIC
- * @irq: the exception number to mark pending
- * @secure: false for non-banked exceptions or for the nonsecure
- * version of a banked exception, true for the secure version of a banked
- * exception.
- *
- * Return whether an exception is "ready", i.e. whether the exception is
- * enabled and is configured at a priority which would allow it to
- * interrupt the current execution priority. This controls whether the
- * RDY bit for it in the FPCCR is set.
- */
-bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure);
-/**
- * armv7m_nvic_raw_execution_priority: return the raw execution priority
- * @s: the NVIC
- *
- * Returns: the raw execution priority as defined by the v8M architecture.
- * This is the execution priority minus the effects of AIRCR.PRIS,
- * and minus any PRIMASK/FAULTMASK/BASEPRI priority boosting.
- * (v8M ARM ARM I_PKLD.)
- */
-int armv7m_nvic_raw_execution_priority(NVICState *s);
-/**
- * armv7m_nvic_neg_prio_requested: return true if the requested execution
- * priority is negative for the specified security state.
- * @s: the NVIC
- * @secure: the security state to test
- * This corresponds to the pseudocode IsReqExecPriNeg().
- */
-#ifndef CONFIG_USER_ONLY
-bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure);
-#else
-static inline bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
-{
- return false;
-}
-#endif
-
/* Interface for defining coprocessor registers.
* Registers are defined in tables of arm_cp_reginfo structs
* which are passed to define_arm_cp_regs().
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@
#if !defined(CONFIG_USER_ONLY)
#include "hw/loader.h"
#include "hw/boards.h"
+#ifdef CONFIG_TCG
#include "hw/intc/armv7m_nvic.h"
-#endif
+#endif /* CONFIG_TCG */
+#endif /* !CONFIG_USER_ONLY */
#include "sysemu/tcg.h"
#include "sysemu/qtest.h"
#include "sysemu/hw_accel.h"
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@
#include "hw/boards.h"
#endif
#include "cpregs.h"
+#if !defined(CONFIG_USER_ONLY) && defined(CONFIG_TCG)
+#include "hw/intc/armv7m_nvic.h"
+#endif


/* Share AArch32 -cpu max features with AArch64. */
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@
#include "exec/cpu_ldst.h"
#include "semihosting/common-semi.h"
#endif
+#if !defined(CONFIG_USER_ONLY)
+#include "hw/intc/armv7m_nvic.h"
+#endif

static void v7m_msr_xpsr(CPUARMState *env, uint32_t mask,
uint32_t reg, uint32_t val)
--
2.34.1
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Alex Bennée <alex.bennee@linaro.org>
2
2
3
This extension concerns not merging memory access, which TCG does
3
The two TCG tests for GICv2 and GICv3 are very heavy weight distros
4
not implement. Thus we can trivially enable this feature.
4
that take a long time to boot up, especially for an --enable-debug
5
Add a comment to handle_hint for the DGH instruction, but no code.
5
build. The total code coverage they give is:
6
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Overall coverage rate:
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
lines......: 11.2% (59584 of 530123 lines)
9
Message-id: 20220506180242.216785-23-richard.henderson@linaro.org
9
functions..: 15.0% (7436 of 49443 functions)
10
branches...: 6.3% (19273 of 303933 branches)
11
12
We already get pretty close to that with the machine_aarch64_virt
13
tests, which only do one full boot (~120s vs ~600s) of Alpine. We
14
expand the kernel+initrd boot (~8s) to test both GICs and also add an
15
RNG device and a block device to generate a few IRQs and exercise the
16
storage layer. With that we get to a coverage of:
17
18
Overall coverage rate:
19
lines......: 11.0% (58121 of 530123 lines)
20
functions..: 14.9% (7343 of 49443 functions)
21
branches...: 6.0% (18269 of 303933 branches)
22
23
which I feel is close enough given the massive time saving. If we want
24
to target any more sub-systems we can use lighter-weight, more directed
25
tests.
26
27
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
28
Reviewed-by: Fabiano Rosas <farosas@suse.de>
29
Acked-by: Richard Henderson <richard.henderson@linaro.org>
30
Message-id: 20230203181632.2919715-1-alex.bennee@linaro.org
31
Cc: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
32
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
33
---
12
docs/system/arm/emulation.rst | 1 +
34
tests/avocado/boot_linux.py | 48 ++++----------------
13
target/arm/cpu64.c | 1 +
35
tests/avocado/machine_aarch64_virt.py | 63 ++++++++++++++++++++++++---
14
target/arm/translate-a64.c | 1 +
36
2 files changed, 65 insertions(+), 46 deletions(-)
15
3 files changed, 3 insertions(+)
37
16
38
diff --git a/tests/avocado/boot_linux.py b/tests/avocado/boot_linux.py
17
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
18
index XXXXXXX..XXXXXXX 100644
39
index XXXXXXX..XXXXXXX 100644
19
--- a/docs/system/arm/emulation.rst
40
--- a/tests/avocado/boot_linux.py
20
+++ b/docs/system/arm/emulation.rst
41
+++ b/tests/avocado/boot_linux.py
21
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
42
@@ -XXX,XX +XXX,XX @@ def test_pc_q35_kvm(self):
22
- FEAT_CSV2_1p2 (Cache speculation variant 2, version 1.2)
43
self.launch_and_wait(set_up_ssh_connection=False)
23
- FEAT_CSV2_2 (Cache speculation variant 2, version 2)
44
24
- FEAT_CSV3 (Cache speculation variant 3)
45
25
+- FEAT_DGH (Data gathering hint)
46
-# For Aarch64 we only boot KVM tests in CI as the TCG tests are very
26
- FEAT_DIT (Data Independent Timing instructions)
47
-# heavyweight. There are lighter weight distros which we use in the
27
- FEAT_DPB (DC CVAP instruction)
48
-# machine_aarch64_virt.py tests.
28
- FEAT_Debugv8p2 (Debug changes for v8.2)
49
+# For Aarch64 we only boot KVM tests in CI as booting the current
29
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
50
+# Fedora OS in TCG tests is very heavyweight. There are lighter weight
51
+# distros which we use in the machine_aarch64_virt.py tests.
52
class BootLinuxAarch64(LinuxTest):
53
"""
54
:avocado: tags=arch:aarch64
55
:avocado: tags=machine:virt
56
- :avocado: tags=machine:gic-version=2
57
"""
58
timeout = 720
59
60
- def add_common_args(self):
61
- self.vm.add_args('-bios',
62
- os.path.join(BUILD_DIR, 'pc-bios',
63
- 'edk2-aarch64-code.fd'))
64
- self.vm.add_args('-device', 'virtio-rng-pci,rng=rng0')
65
- self.vm.add_args('-object', 'rng-random,id=rng0,filename=/dev/urandom')
66
-
67
- @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
68
- def test_fedora_cloud_tcg_gicv2(self):
69
- """
70
- :avocado: tags=accel:tcg
71
- :avocado: tags=cpu:max
72
- :avocado: tags=device:gicv2
73
- """
74
- self.require_accelerator("tcg")
75
- self.vm.add_args("-accel", "tcg")
76
- self.vm.add_args("-cpu", "max,lpa2=off")
77
- self.vm.add_args("-machine", "virt,gic-version=2")
78
- self.add_common_args()
79
- self.launch_and_wait(set_up_ssh_connection=False)
80
-
81
- @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
82
- def test_fedora_cloud_tcg_gicv3(self):
83
- """
84
- :avocado: tags=accel:tcg
85
- :avocado: tags=cpu:max
86
- :avocado: tags=device:gicv3
87
- """
88
- self.require_accelerator("tcg")
89
- self.vm.add_args("-accel", "tcg")
90
- self.vm.add_args("-cpu", "max,lpa2=off")
91
- self.vm.add_args("-machine", "virt,gic-version=3")
92
- self.add_common_args()
93
- self.launch_and_wait(set_up_ssh_connection=False)
94
-
95
def test_virt_kvm(self):
96
"""
97
:avocado: tags=accel:kvm
98
@@ -XXX,XX +XXX,XX @@ def test_virt_kvm(self):
99
self.require_accelerator("kvm")
100
self.vm.add_args("-accel", "kvm")
101
self.vm.add_args("-machine", "virt,gic-version=host")
102
- self.add_common_args()
103
+ self.vm.add_args('-bios',
104
+ os.path.join(BUILD_DIR, 'pc-bios',
105
+ 'edk2-aarch64-code.fd'))
106
+ self.vm.add_args('-device', 'virtio-rng-pci,rng=rng0')
107
+ self.vm.add_args('-object', 'rng-random,id=rng0,filename=/dev/urandom')
108
self.launch_and_wait(set_up_ssh_connection=False)
109
110
111
diff --git a/tests/avocado/machine_aarch64_virt.py b/tests/avocado/machine_aarch64_virt.py
30
index XXXXXXX..XXXXXXX 100644
112
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/cpu64.c
113
--- a/tests/avocado/machine_aarch64_virt.py
32
+++ b/target/arm/cpu64.c
114
+++ b/tests/avocado/machine_aarch64_virt.py
33
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
115
@@ -XXX,XX +XXX,XX @@
34
t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1); /* FEAT_SB */
116
35
t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1); /* FEAT_SPECRES */
117
import time
36
t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1); /* FEAT_BF16 */
118
import os
37
+ t = FIELD_DP64(t, ID_AA64ISAR1, DGH, 1); /* FEAT_DGH */
119
+import logging
38
t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1); /* FEAT_I8MM */
120
39
cpu->isar.id_aa64isar1 = t;
121
from avocado_qemu import QemuSystemTest
40
122
from avocado_qemu import wait_for_console_pattern
41
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
123
from avocado_qemu import exec_command
42
index XXXXXXX..XXXXXXX 100644
124
from avocado_qemu import BUILD_DIR
43
--- a/target/arm/translate-a64.c
125
+from avocado.utils import process
44
+++ b/target/arm/translate-a64.c
126
+from avocado.utils.path import find_command
45
@@ -XXX,XX +XXX,XX @@ static void handle_hint(DisasContext *s, uint32_t insn,
127
46
break;
128
class Aarch64VirtMachine(QemuSystemTest):
47
case 0b00100: /* SEV */
129
KERNEL_COMMON_COMMAND_LINE = 'printk.time=0 '
48
case 0b00101: /* SEVL */
130
@@ -XXX,XX +XXX,XX @@ def test_alpine_virt_tcg_gic_max(self):
49
+ case 0b00110: /* DGH */
131
self.wait_for_console_pattern('Welcome to Alpine Linux 3.16')
50
/* we treat all as NOP at least for now */
132
51
break;
133
52
case 0b00111: /* XPACLRI */
134
- def test_aarch64_virt(self):
135
+ def common_aarch64_virt(self, machine):
136
"""
137
- :avocado: tags=arch:aarch64
138
- :avocado: tags=machine:virt
139
- :avocado: tags=accel:tcg
140
- :avocado: tags=cpu:max
141
+ Common code to launch basic virt machine with kernel+initrd
142
+ and a scratch disk.
143
"""
144
+ logger = logging.getLogger('aarch64_virt')
145
+
146
kernel_url = ('https://fileserver.linaro.org/s/'
147
'z6B2ARM7DQT3HWN/download')
148
-
149
kernel_hash = 'ed11daab50c151dde0e1e9c9cb8b2d9bd3215347'
150
kernel_path = self.fetch_asset(kernel_url, asset_hash=kernel_hash)
151
152
@@ -XXX,XX +XXX,XX @@ def test_aarch64_virt(self):
153
'console=ttyAMA0')
154
self.require_accelerator("tcg")
155
self.vm.add_args('-cpu', 'max,pauth-impdef=on',
156
+ '-machine', machine,
157
'-accel', 'tcg',
158
'-kernel', kernel_path,
159
'-append', kernel_command_line)
160
+
161
+ # A RNG offers an easy way to generate a few IRQs
162
+ self.vm.add_args('-device', 'virtio-rng-pci,rng=rng0')
163
+ self.vm.add_args('-object',
164
+ 'rng-random,id=rng0,filename=/dev/urandom')
165
+
166
+ # Also add a scratch block device
167
+ logger.info('creating scratch qcow2 image')
168
+ image_path = os.path.join(self.workdir, 'scratch.qcow2')
169
+ qemu_img = os.path.join(BUILD_DIR, 'qemu-img')
170
+ if not os.path.exists(qemu_img):
171
+ qemu_img = find_command('qemu-img', False)
172
+ if qemu_img is False:
173
+ self.cancel('Could not find "qemu-img", which is required to '
174
+ 'create the temporary qcow2 image')
175
+ cmd = '%s create -f qcow2 %s 8M' % (qemu_img, image_path)
176
+ process.run(cmd)
177
+
178
+ # Add the device
179
+ self.vm.add_args('-blockdev',
180
+ f"driver=qcow2,file.driver=file,file.filename={image_path},node-name=scratch")
181
+ self.vm.add_args('-device',
182
+ 'virtio-blk-device,drive=scratch')
183
+
184
self.vm.launch()
185
self.wait_for_console_pattern('Welcome to Buildroot')
186
time.sleep(0.1)
187
exec_command(self, 'root')
188
time.sleep(0.1)
189
+ exec_command(self, 'dd if=/dev/hwrng of=/dev/vda bs=512 count=4')
190
+ time.sleep(0.1)
191
+ exec_command(self, 'md5sum /dev/vda')
192
+ time.sleep(0.1)
193
+ exec_command(self, 'cat /proc/interrupts')
194
+ time.sleep(0.1)
195
exec_command(self, 'cat /proc/self/maps')
196
time.sleep(0.1)
197
+
198
+ def test_aarch64_virt_gicv3(self):
199
+ """
200
+ :avocado: tags=arch:aarch64
201
+ :avocado: tags=machine:virt
202
+ :avocado: tags=accel:tcg
203
+ :avocado: tags=cpu:max
204
+ """
205
+ self.common_aarch64_virt("virt,gic_version=3")
206
+
207
+ def test_aarch64_virt_gicv2(self):
208
+ """
209
+ :avocado: tags=arch:aarch64
210
+ :avocado: tags=machine:virt
211
+ :avocado: tags=accel:tcg
212
+ :avocado: tags=cpu:max
213
+ """
214
+ self.common_aarch64_virt("virt,gic-version=2")
--
2.25.1

--
2.34.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Mostafa Saleh <smostafa@google.com>
2
2
3
Enable the Cortex-A76 for virt and sbsa-ref board use.
3
The GBPA register can be used to globally abort all
4
transactions.
4
5
6
It is described in the SMMU manual in "6.3.14 SMMU_GBPA".
7
The ABORT reset value is IMPLEMENTATION DEFINED; it is chosen to
8
be zero (do not abort incoming transactions).
9
10
Other fields have default values of Use Incoming.
11
12
If UPDATE is not set, the write is ignored. This is the only permitted
13
behavior in SMMUv3.2 and later (see 6.3.14.1, "Update procedure").
14
15
As this patch adds a new state to the SMMU (GBPA), it is added
16
in a new subsection for forward migration compatibility.
17
GBPA is only migrated if its value is different from the reset value.
18
This keeps backward migration compatibility when software has not written
19
the register.
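
As an illustration only (not part of this patch), a guest could program the
new register roughly as follows, using the offset and fields this patch
defines (GBPA at 0x44, ABORT bit 20, UPDATE bit 31); the register-base
pointer is an assumption of the sketch:

    #include <stdint.h>

    #define SMMU_GBPA    0x44
    #define GBPA_ABORT   (1u << 20)
    #define GBPA_UPDATE  (1u << 31)

    /* Request that incoming transactions abort while SMMUEN == 0.
     * Writes without UPDATE set are ignored by the SMMU. */
    static void smmu_set_global_abort(volatile uint32_t *smmu_regs)
    {
        smmu_regs[SMMU_GBPA / 4] = GBPA_UPDATE | GBPA_ABORT;
        /* Architecturally, poll until UPDATE reads back as zero; this
         * model completes the update synchronously, so the loop exits
         * on the first read. */
        while (smmu_regs[SMMU_GBPA / 4] & GBPA_UPDATE) {
        }
    }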
20
21
Signed-off-by: Mostafa Saleh <smostafa@google.com>
22
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
23
Reviewed-by: Eric Auger <eric.auger@redhat.com>
24
Message-id: 20230214094009.2445653-1-smostafa@google.com
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
25
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20220506180242.216785-24-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
27
---
10
docs/system/arm/virt.rst | 1 +
28
hw/arm/smmuv3-internal.h | 7 +++++++
11
hw/arm/sbsa-ref.c | 1 +
29
include/hw/arm/smmuv3.h | 1 +
12
hw/arm/virt.c | 1 +
30
hw/arm/smmuv3.c | 43 +++++++++++++++++++++++++++++++++++++++-
13
target/arm/cpu64.c | 66 ++++++++++++++++++++++++++++++++++++++++
31
3 files changed, 50 insertions(+), 1 deletion(-)
14
4 files changed, 69 insertions(+)
15
32
16
diff --git a/docs/system/arm/virt.rst b/docs/system/arm/virt.rst
33
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
17
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
18
--- a/docs/system/arm/virt.rst
35
--- a/hw/arm/smmuv3-internal.h
19
+++ b/docs/system/arm/virt.rst
36
+++ b/hw/arm/smmuv3-internal.h
20
@@ -XXX,XX +XXX,XX @@ Supported guest CPU types:
37
@@ -XXX,XX +XXX,XX @@ REG32(CR0ACK, 0x24)
21
- ``cortex-a53`` (64-bit)
38
REG32(CR1, 0x28)
22
- ``cortex-a57`` (64-bit)
39
REG32(CR2, 0x2c)
23
- ``cortex-a72`` (64-bit)
40
REG32(STATUSR, 0x40)
24
+- ``cortex-a76`` (64-bit)
41
+REG32(GBPA, 0x44)
25
- ``a64fx`` (64-bit)
42
+ FIELD(GBPA, ABORT, 20, 1)
26
- ``host`` (with KVM only)
43
+ FIELD(GBPA, UPDATE, 31, 1)
27
- ``max`` (same as ``host`` for KVM; best possible emulation with TCG)
44
+
28
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
45
+/* Use incoming. */
46
+#define SMMU_GBPA_RESET_VAL 0x1000
47
+
48
REG32(IRQ_CTRL, 0x50)
49
FIELD(IRQ_CTRL, GERROR_IRQEN, 0, 1)
50
FIELD(IRQ_CTRL, PRI_IRQEN, 1, 1)
51
diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
29
index XXXXXXX..XXXXXXX 100644
52
index XXXXXXX..XXXXXXX 100644
30
--- a/hw/arm/sbsa-ref.c
53
--- a/include/hw/arm/smmuv3.h
31
+++ b/hw/arm/sbsa-ref.c
54
+++ b/include/hw/arm/smmuv3.h
32
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
55
@@ -XXX,XX +XXX,XX @@ struct SMMUv3State {
33
static const char * const valid_cpus[] = {
56
uint32_t cr[3];
34
ARM_CPU_TYPE_NAME("cortex-a57"),
57
uint32_t cr0ack;
35
ARM_CPU_TYPE_NAME("cortex-a72"),
58
uint32_t statusr;
36
+ ARM_CPU_TYPE_NAME("cortex-a76"),
59
+ uint32_t gbpa;
37
ARM_CPU_TYPE_NAME("max"),
60
uint32_t irq_ctrl;
61
uint32_t gerror;
62
uint32_t gerrorn;
63
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/hw/arm/smmuv3.c
66
+++ b/hw/arm/smmuv3.c
67
@@ -XXX,XX +XXX,XX @@ static void smmuv3_init_regs(SMMUv3State *s)
68
s->gerror = 0;
69
s->gerrorn = 0;
70
s->statusr = 0;
71
+ s->gbpa = SMMU_GBPA_RESET_VAL;
72
}
73
74
static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
75
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
76
qemu_mutex_lock(&s->mutex);
77
78
if (!smmu_enabled(s)) {
79
- status = SMMU_TRANS_DISABLE;
80
+ if (FIELD_EX32(s->gbpa, GBPA, ABORT)) {
81
+ status = SMMU_TRANS_ABORT;
82
+ } else {
83
+ status = SMMU_TRANS_DISABLE;
84
+ }
85
goto epilogue;
86
}
87
88
@@ -XXX,XX +XXX,XX @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
89
case A_GERROR_IRQ_CFG2:
90
s->gerror_irq_cfg2 = data;
91
return MEMTX_OK;
92
+ case A_GBPA:
93
+ /*
94
+ * If UPDATE is not set, the write is ignored. This is the only
95
+ * permitted behavior in SMMUv3.2 and later.
96
+ */
97
+ if (data & R_GBPA_UPDATE_MASK) {
98
+ /* Ignore update bit as write is synchronous. */
99
+ s->gbpa = data & ~R_GBPA_UPDATE_MASK;
100
+ }
101
+ return MEMTX_OK;
102
case A_STRTAB_BASE: /* 64b */
103
s->strtab_base = deposit64(s->strtab_base, 0, 32, data);
104
return MEMTX_OK;
105
@@ -XXX,XX +XXX,XX @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
106
case A_STATUSR:
107
*data = s->statusr;
108
return MEMTX_OK;
109
+ case A_GBPA:
110
+ *data = s->gbpa;
111
+ return MEMTX_OK;
112
case A_IRQ_CTRL:
113
case A_IRQ_CTRL_ACK:
114
*data = s->irq_ctrl;
115
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_smmuv3_queue = {
116
},
38
};
117
};
39
118
40
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
119
+static bool smmuv3_gbpa_needed(void *opaque)
41
index XXXXXXX..XXXXXXX 100644
42
--- a/hw/arm/virt.c
43
+++ b/hw/arm/virt.c
44
@@ -XXX,XX +XXX,XX @@ static const char *valid_cpus[] = {
45
ARM_CPU_TYPE_NAME("cortex-a53"),
46
ARM_CPU_TYPE_NAME("cortex-a57"),
47
ARM_CPU_TYPE_NAME("cortex-a72"),
48
+ ARM_CPU_TYPE_NAME("cortex-a76"),
49
ARM_CPU_TYPE_NAME("a64fx"),
50
ARM_CPU_TYPE_NAME("host"),
51
ARM_CPU_TYPE_NAME("max"),
52
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/cpu64.c
55
+++ b/target/arm/cpu64.c
56
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
57
define_cortex_a72_a57_a53_cp_reginfo(cpu);
58
}
59
60
+static void aarch64_a76_initfn(Object *obj)
61
+{
120
+{
62
+ ARMCPU *cpu = ARM_CPU(obj);
121
+ SMMUv3State *s = opaque;
63
+
122
+
64
+ cpu->dtb_compatible = "arm,cortex-a76";
123
+ /* Only migrate GBPA if it has different reset value. */
65
+ set_feature(&cpu->env, ARM_FEATURE_V8);
124
+ return s->gbpa != SMMU_GBPA_RESET_VAL;
66
+ set_feature(&cpu->env, ARM_FEATURE_NEON);
67
+ set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
68
+ set_feature(&cpu->env, ARM_FEATURE_AARCH64);
69
+ set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
70
+ set_feature(&cpu->env, ARM_FEATURE_EL2);
71
+ set_feature(&cpu->env, ARM_FEATURE_EL3);
72
+ set_feature(&cpu->env, ARM_FEATURE_PMU);
73
+
74
+ /* Ordered by B2.4 AArch64 registers by functional group */
75
+ cpu->clidr = 0x82000023;
76
+ cpu->ctr = 0x8444C004;
77
+ cpu->dcz_blocksize = 4;
78
+ cpu->isar.id_aa64dfr0 = 0x0000000010305408ull;
79
+ cpu->isar.id_aa64isar0 = 0x0000100010211120ull;
80
+ cpu->isar.id_aa64isar1 = 0x0000000000100001ull;
81
+ cpu->isar.id_aa64mmfr0 = 0x0000000000101122ull;
82
+ cpu->isar.id_aa64mmfr1 = 0x0000000010212122ull;
83
+ cpu->isar.id_aa64mmfr2 = 0x0000000000001011ull;
84
+ cpu->isar.id_aa64pfr0 = 0x1100000010111112ull; /* GIC filled in later */
85
+ cpu->isar.id_aa64pfr1 = 0x0000000000000010ull;
86
+ cpu->id_afr0 = 0x00000000;
87
+ cpu->isar.id_dfr0 = 0x04010088;
88
+ cpu->isar.id_isar0 = 0x02101110;
89
+ cpu->isar.id_isar1 = 0x13112111;
90
+ cpu->isar.id_isar2 = 0x21232042;
91
+ cpu->isar.id_isar3 = 0x01112131;
92
+ cpu->isar.id_isar4 = 0x00010142;
93
+ cpu->isar.id_isar5 = 0x01011121;
94
+ cpu->isar.id_isar6 = 0x00000010;
95
+ cpu->isar.id_mmfr0 = 0x10201105;
96
+ cpu->isar.id_mmfr1 = 0x40000000;
97
+ cpu->isar.id_mmfr2 = 0x01260000;
98
+ cpu->isar.id_mmfr3 = 0x02122211;
99
+ cpu->isar.id_mmfr4 = 0x00021110;
100
+ cpu->isar.id_pfr0 = 0x10010131;
101
+ cpu->isar.id_pfr1 = 0x00010000; /* GIC filled in later */
102
+ cpu->isar.id_pfr2 = 0x00000011;
103
+ cpu->midr = 0x414fd0b1; /* r4p1 */
104
+ cpu->revidr = 0;
105
+
106
+ /* From B2.18 CCSIDR_EL1 */
107
+ cpu->ccsidr[0] = 0x701fe01a; /* 64KB L1 dcache */
108
+ cpu->ccsidr[1] = 0x201fe01a; /* 64KB L1 icache */
109
+ cpu->ccsidr[2] = 0x707fe03a; /* 512KB L2 cache */
110
+
111
+ /* From B2.93 SCTLR_EL3 */
112
+ cpu->reset_sctlr = 0x30c50838;
113
+
114
+ /* From B4.23 ICH_VTR_EL2 */
115
+ cpu->gic_num_lrs = 4;
116
+ cpu->gic_vpribits = 5;
117
+ cpu->gic_vprebits = 5;
118
+
119
+ /* From B5.1 AdvSIMD AArch64 register summary */
120
+ cpu->isar.mvfr0 = 0x10110222;
121
+ cpu->isar.mvfr1 = 0x13211111;
122
+ cpu->isar.mvfr2 = 0x00000043;
123
+}
125
+}
124
+
126
+
125
void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
127
+static const VMStateDescription vmstate_gbpa = {
126
{
128
+ .name = "smmuv3/gbpa",
127
/*
129
+ .version_id = 1,
128
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo aarch64_cpus[] = {
130
+ .minimum_version_id = 1,
129
{ .name = "cortex-a57", .initfn = aarch64_a57_initfn },
131
+ .needed = smmuv3_gbpa_needed,
130
{ .name = "cortex-a53", .initfn = aarch64_a53_initfn },
132
+ .fields = (VMStateField[]) {
131
{ .name = "cortex-a72", .initfn = aarch64_a72_initfn },
133
+ VMSTATE_UINT32(gbpa, SMMUv3State),
132
+ { .name = "cortex-a76", .initfn = aarch64_a76_initfn },
134
+ VMSTATE_END_OF_LIST()
133
{ .name = "a64fx", .initfn = aarch64_a64fx_initfn },
135
+ }
134
{ .name = "max", .initfn = aarch64_max_initfn },
136
+};
135
#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
137
+
138
static const VMStateDescription vmstate_smmuv3 = {
139
.name = "smmuv3",
140
.version_id = 1,
141
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_smmuv3 = {
142
143
VMSTATE_END_OF_LIST(),
144
},
145
+ .subsections = (const VMStateDescription * []) {
146
+ &vmstate_gbpa,
147
+ NULL
148
+ }
149
};
150
151
static void smmuv3_instance_init(Object *obj)
136
--
152
--
137
2.25.1
153
2.34.1
1
From: Leif Lindholm <quic_llindhol@quicinc.com>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
The sbsa-ref machine is continuously evolving. Some of the changes we
3
Since commit acc0b8b05a when running the ZynqMP ZCU102 board with
4
want to make in the near future, to align with real components (e.g.
4
a QEMU configured using --without-default-devices, we get:
5
the GIC-700), will break compatibility for existing firmware.
6
5
7
Introduce two new properties to the DT generated on machine generation:
6
$ qemu-system-aarch64 -M xlnx-zcu102
8
- machine-version-major
7
qemu-system-aarch64: missing object type 'usb_dwc3'
9
To be incremented when a platform change makes the machine
8
Abort trap: 6
10
incompatible with existing firmware.
11
- machine-version-minor
12
To be incremented when functionality is added to the machine
13
without causing incompatibility with existing firmware.
14
to be reset to 0 when machine-version-major is incremented.
15
9
16
This versioning scheme is *neither*:
10
Fix by adding the missing Kconfig dependency.
17
- A QEMU versioned machine type; a given version of QEMU will emulate
18
a given version of the platform.
19
- A reflection of level of SBSA (now SystemReady SR) support provided.
20
11
21
The version will increment on guest-visible functional changes only,
12
Fixes: acc0b8b05a ("hw/arm/xlnx-zynqmp: Connect ZynqMP's USB controllers")
22
akin to a revision ID register found on a physical platform.
13
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
23
14
Message-id: 20230216092327.2203-1-philmd@linaro.org
24
These properties are both introduced with the value 0.
15
Reviewed-by: Francisco Iglesias <francisco.iglesias@amd.com>
25
(Hence, a machine where the DT is lacking these nodes is equivalent
26
to version 0.0.)
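
As a sketch only (not part of this patch), firmware or a guest could read
the new properties from the DT root node with libfdt; treating an absent
property as 0 matches the note above:

    #include <libfdt.h>
    #include <stdint.h>

    static uint32_t sbsa_machine_version(const void *fdt, const char *name)
    {
        int len;
        const fdt32_t *val = fdt_getprop(fdt, 0 /* root node */, name, &len);

        if (!val || len != sizeof(*val)) {
            return 0;   /* property absent: behaves as version 0 */
        }
        return fdt32_to_cpu(*val);
    }

    /* e.g. sbsa_machine_version(fdt, "machine-version-major") */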
27
28
Signed-off-by: Leif Lindholm <quic_llindhol@quicinc.com>
29
Message-id: 20220505113947.75714-1-quic_llindhol@quicinc.com
30
Cc: Peter Maydell <peter.maydell@linaro.org>
31
Cc: Radoslaw Biernacki <rad@semihalf.com>
32
Cc: Cédric Le Goater <clg@kaod.org>
33
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
34
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
35
---
17
---
36
hw/arm/sbsa-ref.c | 14 ++++++++++++++
18
hw/arm/Kconfig | 1 +
37
1 file changed, 14 insertions(+)
19
1 file changed, 1 insertion(+)
38
20
39
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
21
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
40
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
41
--- a/hw/arm/sbsa-ref.c
23
--- a/hw/arm/Kconfig
42
+++ b/hw/arm/sbsa-ref.c
24
+++ b/hw/arm/Kconfig
43
@@ -XXX,XX +XXX,XX @@ static void create_fdt(SBSAMachineState *sms)
25
@@ -XXX,XX +XXX,XX @@ config XLNX_ZYNQMP_ARM
44
qemu_fdt_setprop_cell(fdt, "/", "#address-cells", 0x2);
26
select XLNX_CSU_DMA
45
qemu_fdt_setprop_cell(fdt, "/", "#size-cells", 0x2);
27
select XLNX_ZYNQMP
46
28
select XLNX_ZDMA
47
+ /*
29
+ select USB_DWC3
48
+ * This versioning scheme is for informing platform fw only. It is neither:
30
49
+ * - A QEMU versioned machine type; a given version of QEMU will emulate
31
config XLNX_VERSAL
50
+ * a given version of the platform.
32
bool
51
+ * - A reflection of level of SBSA (now SystemReady SR) support provided.
52
+ *
53
+ * machine-version-major: updated when changes breaking fw compatibility
54
+ * are introduced.
55
+ * machine-version-minor: updated when features are added that don't break
56
+ * fw compatibility.
57
+ */
58
+ qemu_fdt_setprop_cell(fdt, "/", "machine-version-major", 0);
59
+ qemu_fdt_setprop_cell(fdt, "/", "machine-version-minor", 0);
60
+
61
if (ms->numa_state->have_numa_distance) {
62
int size = nb_numa_nodes * nb_numa_nodes * 3 * sizeof(uint32_t);
63
uint32_t *matrix = g_malloc0(size);
64
--
33
--
65
2.25.1
34
2.34.1
66
35
67
36
1
From: Gavin Shan <gshan@redhat.com>
1
From: Cornelia Huck <cohuck@redhat.com>
2
2
3
When CPU-to-NUMA association isn't explicitly provided by users,
3
Just use current_accel_name() directly.
4
the default one is given by mc->get_default_cpu_node_id(). However,
5
the CPU topology isn't fully considered in the default association
6
and this causes CPU topology broken warnings on booting Linux guest.
7
4
8
For example, the following warning messages are observed when the
5
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
9
Linux guest is booted with the following command lines.
6
Reviewed-by: Eric Auger <eric.auger@redhat.com>
10
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
/home/gavin/sandbox/qemu.main/build/qemu-system-aarch64 \
12
-accel kvm -machine virt,gic-version=host \
13
-cpu host \
14
-smp 6,sockets=2,cores=3,threads=1 \
15
-m 1024M,slots=16,maxmem=64G \
16
-object memory-backend-ram,id=mem0,size=128M \
17
-object memory-backend-ram,id=mem1,size=128M \
18
-object memory-backend-ram,id=mem2,size=128M \
19
-object memory-backend-ram,id=mem3,size=128M \
20
-object memory-backend-ram,id=mem4,size=128M \
21
-object memory-backend-ram,id=mem4,size=384M \
22
-numa node,nodeid=0,memdev=mem0 \
23
-numa node,nodeid=1,memdev=mem1 \
24
-numa node,nodeid=2,memdev=mem2 \
25
-numa node,nodeid=3,memdev=mem3 \
26
-numa node,nodeid=4,memdev=mem4 \
27
-numa node,nodeid=5,memdev=mem5
28
:
29
alternatives: patching kernel code
30
BUG: arch topology borken
31
the CLS domain not a subset of the MC domain
32
<the above error log repeats>
33
BUG: arch topology borken
34
the DIE domain not a subset of the NODE domain
35
36
With current implementation of mc->get_default_cpu_node_id(),
37
CPU#0 to CPU#5 are associated with NODE#0 to NODE#5 separately.
38
That's incorrect because CPU#0/1/2 should be associated with same
39
NUMA node because they're seated in same socket.
40
41
This fixes the issue by considering the socket ID when the default
42
CPU-to-NUMA association is provided in virt_possible_cpu_arch_ids().
43
With this applied, no more CPU topology broken warnings are seen
44
from the Linux guest. The 6 CPUs are associated with NODE#0/1, but
45
there are no CPUs associated with NODE#2/3/4/5.
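
For illustration only (this is not QEMU code), the new default assignment
is socket_id % num_nodes; for the configuration above (2 sockets x 3 cores,
6 NUMA nodes) it works out as follows:

    #include <stdio.h>

    int main(void)
    {
        /* sockets=2, cores=3, threads=1, 6 NUMA nodes, as in the example */
        for (int cpu = 0; cpu < 6; cpu++) {
            int socket_id = cpu / 3;
            printf("CPU#%d: socket %d -> node %d (old default was node %d)\n",
                   cpu, socket_id, socket_id % 6, cpu % 6);
        }
        return 0;
    }

CPUs 0-2 land on node 0 and CPUs 3-5 on node 1, leaving nodes 2-5 without
CPUs, as described above.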
46
47
Signed-off-by: Gavin Shan <gshan@redhat.com>
48
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
49
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
50
Message-id: 20220503140304.855514-6-gshan@redhat.com
51
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
52
---
9
---
53
hw/arm/virt.c | 4 +++-
10
hw/arm/virt.c | 6 +++---
54
1 file changed, 3 insertions(+), 1 deletion(-)
11
1 file changed, 3 insertions(+), 3 deletions(-)
55
12
56
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
13
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
57
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
58
--- a/hw/arm/virt.c
15
--- a/hw/arm/virt.c
59
+++ b/hw/arm/virt.c
16
+++ b/hw/arm/virt.c
60
@@ -XXX,XX +XXX,XX @@ virt_cpu_index_to_props(MachineState *ms, unsigned cpu_index)
17
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
61
18
if (vms->secure && (kvm_enabled() || hvf_enabled())) {
62
static int64_t virt_get_default_cpu_node_id(const MachineState *ms, int idx)
19
error_report("mach-virt: %s does not support providing "
63
{
20
"Security extensions (TrustZone) to the guest CPU",
64
- return idx % ms->numa_state->num_nodes;
21
- kvm_enabled() ? "KVM" : "HVF");
65
+ int64_t socket_id = ms->possible_cpus->cpus[idx].props.socket_id;
22
+ current_accel_name());
66
+
23
exit(1);
67
+ return socket_id % ms->numa_state->num_nodes;
24
}
68
}
25
69
26
if (vms->virt && (kvm_enabled() || hvf_enabled())) {
70
static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
27
error_report("mach-virt: %s does not support providing "
28
"Virtualization extensions to the guest CPU",
29
- kvm_enabled() ? "KVM" : "HVF");
30
+ current_accel_name());
31
exit(1);
32
}
33
34
if (vms->mte && (kvm_enabled() || hvf_enabled())) {
35
error_report("mach-virt: %s does not support providing "
36
"MTE to the guest CPU",
37
- kvm_enabled() ? "KVM" : "HVF");
38
+ current_accel_name());
39
exit(1);
40
}
41
71
--
42
--
72
2.25.1
43
2.34.1
1
From: Leif Lindholm <quic_llindhol@quicinc.com>
1
From: Hao Wu <wuhaotsh@google.com>
2
2
3
NUVIA was acquired by Qualcomm in March 2021, but kept functioning on
3
Havard has not been working on the Nuvoton systems for a while
4
separate infrastructure for a transitional period. We've now switched
4
and won't be able to do any work on them in the future. So I'll
5
over to contributing as Qualcomm Innovation Center (quicinc), so update
5
take over maintaining the Nuvoton system from him.
6
my email address to reflect this.
7
6
8
Signed-off-by: Leif Lindholm <quic_llindhol@quicinc.com>
7
Signed-off-by: Hao Wu <wuhaotsh@google.com>
9
Message-id: 20220505113740.75565-1-quic_llindhol@quicinc.com
8
Acked-by: Havard Skinnemoen <hskinnemoen@google.com>
10
Cc: Leif Lindholm <leif@nuviainc.com>
9
Reviewed-by: Philippe Mathieu-Daude <philmd@linaro.org>
11
Cc: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20230208235433.3989937-2-wuhaotsh@google.com
12
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
13
[Fixed commit message typo]
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
12
---
16
.mailmap | 3 ++-
17
MAINTAINERS | 2 +-
13
MAINTAINERS | 2 +-
18
2 files changed, 3 insertions(+), 2 deletions(-)
14
1 file changed, 1 insertion(+), 1 deletion(-)
19
15
20
diff --git a/.mailmap b/.mailmap
21
index XXXXXXX..XXXXXXX 100644
22
--- a/.mailmap
23
+++ b/.mailmap
24
@@ -XXX,XX +XXX,XX @@ Greg Kurz <groug@kaod.org> <gkurz@linux.vnet.ibm.com>
25
Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
26
Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
27
James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
28
-Leif Lindholm <leif@nuviainc.com> <leif.lindholm@linaro.org>
29
+Leif Lindholm <quic_llindhol@quicinc.com> <leif.lindholm@linaro.org>
30
+Leif Lindholm <quic_llindhol@quicinc.com> <leif@nuviainc.com>
31
Radoslaw Biernacki <rad@semihalf.com> <radoslaw.biernacki@linaro.org>
32
Paul Burton <paulburton@kernel.org> <paul.burton@mips.com>
33
Paul Burton <paulburton@kernel.org> <paul.burton@imgtec.com>
34
diff --git a/MAINTAINERS b/MAINTAINERS
16
diff --git a/MAINTAINERS b/MAINTAINERS
35
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
36
--- a/MAINTAINERS
18
--- a/MAINTAINERS
37
+++ b/MAINTAINERS
19
+++ b/MAINTAINERS
38
@@ -XXX,XX +XXX,XX @@ F: include/hw/ssi/imx_spi.h
20
@@ -XXX,XX +XXX,XX @@ F: include/hw/net/mv88w8618_eth.h
39
SBSA-REF
21
F: docs/system/arm/musicpal.rst
40
M: Radoslaw Biernacki <rad@semihalf.com>
22
41
M: Peter Maydell <peter.maydell@linaro.org>
23
Nuvoton NPCM7xx
42
-R: Leif Lindholm <leif@nuviainc.com>
24
-M: Havard Skinnemoen <hskinnemoen@google.com>
43
+R: Leif Lindholm <quic_llindhol@quicinc.com>
25
M: Tyrone Ting <kfting@nuvoton.com>
26
+M: Hao Wu <wuhaotsh@google.com>
44
L: qemu-arm@nongnu.org
27
L: qemu-arm@nongnu.org
45
S: Maintained
28
S: Supported
46
F: hw/arm/sbsa-ref.c
29
F: hw/*/npcm7xx*
47
--
30
--
48
2.25.1
31
2.34.1
49
50
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Hao Wu <wuhaotsh@google.com>
2
2
3
Check for and defer any pending virtual SError.
3
Nuvoton's PSPI is a general purpose SPI module which enables
4
connections to SPI-based peripheral devices.
4
5
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Hao Wu <wuhaotsh@google.com>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Chris Rauer <crauer@google.com>
7
Message-id: 20220506180242.216785-17-richard.henderson@linaro.org
8
Reviewed-by: Philippe Mathieu-Daude <philmd@linaro.org>
9
Message-id: 20230208235433.3989937-3-wuhaotsh@google.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
11
---
10
target/arm/helper.h | 1 +
12
MAINTAINERS | 6 +-
11
target/arm/a32.decode | 16 ++++++++------
13
include/hw/ssi/npcm_pspi.h | 53 +++++++++
12
target/arm/t32.decode | 18 ++++++++--------
14
hw/ssi/npcm_pspi.c | 221 +++++++++++++++++++++++++++++++++++++
13
target/arm/op_helper.c | 43 ++++++++++++++++++++++++++++++++++++++
15
hw/ssi/meson.build | 2 +-
14
target/arm/translate-a64.c | 17 +++++++++++++++
16
hw/ssi/trace-events | 5 +
15
target/arm/translate.c | 23 ++++++++++++++++++++
17
5 files changed, 283 insertions(+), 4 deletions(-)
16
6 files changed, 103 insertions(+), 15 deletions(-)
18
create mode 100644 include/hw/ssi/npcm_pspi.h
19
create mode 100644 hw/ssi/npcm_pspi.c
17
20
18
diff --git a/target/arm/helper.h b/target/arm/helper.h
21
diff --git a/MAINTAINERS b/MAINTAINERS
19
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper.h
23
--- a/MAINTAINERS
21
+++ b/target/arm/helper.h
24
+++ b/MAINTAINERS
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_1(wfe, void, env)
25
@@ -XXX,XX +XXX,XX @@ M: Tyrone Ting <kfting@nuvoton.com>
23
DEF_HELPER_1(yield, void, env)
26
M: Hao Wu <wuhaotsh@google.com>
24
DEF_HELPER_1(pre_hvc, void, env)
27
L: qemu-arm@nongnu.org
25
DEF_HELPER_2(pre_smc, void, env, i32)
28
S: Supported
26
+DEF_HELPER_1(vesb, void, env)
29
-F: hw/*/npcm7xx*
27
30
-F: include/hw/*/npcm7xx*
28
DEF_HELPER_3(cpsr_write, void, env, i32, i32)
31
-F: tests/qtest/npcm7xx*
29
DEF_HELPER_2(cpsr_write_eret, void, env, i32)
32
+F: hw/*/npcm*
30
diff --git a/target/arm/a32.decode b/target/arm/a32.decode
33
+F: include/hw/*/npcm*
31
index XXXXXXX..XXXXXXX 100644
34
+F: tests/qtest/npcm*
32
--- a/target/arm/a32.decode
35
F: pc-bios/npcm7xx_bootrom.bin
33
+++ b/target/arm/a32.decode
36
F: roms/vbootrom
34
@@ -XXX,XX +XXX,XX @@ SMULTT .... 0001 0110 .... 0000 .... 1110 .... @rd0mn
37
F: docs/system/arm/nuvoton.rst
35
38
diff --git a/include/hw/ssi/npcm_pspi.h b/include/hw/ssi/npcm_pspi.h
36
{
39
new file mode 100644
37
{
40
index XXXXXXX..XXXXXXX
38
- YIELD ---- 0011 0010 0000 1111 ---- 0000 0001
41
--- /dev/null
39
- WFE ---- 0011 0010 0000 1111 ---- 0000 0010
42
+++ b/include/hw/ssi/npcm_pspi.h
40
- WFI ---- 0011 0010 0000 1111 ---- 0000 0011
43
@@ -XXX,XX +XXX,XX @@
41
+ [
42
+ YIELD ---- 0011 0010 0000 1111 ---- 0000 0001
43
+ WFE ---- 0011 0010 0000 1111 ---- 0000 0010
44
+ WFI ---- 0011 0010 0000 1111 ---- 0000 0011
45
46
- # TODO: Implement SEV, SEVL; may help SMP performance.
47
- # SEV ---- 0011 0010 0000 1111 ---- 0000 0100
48
- # SEVL ---- 0011 0010 0000 1111 ---- 0000 0101
49
+ # TODO: Implement SEV, SEVL; may help SMP performance.
50
+ # SEV ---- 0011 0010 0000 1111 ---- 0000 0100
51
+ # SEVL ---- 0011 0010 0000 1111 ---- 0000 0101
52
+
53
+ ESB ---- 0011 0010 0000 1111 ---- 0001 0000
54
+ ]
55
56
# The canonical nop ends in 00000000, but the whole of the
57
# rest of the space executes as nop if otherwise unsupported.
58
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/t32.decode
61
+++ b/target/arm/t32.decode
62
@@ -XXX,XX +XXX,XX @@ CLZ 1111 1010 1011 ---- 1111 .... 1000 .... @rdm
63
[
64
# Hints, and CPS
65
{
66
- YIELD 1111 0011 1010 1111 1000 0000 0000 0001
67
- WFE 1111 0011 1010 1111 1000 0000 0000 0010
68
- WFI 1111 0011 1010 1111 1000 0000 0000 0011
69
+ [
70
+ YIELD 1111 0011 1010 1111 1000 0000 0000 0001
71
+ WFE 1111 0011 1010 1111 1000 0000 0000 0010
72
+ WFI 1111 0011 1010 1111 1000 0000 0000 0011
73
74
- # TODO: Implement SEV, SEVL; may help SMP performance.
75
- # SEV 1111 0011 1010 1111 1000 0000 0000 0100
76
- # SEVL 1111 0011 1010 1111 1000 0000 0000 0101
77
+ # TODO: Implement SEV, SEVL; may help SMP performance.
78
+ # SEV 1111 0011 1010 1111 1000 0000 0000 0100
79
+ # SEVL 1111 0011 1010 1111 1000 0000 0000 0101
80
81
- # For M-profile minimal-RAS ESB can be a NOP, which is the
82
- # default behaviour since it is in the hint space.
83
- # ESB 1111 0011 1010 1111 1000 0000 0001 0000
84
+ ESB 1111 0011 1010 1111 1000 0000 0001 0000
85
+ ]
86
87
# The canonical nop ends in 0000 0000, but the whole rest
88
# of the space is "reserved hint, behaves as nop".
89
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/op_helper.c
92
+++ b/target/arm/op_helper.c
93
@@ -XXX,XX +XXX,XX @@ void HELPER(probe_access)(CPUARMState *env, target_ulong ptr,
94
access_type, mmu_idx, ra);
95
}
96
}
97
+
98
+/*
44
+/*
99
+ * This function corresponds to AArch64.vESBOperation().
45
+ * Nuvoton Peripheral SPI Module
100
+ * Note that the AArch32 version is not functionally different.
46
+ *
47
+ * Copyright 2023 Google LLC
48
+ *
49
+ * This program is free software; you can redistribute it and/or modify it
50
+ * under the terms of the GNU General Public License as published by the
51
+ * Free Software Foundation; either version 2 of the License, or
52
+ * (at your option) any later version.
53
+ *
54
+ * This program is distributed in the hope that it will be useful, but WITHOUT
55
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
56
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
57
+ * for more details.
101
+ */
58
+ */
102
+void HELPER(vesb)(CPUARMState *env)
59
+#ifndef NPCM_PSPI_H
103
+{
60
+#define NPCM_PSPI_H
104
+ /*
61
+
105
+ * The EL2Enabled() check is done inside arm_hcr_el2_eff,
62
+#include "hw/ssi/ssi.h"
106
+ * and will return HCR_EL2.VSE == 0, so nothing happens.
63
+#include "hw/sysbus.h"
107
+ */
64
+
108
+ uint64_t hcr = arm_hcr_el2_eff(env);
65
+/*
109
+ bool enabled = !(hcr & HCR_TGE) && (hcr & HCR_AMO);
66
+ * Number of registers in our device state structure. Don't change this without
110
+ bool pending = enabled && (hcr & HCR_VSE);
67
+ * incrementing the version_id in the vmstate.
111
+ bool masked = (env->daif & PSTATE_A);
68
+ */
112
+
69
+#define NPCM_PSPI_NR_REGS 3
113
+ /* If VSE pending and masked, defer the exception. */
70
+
114
+ if (pending && masked) {
71
+/**
115
+ uint32_t syndrome;
72
+ * NPCMPSPIState - Device state for one Flash Interface Unit.
116
+
73
+ * @parent: System bus device.
117
+ if (arm_el_is_aa64(env, 1)) {
74
+ * @mmio: Memory region for register access.
118
+ /* Copy across IDS and ISS from VSESR. */
75
+ * @spi: The SPI bus mastered by this controller.
119
+ syndrome = env->cp15.vsesr_el2 & 0x1ffffff;
76
+ * @regs: Register contents.
120
+ } else {
77
+ * @irq: The interrupt request queue for this module.
121
+ ARMMMUFaultInfo fi = { .type = ARMFault_AsyncExternal };
78
+ *
122
+
79
+ * Each PSPI has a shared bank of registers, and controls up to four chip
123
+ if (extended_addresses_enabled(env)) {
80
+ * selects. Each chip select has a dedicated memory region which may be used to
124
+ syndrome = arm_fi_to_lfsc(&fi);
81
+ * read and write the flash connected to that chip select as if it were memory.
125
+ } else {
82
+ */
126
+ syndrome = arm_fi_to_sfsc(&fi);
83
+typedef struct NPCMPSPIState {
127
+ }
84
+ SysBusDevice parent;
128
+ /* Copy across AET and ExT from VSESR. */
85
+
129
+ syndrome |= env->cp15.vsesr_el2 & 0xd000;
86
+ MemoryRegion mmio;
87
+
88
+ SSIBus *spi;
89
+ uint16_t regs[NPCM_PSPI_NR_REGS];
90
+ qemu_irq irq;
91
+} NPCMPSPIState;
92
+
93
+#define TYPE_NPCM_PSPI "npcm-pspi"
94
+OBJECT_DECLARE_SIMPLE_TYPE(NPCMPSPIState, NPCM_PSPI)
95
+
96
+#endif /* NPCM_PSPI_H */
97
diff --git a/hw/ssi/npcm_pspi.c b/hw/ssi/npcm_pspi.c
98
new file mode 100644
99
index XXXXXXX..XXXXXXX
100
--- /dev/null
101
+++ b/hw/ssi/npcm_pspi.c
102
@@ -XXX,XX +XXX,XX @@
103
+/*
104
+ * Nuvoton NPCM Peripheral SPI Module (PSPI)
105
+ *
106
+ * Copyright 2023 Google LLC
107
+ *
108
+ * This program is free software; you can redistribute it and/or modify it
109
+ * under the terms of the GNU General Public License as published by the
110
+ * Free Software Foundation; either version 2 of the License, or
111
+ * (at your option) any later version.
112
+ *
113
+ * This program is distributed in the hope that it will be useful, but WITHOUT
114
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
115
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
116
+ * for more details.
117
+ */
118
+
119
+#include "qemu/osdep.h"
120
+
121
+#include "hw/irq.h"
122
+#include "hw/registerfields.h"
123
+#include "hw/ssi/npcm_pspi.h"
124
+#include "migration/vmstate.h"
125
+#include "qapi/error.h"
126
+#include "qemu/error-report.h"
127
+#include "qemu/log.h"
128
+#include "qemu/module.h"
129
+#include "qemu/units.h"
130
+
131
+#include "trace.h"
132
+
133
+REG16(PSPI_DATA, 0x0)
134
+REG16(PSPI_CTL1, 0x2)
135
+ FIELD(PSPI_CTL1, SPIEN, 0, 1)
136
+ FIELD(PSPI_CTL1, MOD, 2, 1)
137
+ FIELD(PSPI_CTL1, EIR, 5, 1)
138
+ FIELD(PSPI_CTL1, EIW, 6, 1)
139
+ FIELD(PSPI_CTL1, SCM, 7, 1)
140
+ FIELD(PSPI_CTL1, SCIDL, 8, 1)
141
+ FIELD(PSPI_CTL1, SCDV, 9, 7)
142
+REG16(PSPI_STAT, 0x4)
143
+ FIELD(PSPI_STAT, BSY, 0, 1)
144
+ FIELD(PSPI_STAT, RBF, 1, 1)
145
+
146
+static void npcm_pspi_update_irq(NPCMPSPIState *s)
147
+{
148
+ int level = 0;
149
+
150
+ /* Only fire IRQ when the module is enabled. */
151
+ if (FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, SPIEN)) {
152
+ /* Update interrupt as BSY is cleared. */
153
+ if ((!FIELD_EX16(s->regs[R_PSPI_STAT], PSPI_STAT, BSY)) &&
154
+ FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, EIW)) {
155
+ level = 1;
130
+ }
156
+ }
131
+
157
+
132
+ /* Set VDISR_EL2.A along with the syndrome. */
158
+ /* Update interrupt as RBF is set. */
133
+ env->cp15.vdisr_el2 = syndrome | (1u << 31);
159
+ if (FIELD_EX16(s->regs[R_PSPI_STAT], PSPI_STAT, RBF) &&
134
+
160
+ FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, EIR)) {
135
+ /* Clear pending virtual SError */
161
+ level = 1;
136
+ env->cp15.hcr_el2 &= ~HCR_VSE;
137
+ cpu_reset_interrupt(env_cpu(env), CPU_INTERRUPT_VSERR);
138
+ }
139
+}
140
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
141
index XXXXXXX..XXXXXXX 100644
142
--- a/target/arm/translate-a64.c
143
+++ b/target/arm/translate-a64.c
144
@@ -XXX,XX +XXX,XX @@ static void handle_hint(DisasContext *s, uint32_t insn,
145
gen_helper_autib(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
146
}
147
break;
148
+ case 0b10000: /* ESB */
149
+ /* Without RAS, we must implement this as NOP. */
150
+ if (dc_isar_feature(aa64_ras, s)) {
151
+ /*
152
+ * QEMU does not have a source of physical SErrors,
153
+ * so we are only concerned with virtual SErrors.
154
+ * The pseudocode in the ARM for this case is
155
+ * if PSTATE.EL IN {EL0, EL1} && EL2Enabled() then
156
+ * AArch64.vESBOperation();
157
+ * Most of the condition can be evaluated at translation time.
158
+ * Test for EL2 present, and defer test for SEL2 to runtime.
159
+ */
160
+ if (s->current_el <= 1 && arm_dc_feature(s, ARM_FEATURE_EL2)) {
161
+ gen_helper_vesb(cpu_env);
162
+ }
163
+ }
164
+ break;
165
case 0b11000: /* PACIAZ */
166
if (s->pauth_active) {
167
gen_helper_pacia(cpu_X[30], cpu_env, cpu_X[30],
168
diff --git a/target/arm/translate.c b/target/arm/translate.c
169
index XXXXXXX..XXXXXXX 100644
170
--- a/target/arm/translate.c
171
+++ b/target/arm/translate.c
172
@@ -XXX,XX +XXX,XX @@ static bool trans_WFI(DisasContext *s, arg_WFI *a)
173
return true;
174
}
175
176
+static bool trans_ESB(DisasContext *s, arg_ESB *a)
177
+{
178
+ /*
179
+ * For M-profile, minimal-RAS ESB can be a NOP.
180
+ * Without RAS, we must implement this as NOP.
181
+ */
182
+ if (!arm_dc_feature(s, ARM_FEATURE_M) && dc_isar_feature(aa32_ras, s)) {
183
+ /*
184
+ * QEMU does not have a source of physical SErrors,
185
+ * so we are only concerned with virtual SErrors.
186
+ * The pseudocode in the ARM for this case is
187
+ * if PSTATE.EL IN {EL0, EL1} && EL2Enabled() then
188
+ * AArch32.vESBOperation();
189
+ * Most of the condition can be evaluated at translation time.
190
+ * Test for EL2 present, and defer test for SEL2 to runtime.
191
+ */
192
+ if (s->current_el <= 1 && arm_dc_feature(s, ARM_FEATURE_EL2)) {
193
+ gen_helper_vesb(cpu_env);
194
+ }
162
+ }
195
+ }
163
+ }
196
+ return true;
164
+ qemu_set_irq(s->irq, level);
197
+}
165
+}
198
+
166
+
199
static bool trans_NOP(DisasContext *s, arg_NOP *a)
167
+static uint16_t npcm_pspi_read_data(NPCMPSPIState *s)
200
{
168
+{
201
return true;
169
+ uint16_t value = s->regs[R_PSPI_DATA];
170
+
171
+ /* Clear stat bits as the value are read out. */
172
+ s->regs[R_PSPI_STAT] = 0;
173
+
174
+ return value;
175
+}
176
+
177
+static void npcm_pspi_write_data(NPCMPSPIState *s, uint16_t data)
178
+{
179
+ uint16_t value = 0;
180
+
181
+ if (FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, MOD)) {
182
+ value = ssi_transfer(s->spi, extract16(data, 8, 8)) << 8;
183
+ }
184
+ value |= ssi_transfer(s->spi, extract16(data, 0, 8));
185
+ s->regs[R_PSPI_DATA] = value;
186
+
187
+ /* Mark data as available */
188
+ s->regs[R_PSPI_STAT] = R_PSPI_STAT_BSY_MASK | R_PSPI_STAT_RBF_MASK;
189
+}
190
+
191
+/* Control register read handler. */
192
+static uint64_t npcm_pspi_ctrl_read(void *opaque, hwaddr addr,
193
+ unsigned int size)
194
+{
195
+ NPCMPSPIState *s = opaque;
196
+ uint16_t value;
197
+
198
+ switch (addr) {
199
+ case A_PSPI_DATA:
200
+ value = npcm_pspi_read_data(s);
201
+ break;
202
+
203
+ case A_PSPI_CTL1:
204
+ value = s->regs[R_PSPI_CTL1];
205
+ break;
206
+
207
+ case A_PSPI_STAT:
208
+ value = s->regs[R_PSPI_STAT];
209
+ break;
210
+
211
+ default:
212
+ qemu_log_mask(LOG_GUEST_ERROR,
213
+ "%s: write to invalid offset 0x%" PRIx64 "\n",
214
+ DEVICE(s)->canonical_path, addr);
215
+ return 0;
216
+ }
217
+ trace_npcm_pspi_ctrl_read(DEVICE(s)->canonical_path, addr, value);
218
+ npcm_pspi_update_irq(s);
219
+
220
+ return value;
221
+}
222
+
223
+/* Control register write handler. */
224
+static void npcm_pspi_ctrl_write(void *opaque, hwaddr addr, uint64_t v,
225
+ unsigned int size)
226
+{
227
+ NPCMPSPIState *s = opaque;
228
+ uint16_t value = v;
229
+
230
+ trace_npcm_pspi_ctrl_write(DEVICE(s)->canonical_path, addr, value);
231
+
232
+ switch (addr) {
233
+ case A_PSPI_DATA:
234
+ npcm_pspi_write_data(s, value);
235
+ break;
236
+
237
+ case A_PSPI_CTL1:
238
+ s->regs[R_PSPI_CTL1] = value;
239
+ break;
240
+
241
+ case A_PSPI_STAT:
242
+ qemu_log_mask(LOG_GUEST_ERROR,
243
+ "%s: write to read-only register PSPI_STAT: 0x%08"
244
+ PRIx64 "\n", DEVICE(s)->canonical_path, v);
245
+ break;
246
+
247
+ default:
248
+ qemu_log_mask(LOG_GUEST_ERROR,
249
+ "%s: write to invalid offset 0x%" PRIx64 "\n",
250
+ DEVICE(s)->canonical_path, addr);
251
+ return;
252
+ }
253
+ npcm_pspi_update_irq(s);
254
+}
255
+
256
+static const MemoryRegionOps npcm_pspi_ctrl_ops = {
257
+ .read = npcm_pspi_ctrl_read,
258
+ .write = npcm_pspi_ctrl_write,
259
+ .endianness = DEVICE_LITTLE_ENDIAN,
260
+ .valid = {
261
+ .min_access_size = 1,
262
+ .max_access_size = 2,
263
+ .unaligned = false,
264
+ },
265
+ .impl = {
266
+ .min_access_size = 2,
267
+ .max_access_size = 2,
268
+ .unaligned = false,
269
+ },
270
+};
271
+
272
+static void npcm_pspi_enter_reset(Object *obj, ResetType type)
273
+{
274
+ NPCMPSPIState *s = NPCM_PSPI(obj);
275
+
276
+ trace_npcm_pspi_enter_reset(DEVICE(obj)->canonical_path, type);
277
+ memset(s->regs, 0, sizeof(s->regs));
278
+}
279
+
280
+static void npcm_pspi_realize(DeviceState *dev, Error **errp)
281
+{
282
+ NPCMPSPIState *s = NPCM_PSPI(dev);
283
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
284
+ Object *obj = OBJECT(dev);
285
+
286
+ s->spi = ssi_create_bus(dev, "pspi");
287
+ memory_region_init_io(&s->mmio, obj, &npcm_pspi_ctrl_ops, s,
288
+ "mmio", 4 * KiB);
289
+ sysbus_init_mmio(sbd, &s->mmio);
290
+ sysbus_init_irq(sbd, &s->irq);
291
+}
292
+
293
+static const VMStateDescription vmstate_npcm_pspi = {
294
+ .name = "npcm-pspi",
295
+ .version_id = 0,
296
+ .minimum_version_id = 0,
297
+ .fields = (VMStateField[]) {
298
+ VMSTATE_UINT16_ARRAY(regs, NPCMPSPIState, NPCM_PSPI_NR_REGS),
299
+ VMSTATE_END_OF_LIST(),
300
+ },
301
+};
302
+
303
+
304
+static void npcm_pspi_class_init(ObjectClass *klass, void *data)
305
+{
306
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
307
+ DeviceClass *dc = DEVICE_CLASS(klass);
308
+
309
+ dc->desc = "NPCM Peripheral SPI Module";
310
+ dc->realize = npcm_pspi_realize;
311
+ dc->vmsd = &vmstate_npcm_pspi;
312
+ rc->phases.enter = npcm_pspi_enter_reset;
313
+}
314
+
315
+static const TypeInfo npcm_pspi_types[] = {
316
+ {
317
+ .name = TYPE_NPCM_PSPI,
318
+ .parent = TYPE_SYS_BUS_DEVICE,
319
+ .instance_size = sizeof(NPCMPSPIState),
320
+ .class_init = npcm_pspi_class_init,
321
+ },
322
+};
323
+DEFINE_TYPES(npcm_pspi_types);
324
diff --git a/hw/ssi/meson.build b/hw/ssi/meson.build
325
index XXXXXXX..XXXXXXX 100644
326
--- a/hw/ssi/meson.build
327
+++ b/hw/ssi/meson.build
328
@@ -XXX,XX +XXX,XX @@
329
softmmu_ss.add(when: 'CONFIG_ASPEED_SOC', if_true: files('aspeed_smc.c'))
330
softmmu_ss.add(when: 'CONFIG_MSF2', if_true: files('mss-spi.c'))
331
-softmmu_ss.add(when: 'CONFIG_NPCM7XX', if_true: files('npcm7xx_fiu.c'))
332
+softmmu_ss.add(when: 'CONFIG_NPCM7XX', if_true: files('npcm7xx_fiu.c', 'npcm_pspi.c'))
333
softmmu_ss.add(when: 'CONFIG_PL022', if_true: files('pl022.c'))
334
softmmu_ss.add(when: 'CONFIG_SIFIVE_SPI', if_true: files('sifive_spi.c'))
335
softmmu_ss.add(when: 'CONFIG_SSI', if_true: files('ssi.c'))
336
diff --git a/hw/ssi/trace-events b/hw/ssi/trace-events
337
index XXXXXXX..XXXXXXX 100644
338
--- a/hw/ssi/trace-events
339
+++ b/hw/ssi/trace-events
340
@@ -XXX,XX +XXX,XX @@ npcm7xx_fiu_ctrl_write(const char *id, uint64_t addr, uint32_t data) "%s offset:
341
npcm7xx_fiu_flash_read(const char *id, int cs, uint64_t addr, unsigned int size, uint64_t value) "%s[%d] offset: 0x%08" PRIx64 " size: %u value: 0x%" PRIx64
342
npcm7xx_fiu_flash_write(const char *id, unsigned cs, uint64_t addr, unsigned int size, uint64_t value) "%s[%d] offset: 0x%08" PRIx64 " size: %u value: 0x%" PRIx64
343
344
+# npcm_pspi.c
345
+npcm_pspi_enter_reset(const char *id, int reset_type) "%s reset type: %d"
346
+npcm_pspi_ctrl_read(const char *id, uint64_t addr, uint16_t data) "%s offset: 0x%03" PRIx64 " value: 0x%04" PRIx16
347
+npcm_pspi_ctrl_write(const char *id, uint64_t addr, uint16_t data) "%s offset: 0x%03" PRIx64 " value: 0x%04" PRIx16
348
+
349
# ibex_spi_host.c
350
351
ibex_spi_host_reset(const char *msg) "%s"
202
--
352
--
203
2.25.1
353
2.34.1
1
From: Gavin Shan <gshan@redhat.com>
1
From: Hao Wu <wuhaotsh@google.com>
2
2
3
When the PPTT table is built, the CPU topology is re-calculated, but
3
Signed-off-by: Hao Wu <wuhaotsh@google.com>
4
it's unnecessary because the CPU topology has been populated in
4
Reviewed-by: Titus Rwantare <titusr@google.com>
5
virt_possible_cpu_arch_ids() on arm/virt machine.
5
Reviewed-by: Philippe Mathieu-Daude <philmd@linaro.org>
6
6
Message-id: 20230208235433.3989937-4-wuhaotsh@google.com
7
This reworks build_pptt() to avoid that by reusing the existing IDs in
8
ms->possible_cpus. Currently, the only user of build_pptt() is
9
arm/virt machine.
10
11
Signed-off-by: Gavin Shan <gshan@redhat.com>
12
Tested-by: Yanan Wang <wangyanan55@huawei.com>
13
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
14
Acked-by: Igor Mammedov <imammedo@redhat.com>
15
Acked-by: Michael S. Tsirkin <mst@redhat.com>
16
Message-id: 20220503140304.855514-7-gshan@redhat.com
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
8
---
19
hw/acpi/aml-build.c | 111 +++++++++++++++++++-------------------------
9
docs/system/arm/nuvoton.rst | 2 +-
20
1 file changed, 48 insertions(+), 63 deletions(-)
10
include/hw/arm/npcm7xx.h | 2 ++
11
hw/arm/npcm7xx.c | 25 +++++++++++++++++++++++--
12
3 files changed, 26 insertions(+), 3 deletions(-)
21
13
22
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
14
diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
23
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
24
--- a/hw/acpi/aml-build.c
16
--- a/docs/system/arm/nuvoton.rst
25
+++ b/hw/acpi/aml-build.c
17
+++ b/docs/system/arm/nuvoton.rst
26
@@ -XXX,XX +XXX,XX @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
18
@@ -XXX,XX +XXX,XX @@ Supported devices
27
const char *oem_id, const char *oem_table_id)
19
* SMBus controller (SMBF)
28
{
20
* Ethernet controller (EMC)
29
MachineClass *mc = MACHINE_GET_CLASS(ms);
21
* Tachometer
30
- GQueue *list = g_queue_new();
22
+ * Peripheral SPI controller (PSPI)
31
- guint pptt_start = table_data->len;
23
32
- guint parent_offset;
24
Missing devices
33
- guint length, i;
25
---------------
34
- int uid = 0;
26
@@ -XXX,XX +XXX,XX @@ Missing devices
35
- int socket;
27
36
+ CPUArchIdList *cpus = ms->possible_cpus;
28
* Ethernet controller (GMAC)
37
+ int64_t socket_id = -1, cluster_id = -1, core_id = -1;
29
* USB device (USBD)
38
+ uint32_t socket_offset = 0, cluster_offset = 0, core_offset = 0;
30
- * Peripheral SPI controller (PSPI)
39
+ uint32_t pptt_start = table_data->len;
31
* SD/MMC host
40
+ int n;
32
* PECI interface
41
AcpiTable table = { .sig = "PPTT", .rev = 2,
33
* PCI and PCIe root complex and bridges
42
.oem_id = oem_id, .oem_table_id = oem_table_id };
34
diff --git a/include/hw/arm/npcm7xx.h b/include/hw/arm/npcm7xx.h
43
35
index XXXXXXX..XXXXXXX 100644
44
acpi_table_begin(&table, table_data);
36
--- a/include/hw/arm/npcm7xx.h
45
37
+++ b/include/hw/arm/npcm7xx.h
46
- for (socket = 0; socket < ms->smp.sockets; socket++) {
38
@@ -XXX,XX +XXX,XX @@
47
- g_queue_push_tail(list,
39
#include "hw/nvram/npcm7xx_otp.h"
48
- GUINT_TO_POINTER(table_data->len - pptt_start));
40
#include "hw/timer/npcm7xx_timer.h"
49
- build_processor_hierarchy_node(
41
#include "hw/ssi/npcm7xx_fiu.h"
50
- table_data,
42
+#include "hw/ssi/npcm_pspi.h"
51
- /*
43
#include "hw/usb/hcd-ehci.h"
52
- * Physical package - represents the boundary
44
#include "hw/usb/hcd-ohci.h"
53
- * of a physical package
45
#include "target/arm/cpu.h"
54
- */
46
@@ -XXX,XX +XXX,XX @@ struct NPCM7xxState {
55
- (1 << 0),
47
NPCM7xxFIUState fiu[2];
56
- 0, socket, NULL, 0);
48
NPCM7xxEMCState emc[2];
57
- }
49
NPCM7xxSDHCIState mmc;
58
-
50
+ NPCMPSPIState pspi[2];
59
- if (mc->smp_props.clusters_supported) {
51
};
60
- length = g_queue_get_length(list);
52
61
- for (i = 0; i < length; i++) {
53
#define TYPE_NPCM7XX "npcm7xx"
62
- int cluster;
54
diff --git a/hw/arm/npcm7xx.c b/hw/arm/npcm7xx.c
63
-
55
index XXXXXXX..XXXXXXX 100644
64
- parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
56
--- a/hw/arm/npcm7xx.c
65
- for (cluster = 0; cluster < ms->smp.clusters; cluster++) {
57
+++ b/hw/arm/npcm7xx.c
66
- g_queue_push_tail(list,
58
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxInterrupt {
67
- GUINT_TO_POINTER(table_data->len - pptt_start));
59
NPCM7XX_EMC1RX_IRQ = 15,
68
- build_processor_hierarchy_node(
60
NPCM7XX_EMC1TX_IRQ,
69
- table_data,
61
NPCM7XX_MMC_IRQ = 26,
70
- (0 << 0), /* not a physical package */
62
+ NPCM7XX_PSPI2_IRQ = 28,
71
- parent_offset, cluster, NULL, 0);
63
+ NPCM7XX_PSPI1_IRQ = 31,
72
- }
64
NPCM7XX_TIMER0_IRQ = 32, /* Timer Module 0 */
73
+ /*
65
NPCM7XX_TIMER1_IRQ,
74
+ * This works with the assumption that cpus[n].props.*_id has been
66
NPCM7XX_TIMER2_IRQ,
75
+ * sorted from top to down levels in mc->possible_cpu_arch_ids().
67
@@ -XXX,XX +XXX,XX @@ static const hwaddr npcm7xx_emc_addr[] = {
76
+ * Otherwise, the unexpected and duplicated containers will be
68
0xf0826000,
77
+ * created.
69
};
78
+ */
70
79
+ for (n = 0; n < cpus->len; n++) {
71
+/* Register base address for each PSPI Module */
80
+ if (cpus->cpus[n].props.socket_id != socket_id) {
72
+static const hwaddr npcm7xx_pspi_addr[] = {
81
+ assert(cpus->cpus[n].props.socket_id > socket_id);
73
+ 0xf0200000,
82
+ socket_id = cpus->cpus[n].props.socket_id;
74
+ 0xf0201000,
83
+ cluster_id = -1;
75
+};
84
+ core_id = -1;
76
+
85
+ socket_offset = table_data->len - pptt_start;
77
static const struct {
86
+ build_processor_hierarchy_node(table_data,
78
hwaddr regs_addr;
87
+ (1 << 0), /* Physical package */
79
uint32_t unconnected_pins;
88
+ 0, socket_id, NULL, 0);
80
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_init(Object *obj)
89
}
81
object_initialize_child(obj, "emc[*]", &s->emc[i], TYPE_NPCM7XX_EMC);
90
- }
91
92
- length = g_queue_get_length(list);
93
- for (i = 0; i < length; i++) {
94
- int core;
95
-
96
- parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
97
- for (core = 0; core < ms->smp.cores; core++) {
98
- if (ms->smp.threads > 1) {
99
- g_queue_push_tail(list,
100
- GUINT_TO_POINTER(table_data->len - pptt_start));
101
- build_processor_hierarchy_node(
102
- table_data,
103
- (0 << 0), /* not a physical package */
104
- parent_offset, core, NULL, 0);
105
- } else {
106
- build_processor_hierarchy_node(
107
- table_data,
108
- (1 << 1) | /* ACPI Processor ID valid */
109
- (1 << 3), /* Node is a Leaf */
110
- parent_offset, uid++, NULL, 0);
111
+ if (mc->smp_props.clusters_supported) {
112
+ if (cpus->cpus[n].props.cluster_id != cluster_id) {
113
+ assert(cpus->cpus[n].props.cluster_id > cluster_id);
114
+ cluster_id = cpus->cpus[n].props.cluster_id;
115
+ core_id = -1;
116
+ cluster_offset = table_data->len - pptt_start;
117
+ build_processor_hierarchy_node(table_data,
118
+ (0 << 0), /* Not a physical package */
119
+ socket_offset, cluster_id, NULL, 0);
120
}
121
+ } else {
122
+ cluster_offset = socket_offset;
123
}
124
- }
125
126
- length = g_queue_get_length(list);
127
- for (i = 0; i < length; i++) {
128
- int thread;
129
+ if (ms->smp.threads == 1) {
130
+ build_processor_hierarchy_node(table_data,
131
+ (1 << 1) | /* ACPI Processor ID valid */
132
+ (1 << 3), /* Node is a Leaf */
133
+ cluster_offset, n, NULL, 0);
134
+ } else {
135
+ if (cpus->cpus[n].props.core_id != core_id) {
136
+ assert(cpus->cpus[n].props.core_id > core_id);
137
+ core_id = cpus->cpus[n].props.core_id;
138
+ core_offset = table_data->len - pptt_start;
139
+ build_processor_hierarchy_node(table_data,
140
+ (0 << 0), /* Not a physical package */
141
+ cluster_offset, core_id, NULL, 0);
142
+ }
143
144
- parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
145
- for (thread = 0; thread < ms->smp.threads; thread++) {
146
- build_processor_hierarchy_node(
147
- table_data,
148
+ build_processor_hierarchy_node(table_data,
149
(1 << 1) | /* ACPI Processor ID valid */
150
(1 << 2) | /* Processor is a Thread */
151
(1 << 3), /* Node is a Leaf */
152
- parent_offset, uid++, NULL, 0);
153
+ core_offset, n, NULL, 0);
154
}
155
}
82
}
156
83
157
- g_queue_free(list);
84
+ for (i = 0; i < ARRAY_SIZE(s->pspi); i++) {
158
acpi_table_end(linker, &table);
85
+ object_initialize_child(obj, "pspi[*]", &s->pspi[i], TYPE_NPCM_PSPI);
86
+ }
87
+
88
object_initialize_child(obj, "mmc", &s->mmc, TYPE_NPCM7XX_SDHCI);
159
}
89
}
160
90
91
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
92
sysbus_connect_irq(SYS_BUS_DEVICE(&s->mmc), 0,
93
npcm7xx_irq(s, NPCM7XX_MMC_IRQ));
94
95
+ /* PSPI */
96
+ QEMU_BUILD_BUG_ON(ARRAY_SIZE(npcm7xx_pspi_addr) != ARRAY_SIZE(s->pspi));
97
+ for (i = 0; i < ARRAY_SIZE(s->pspi); i++) {
98
+ SysBusDevice *sbd = SYS_BUS_DEVICE(&s->pspi[i]);
99
+ int irq = (i == 0) ? NPCM7XX_PSPI1_IRQ : NPCM7XX_PSPI2_IRQ;
100
+
101
+ sysbus_realize(sbd, &error_abort);
102
+ sysbus_mmio_map(sbd, 0, npcm7xx_pspi_addr[i]);
103
+ sysbus_connect_irq(sbd, 0, npcm7xx_irq(s, irq));
104
+ }
105
+
106
create_unimplemented_device("npcm7xx.shm", 0xc0001000, 4 * KiB);
107
create_unimplemented_device("npcm7xx.vdmx", 0xe0800000, 4 * KiB);
108
create_unimplemented_device("npcm7xx.pcierc", 0xe1000000, 64 * KiB);
109
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
110
create_unimplemented_device("npcm7xx.peci", 0xf0100000, 4 * KiB);
111
create_unimplemented_device("npcm7xx.siox[1]", 0xf0101000, 4 * KiB);
112
create_unimplemented_device("npcm7xx.siox[2]", 0xf0102000, 4 * KiB);
113
- create_unimplemented_device("npcm7xx.pspi1", 0xf0200000, 4 * KiB);
114
- create_unimplemented_device("npcm7xx.pspi2", 0xf0201000, 4 * KiB);
115
create_unimplemented_device("npcm7xx.ahbpci", 0xf0400000, 1 * MiB);
116
create_unimplemented_device("npcm7xx.mcphy", 0xf05f0000, 64 * KiB);
117
create_unimplemented_device("npcm7xx.gmac1", 0xf0802000, 8 * KiB);
161
--
118
--
162
2.25.1
119
2.34.1
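The old-side PPTT hunk above relies on one invariant: mc->possible_cpu_arch_ids() hands back CPUs already sorted by socket, then cluster, then core, so a container node is only emitted when the id at that level changes. A minimal standalone sketch of that idea, with invented example ids and plain printf in place of the ACPI table builder:

    #include <stdio.h>

    struct cpu_props { int socket_id, core_id; };

    int main(void)
    {
        /* invented topology: 2 sockets x 2 cores, already sorted */
        struct cpu_props cpus[] = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
        int socket_id = -1, core_id = -1;

        for (int n = 0; n < 4; n++) {
            if (cpus[n].socket_id != socket_id) {
                socket_id = cpus[n].socket_id;
                core_id = -1;               /* new socket: child ids start over */
                printf("package node for socket %d\n", socket_id);
            }
            if (cpus[n].core_id != core_id) {
                core_id = cpus[n].core_id;
                printf("  leaf node for core %d\n", core_id);
            }
        }
        return 0;
    }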
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
2
2
3
This extension concerns branch speculation, which TCG does
3
Addresses targeting the second translation table (TTB1) in the SMMU have
4
not implement. Thus we can trivially enable this feature.
4
all upper bits set. Ensure the IOMMU region covers all 64 bits.
5
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
8
Message-id: 20220506180242.216785-20-richard.henderson@linaro.org
8
Reviewed-by: Eric Auger <eric.auger@redhat.com>
9
Message-id: 20230214171921.1917916-2-jean-philippe@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
---
11
docs/system/arm/emulation.rst | 1 +
12
include/hw/arm/smmu-common.h | 2 --
12
target/arm/cpu64.c | 1 +
13
hw/arm/smmu-common.c | 2 +-
13
target/arm/cpu_tcg.c | 1 +
14
2 files changed, 1 insertion(+), 3 deletions(-)
14
3 files changed, 3 insertions(+)
15
15
16
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
16
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
17
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
18
--- a/docs/system/arm/emulation.rst
18
--- a/include/hw/arm/smmu-common.h
19
+++ b/docs/system/arm/emulation.rst
19
+++ b/include/hw/arm/smmu-common.h
20
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
20
@@ -XXX,XX +XXX,XX @@
21
- FEAT_BBM at level 2 (Translation table break-before-make levels)
21
#define SMMU_PCI_DEVFN_MAX 256
22
- FEAT_BF16 (AArch64 BFloat16 instructions)
22
#define SMMU_PCI_DEVFN(sid) (sid & 0xFF)
23
- FEAT_BTI (Branch Target Identification)
23
24
+- FEAT_CSV2 (Cache speculation variant 2)
24
-#define SMMU_MAX_VA_BITS 48
25
- FEAT_DIT (Data Independent Timing instructions)
25
-
26
- FEAT_DPB (DC CVAP instruction)
26
/*
27
- FEAT_Debugv8p2 (Debug changes for v8.2)
27
* Page table walk error types
28
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
28
*/
29
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
29
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/cpu64.c
31
--- a/hw/arm/smmu-common.c
31
+++ b/target/arm/cpu64.c
32
+++ b/hw/arm/smmu-common.c
32
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
33
@@ -XXX,XX +XXX,XX @@ static AddressSpace *smmu_find_add_as(PCIBus *bus, void *opaque, int devfn)
33
t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
34
34
t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); /* FEAT_SEL2 */
35
memory_region_init_iommu(&sdev->iommu, sizeof(sdev->iommu),
35
t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); /* FEAT_DIT */
36
s->mrtypename,
36
+ t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 1); /* FEAT_CSV2 */
37
- OBJECT(s), name, 1ULL << SMMU_MAX_VA_BITS);
37
cpu->isar.id_aa64pfr0 = t;
38
+ OBJECT(s), name, UINT64_MAX);
38
39
address_space_init(&sdev->as,
39
t = cpu->isar.id_aa64pfr1;
40
MEMORY_REGION(&sdev->iommu), name);
40
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
41
trace_smmu_add_mr(name);
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/cpu_tcg.c
43
+++ b/target/arm/cpu_tcg.c
44
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
45
cpu->isar.id_mmfr4 = t;
46
47
t = cpu->isar.id_pfr0;
48
+ t = FIELD_DP32(t, ID_PFR0, CSV2, 2); /* FEAT_CVS2 */
49
t = FIELD_DP32(t, ID_PFR0, DIT, 1); /* FEAT_DIT */
50
t = FIELD_DP32(t, ID_PFR0, RAS, 1); /* FEAT_RAS */
51
cpu->isar.id_pfr0 = t;
52
--
42
--
53
2.25.1
43
2.34.1
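A quick way to see why a region sized 1ULL << SMMU_MAX_VA_BITS cannot cover TTB1 traffic: upper-half virtual addresses sit near the top of the 64-bit space, well beyond 2^48. A tiny sketch, with an illustrative VA value:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t old_size = 1ULL << 48;            /* former IOMMU region size */
        uint64_t ttb1_va  = 0xffff800000001000ULL; /* typical upper-half VA */

        /* addresses are only translated if they fall inside the region */
        printf("TTB1 VA inside old region? %s\n",
               ttb1_va < old_size ? "yes" : "no");
        printf("covering it needs at least 0x%" PRIx64 " bytes\n", ttb1_va + 1);
        return 0;
    }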
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
2
2
3
This feature is AArch64 only, and applies to physical SErrors,
3
Addresses targeting the second translation table (TTB1) in the SMMU have
4
which QEMU does not implement, thus the feature is a nop.
4
all upper bits set (except for the top byte when TBI is enabled). Fix
5
the TTB1 check.
5
6
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reported-by: Ola Hugosson <ola.hugosson@arm.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Eric Auger <eric.auger@redhat.com>
8
Message-id: 20220506180242.216785-19-richard.henderson@linaro.org
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
11
Message-id: 20230214171921.1917916-3-jean-philippe@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
13
---
11
docs/system/arm/emulation.rst | 1 +
14
hw/arm/smmu-common.c | 2 +-
12
target/arm/cpu64.c | 1 +
15
1 file changed, 1 insertion(+), 1 deletion(-)
13
2 files changed, 2 insertions(+)
14
16
15
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
17
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
16
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
17
--- a/docs/system/arm/emulation.rst
19
--- a/hw/arm/smmu-common.c
18
+++ b/docs/system/arm/emulation.rst
20
+++ b/hw/arm/smmu-common.c
19
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
21
@@ -XXX,XX +XXX,XX @@ SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova)
20
- FEAT_FlagM2 (Enhancements to flag manipulation instructions)
22
/* there is a ttbr0 region and we are in it (high bits all zero) */
21
- FEAT_HPDS (Hierarchical permission disables)
23
return &cfg->tt[0];
22
- FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
24
} else if (cfg->tt[1].tsz &&
23
+- FEAT_IESB (Implicit error synchronization event)
25
- !extract64(iova, 64 - cfg->tt[1].tsz, cfg->tt[1].tsz - tbi_byte)) {
24
- FEAT_JSCVT (JavaScript conversion instructions)
26
+ sextract64(iova, 64 - cfg->tt[1].tsz, cfg->tt[1].tsz - tbi_byte) == -1) {
25
- FEAT_LOR (Limited ordering regions)
27
/* there is a ttbr1 region and we are in it (high bits all one) */
26
- FEAT_LPA (Large Physical Address space)
28
return &cfg->tt[1];
27
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
29
} else if (!cfg->tt[0].tsz) {
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/cpu64.c
30
+++ b/target/arm/cpu64.c
31
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
32
t = cpu->isar.id_aa64mmfr2;
33
t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* FEAT_TTCNP */
34
t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1); /* FEAT_UAO */
35
+ t = FIELD_DP64(t, ID_AA64MMFR2, IESB, 1); /* FEAT_IESB */
36
t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
37
t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* FEAT_TTST */
38
t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
39
--
30
--
40
2.25.1
31
2.34.1
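The sextract64(...) == -1 test used above is the usual idiom for "every checked upper bit is one": the field is extracted with sign extension, so it equals -1 exactly when all of its bits are set. A self-contained sketch, with a local helper that mirrors qemu/bitops.h and an illustrative tsz value:

    #include <stdint.h>
    #include <stdio.h>

    /* local stand-in for qemu/bitops.h sextract64() */
    static int64_t sextract64(uint64_t value, int start, int length)
    {
        return ((int64_t)(value << (64 - length - start))) >> (64 - length);
    }

    int main(void)
    {
        int tsz = 16, tbi_byte = 0;                 /* e.g. 48-bit VA, TBI off */
        uint64_t ttb1_va = 0xffff000012345678ULL;   /* high bits all one  */
        uint64_t ttb0_va = 0x0000000012345678ULL;   /* high bits all zero */

        /* -1 only when every extracted bit is 1, i.e. a TTB1 address */
        printf("%d\n", sextract64(ttb1_va, 64 - tsz, tsz - tbi_byte) == -1);
        printf("%d\n", sextract64(ttb0_va, 64 - tsz, tsz - tbi_byte) == -1);
        return 0;
    }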
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Claudio Fontana <cfontana@suse.de>
2
2
3
Enable writes to the TERR and TEA bits when RAS is enabled.
3
make it clearer from the name that this is a tcg-only function.
4
These bits are otherwise RES0.
5
4
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Claudio Fontana <cfontana@suse.de>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Fabiano Rosas <farosas@suse.de>
8
Message-id: 20220506180242.216785-15-richard.henderson@linaro.org
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
---
11
target/arm/helper.c | 9 +++++++++
12
target/arm/helper.c | 4 ++--
12
1 file changed, 9 insertions(+)
13
1 file changed, 2 insertions(+), 2 deletions(-)
13
14
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
17
--- a/target/arm/helper.c
17
+++ b/target/arm/helper.c
18
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
19
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
19
}
20
* trapped to the hypervisor in KVM.
20
valid_mask &= ~SCR_NET;
21
*/
21
22
#ifdef CONFIG_TCG
22
+ if (cpu_isar_feature(aa64_ras, cpu)) {
23
-static void handle_semihosting(CPUState *cs)
23
+ valid_mask |= SCR_TERR;
24
+static void tcg_handle_semihosting(CPUState *cs)
24
+ }
25
{
25
if (cpu_isar_feature(aa64_lor, cpu)) {
26
ARMCPU *cpu = ARM_CPU(cs);
26
valid_mask |= SCR_TLOR;
27
CPUARMState *env = &cpu->env;
27
}
28
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
28
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
29
*/
29
}
30
#ifdef CONFIG_TCG
30
} else {
31
if (cs->exception_index == EXCP_SEMIHOST) {
31
valid_mask &= ~(SCR_RW | SCR_ST);
32
- handle_semihosting(cs);
32
+ if (cpu_isar_feature(aa32_ras, cpu)) {
33
+ tcg_handle_semihosting(cs);
33
+ valid_mask |= SCR_TERR;
34
return;
34
+ }
35
}
35
}
36
36
#endif
37
if (!arm_feature(env, ARM_FEATURE_EL2)) {
38
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
39
if (cpu_isar_feature(aa64_vh, cpu)) {
40
valid_mask |= HCR_E2H;
41
}
42
+ if (cpu_isar_feature(aa64_ras, cpu)) {
43
+ valid_mask |= HCR_TERR | HCR_TEA;
44
+ }
45
if (cpu_isar_feature(aa64_lor, cpu)) {
46
valid_mask |= HCR_TLOR;
47
}
48
--
37
--
49
2.25.1
38
2.34.1
39
40
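The valid_mask handling in the scr_write() hunk above is what keeps TERR reading as RES0 until FEAT_RAS is present: the written value is masked before it reaches the register. A rough standalone sketch, with the bit position and mask values chosen for illustration:

    #include <stdint.h>
    #include <stdio.h>

    #define SCR_TERR (1ULL << 15)   /* illustrative; see the ARM ARM for SCR_EL3 */

    static uint64_t scr_write(uint64_t value, uint64_t valid_mask)
    {
        return value & valid_mask;  /* bits outside valid_mask stay RES0 */
    }

    int main(void)
    {
        uint64_t base_mask     = 0x3fff;               /* some always-valid bits */
        uint64_t mask_no_ras   = base_mask;
        uint64_t mask_with_ras = base_mask | SCR_TERR; /* valid_mask |= SCR_TERR */

        printf("0x%llx\n", (unsigned long long)scr_write(SCR_TERR, mask_no_ras));
        printf("0x%llx\n", (unsigned long long)scr_write(SCR_TERR, mask_with_ras));
        return 0;
    }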
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Claudio Fontana <cfontana@suse.de>
2
2
3
This register is present for either VHE or Debugv8p2.
3
for "all" builds (tcg + kvm), we want to avoid doing
4
the psci check if tcg is built-in, but not enabled.
4
5
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Claudio Fontana <cfontana@suse.de>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20220506180242.216785-5-richard.henderson@linaro.org
8
Signed-off-by: Fabiano Rosas <farosas@suse.de>
9
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
11
---
10
target/arm/helper.c | 15 +++++++++++----
12
target/arm/helper.c | 3 ++-
11
1 file changed, 11 insertions(+), 4 deletions(-)
13
1 file changed, 2 insertions(+), 1 deletion(-)
12
14
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
17
--- a/target/arm/helper.c
16
+++ b/target/arm/helper.c
18
+++ b/target/arm/helper.c
17
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo jazelle_regs[] = {
19
@@ -XXX,XX +XXX,XX @@
18
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
20
#include "hw/irq.h"
19
};
21
#include "sysemu/cpu-timers.h"
20
22
#include "sysemu/kvm.h"
21
+static const ARMCPRegInfo contextidr_el2 = {
23
+#include "sysemu/tcg.h"
22
+ .name = "CONTEXTIDR_EL2", .state = ARM_CP_STATE_AA64,
24
#include "qapi/qapi-commands-machine-target.h"
23
+ .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 1,
25
#include "qapi/error.h"
24
+ .access = PL2_RW,
26
#include "qemu/guest-random.h"
25
+ .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[2])
27
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
26
+};
28
env->exception.syndrome);
27
+
28
static const ARMCPRegInfo vhe_reginfo[] = {
29
- { .name = "CONTEXTIDR_EL2", .state = ARM_CP_STATE_AA64,
30
- .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 1,
31
- .access = PL2_RW,
32
- .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[2]) },
33
{ .name = "TTBR1_EL2", .state = ARM_CP_STATE_AA64,
34
.opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 1,
35
.access = PL2_RW, .writefn = vmsa_tcr_ttbr_el2_write,
36
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
37
define_one_arm_cp_reg(cpu, &ssbs_reginfo);
38
}
29
}
39
30
40
+ if (cpu_isar_feature(aa64_vh, cpu) ||
31
- if (arm_is_psci_call(cpu, cs->exception_index)) {
41
+ cpu_isar_feature(aa64_debugv8p2, cpu)) {
32
+ if (tcg_enabled() && arm_is_psci_call(cpu, cs->exception_index)) {
42
+ define_one_arm_cp_reg(cpu, &contextidr_el2);
33
arm_handle_psci_call(cpu);
43
+ }
34
qemu_log_mask(CPU_LOG_INT, "...handled as PSCI call\n");
44
if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
35
return;
45
define_arm_cp_regs(cpu, vhe_reginfo);
46
}
47
--
36
--
48
2.25.1
37
2.34.1
38
39
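The guard added above is a run-time decision, not a build-time one: a binary with both TCG and KVM compiled in must skip the PSCI emulation path whenever TCG is not the active accelerator. A toy sketch of that dispatch, with a plain flag standing in for QEMU's tcg_enabled():

    #include <stdbool.h>
    #include <stdio.h>

    static bool tcg_active;                      /* stand-in for tcg_enabled() */

    static bool is_psci_call(void) { return true; }   /* pretend it always is */

    static void do_interrupt(void)
    {
        if (tcg_active && is_psci_call()) {
            puts("...handled as PSCI call");     /* only the TCG path emulates PSCI */
            return;
        }
        puts("taking the exception normally");   /* TCG built in but not selected */
    }

    int main(void)
    {
        tcg_active = false;   /* e.g. started with -accel kvm */
        do_interrupt();
        tcg_active = true;    /* e.g. started with -accel tcg */
        do_interrupt();
        return 0;
    }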
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Claudio Fontana <cfontana@suse.de>
2
2
3
Drop zcr_no_el2_reginfo and merge the 3 registers into one array,
3
Signed-off-by: Claudio Fontana <cfontana@suse.de>
4
now that ZCR_EL2 can be squashed to RES0 and ZCR_EL3 dropped
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
while registering.
5
Signed-off-by: Fabiano Rosas <farosas@suse.de>
6
6
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220506180242.216785-4-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
8
---
12
target/arm/helper.c | 55 ++++++++++++++-------------------------------
9
target/arm/helper.c | 12 +++++++-----
13
1 file changed, 17 insertions(+), 38 deletions(-)
10
1 file changed, 7 insertions(+), 5 deletions(-)
14
11
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
14
--- a/target/arm/helper.c
18
+++ b/target/arm/helper.c
15
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
16
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
20
}
17
unsigned int cur_el = arm_current_el(env);
21
}
18
int rt;
22
19
23
-static const ARMCPRegInfo zcr_el1_reginfo = {
20
- /*
24
- .name = "ZCR_EL1", .state = ARM_CP_STATE_AA64,
21
- * Note that new_el can never be 0. If cur_el is 0, then
25
- .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 0,
22
- * el0_a64 is is_a64(), else el0_a64 is ignored.
26
- .access = PL1_RW, .type = ARM_CP_SVE,
23
- */
27
- .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[1]),
24
- aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
28
- .writefn = zcr_write, .raw_writefn = raw_write
25
+ if (tcg_enabled()) {
29
-};
26
+ /*
30
-
27
+ * Note that new_el can never be 0. If cur_el is 0, then
31
-static const ARMCPRegInfo zcr_el2_reginfo = {
28
+ * el0_a64 is is_a64(), else el0_a64 is ignored.
32
- .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
29
+ */
33
- .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
30
+ aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
34
- .access = PL2_RW, .type = ARM_CP_SVE,
31
+ }
35
- .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[2]),
32
36
- .writefn = zcr_write, .raw_writefn = raw_write
33
if (cur_el < new_el) {
37
-};
34
/*
38
-
39
-static const ARMCPRegInfo zcr_no_el2_reginfo = {
40
- .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
41
- .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
42
- .access = PL2_RW, .type = ARM_CP_SVE,
43
- .readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore
44
-};
45
-
46
-static const ARMCPRegInfo zcr_el3_reginfo = {
47
- .name = "ZCR_EL3", .state = ARM_CP_STATE_AA64,
48
- .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 0,
49
- .access = PL3_RW, .type = ARM_CP_SVE,
50
- .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[3]),
51
- .writefn = zcr_write, .raw_writefn = raw_write
52
+static const ARMCPRegInfo zcr_reginfo[] = {
53
+ { .name = "ZCR_EL1", .state = ARM_CP_STATE_AA64,
54
+ .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 0,
55
+ .access = PL1_RW, .type = ARM_CP_SVE,
56
+ .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[1]),
57
+ .writefn = zcr_write, .raw_writefn = raw_write },
58
+ { .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
59
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
60
+ .access = PL2_RW, .type = ARM_CP_SVE,
61
+ .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[2]),
62
+ .writefn = zcr_write, .raw_writefn = raw_write },
63
+ { .name = "ZCR_EL3", .state = ARM_CP_STATE_AA64,
64
+ .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 0,
65
+ .access = PL3_RW, .type = ARM_CP_SVE,
66
+ .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[3]),
67
+ .writefn = zcr_write, .raw_writefn = raw_write },
68
};
69
70
void hw_watchpoint_update(ARMCPU *cpu, int n)
71
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
72
}
73
74
if (cpu_isar_feature(aa64_sve, cpu)) {
75
- define_one_arm_cp_reg(cpu, &zcr_el1_reginfo);
76
- if (arm_feature(env, ARM_FEATURE_EL2)) {
77
- define_one_arm_cp_reg(cpu, &zcr_el2_reginfo);
78
- } else {
79
- define_one_arm_cp_reg(cpu, &zcr_no_el2_reginfo);
80
- }
81
- if (arm_feature(env, ARM_FEATURE_EL3)) {
82
- define_one_arm_cp_reg(cpu, &zcr_el3_reginfo);
83
- }
84
+ define_arm_cp_regs(cpu, zcr_reginfo);
85
}
86
87
#ifdef TARGET_AARCH64
88
--
35
--
89
2.25.1
36
2.34.1
37
38
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
Use FIELD_DP{32,64} to manipulate id_pfr1 and id_aa64pfr0
3
Move this earlier to make the next patch diff cleaner. While here
4
during arm_cpu_realizefn.
4
update the comment slightly to not give the impression that the
5
misalignment affects only TCG.
5
6
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Message-id: 20220506180242.216785-11-richard.henderson@linaro.org
9
Signed-off-by: Fabiano Rosas <farosas@suse.de>
10
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
12
---
11
target/arm/cpu.c | 22 +++++++++++++---------
13
target/arm/machine.c | 18 +++++++++---------
12
1 file changed, 13 insertions(+), 9 deletions(-)
14
1 file changed, 9 insertions(+), 9 deletions(-)
13
15
14
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
16
diff --git a/target/arm/machine.c b/target/arm/machine.c
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.c
18
--- a/target/arm/machine.c
17
+++ b/target/arm/cpu.c
19
+++ b/target/arm/machine.c
18
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
20
@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
19
*/
21
}
20
unset_feature(env, ARM_FEATURE_EL3);
21
22
- /* Disable the security extension feature bits in the processor feature
23
- * registers as well. These are id_pfr1[7:4] and id_aa64pfr0[15:12].
24
+ /*
25
+ * Disable the security extension feature bits in the processor
26
+ * feature registers as well.
27
*/
28
- cpu->isar.id_pfr1 &= ~0xf0;
29
- cpu->isar.id_aa64pfr0 &= ~0xf000;
30
+ cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1, ID_PFR1, SECURITY, 0);
31
+ cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
32
+ ID_AA64PFR0, EL3, 0);
33
}
22
}
34
23
35
if (!cpu->has_el2) {
24
+ /*
36
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
25
+ * Misaligned thumb pc is architecturally impossible. Fail the
26
+ * incoming migration. For TCG it would trigger the assert in
27
+ * thumb_tr_translate_insn().
28
+ */
29
+ if (!is_a64(env) && env->thumb && (env->regs[15] & 1)) {
30
+ return -1;
31
+ }
32
+
33
hw_breakpoint_update_all(cpu);
34
hw_watchpoint_update_all(cpu);
35
36
@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
37
}
37
}
38
}
38
39
39
if (!arm_feature(env, ARM_FEATURE_EL2)) {
40
- /*
40
- /* Disable the hypervisor feature bits in the processor feature
41
- * Misaligned thumb pc is architecturally impossible.
41
- * registers if we don't have EL2. These are id_pfr1[15:12] and
42
- * We have an assert in thumb_tr_translate_insn to verify this.
42
- * id_aa64pfr0_el1[11:8].
43
- * Fail an incoming migrate to avoid this assert.
43
+ /*
44
- */
44
+ * Disable the hypervisor feature bits in the processor feature
45
- if (!is_a64(env) && env->thumb && (env->regs[15] & 1)) {
45
+ * registers if we don't have EL2.
46
- return -1;
46
*/
47
- }
47
- cpu->isar.id_aa64pfr0 &= ~0xf00;
48
-
48
- cpu->isar.id_pfr1 &= ~0xf000;
49
if (!kvm_enabled()) {
49
+ cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
50
pmu_op_finish(&cpu->env);
50
+ ID_AA64PFR0, EL2, 0);
51
+ cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1,
52
+ ID_PFR1, VIRTUALIZATION, 0);
53
}
51
}
54
55
#ifndef CONFIG_USER_ONLY
56
--
52
--
57
2.25.1
53
2.34.1
54
55
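FIELD_DP32()/FIELD_DP64() deposit a value into a named bitfield instead of open-coding the mask, which is what the cpu.c hunk above switches to. A standalone sketch of the same operation, using the [7:4] SECURITY position quoted in the old comment and a hand-written deposit helper in place of hw/registerfields.h:

    #include <stdint.h>
    #include <stdio.h>

    /* hand-written equivalent of depositing into a length-bit field at 'start' */
    static uint32_t deposit32(uint32_t value, int start, int length, uint32_t field)
    {
        uint32_t mask = ((1u << length) - 1) << start;
        return (value & ~mask) | ((field << start) & mask);
    }

    int main(void)
    {
        uint32_t id_pfr1 = 0x00011011;   /* example reset value from this series */

        /* old style: id_pfr1 &= ~0xf0;   new style names the field instead */
        uint32_t cleared = deposit32(id_pfr1, 4, 4, 0); /* ID_PFR1.SECURITY = 0 */

        printf("0x%08x -> 0x%08x\n", id_pfr1, cleared);
        return 0;
    }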
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
More gracefully handle cpregs when EL2 and/or EL3 are missing.
3
Since commit cf7c6d1004 ("target/arm: Split out cpregs.h") we now have
4
If the reg is entirely inaccessible, do not register it at all.
4
a cpregs.h header which is more suitable for this code.
5
If the reg is for EL2, and EL3 is present but EL2 is not,
5
6
either discard, squash to res0, const, or keep unchanged.
6
Code moved verbatim.
7
7
8
Per rule RJFFP, mark the 4 aarch32 hypervisor access registers
8
Signed-off-by: Fabiano Rosas <farosas@suse.de>
9
with ARM_CP_EL3_NO_EL2_KEEP, and mark all of the EL2 address
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
translation and tlb invalidation "regs" ARM_CP_EL3_NO_EL2_UNDEF.
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
Mark the 2 virtualization processor id regs ARM_CP_EL3_NO_EL2_C_NZ.
11
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
12
13
This will simplify cpreg registration for conditional arm features.
14
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
17
Message-id: 20220506180242.216785-2-richard.henderson@linaro.org
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
---
13
---
20
target/arm/cpregs.h | 11 +++
14
target/arm/cpregs.h | 98 +++++++++++++++++++++++++++++++++++++++++++++
21
target/arm/helper.c | 178 ++++++++++++++++++++++++++++++--------------
15
target/arm/cpu.h | 91 -----------------------------------------
22
2 files changed, 133 insertions(+), 56 deletions(-)
16
2 files changed, 98 insertions(+), 91 deletions(-)
23
17
24
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
18
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
25
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/cpregs.h
20
--- a/target/arm/cpregs.h
27
+++ b/target/arm/cpregs.h
21
+++ b/target/arm/cpregs.h
28
@@ -XXX,XX +XXX,XX @@ enum {
22
@@ -XXX,XX +XXX,XX @@ enum {
29
ARM_CP_SVE = 1 << 14,
23
ARM_CP_SME = 1 << 19,
30
/* Flag: Do not expose in gdb sysreg xml. */
31
ARM_CP_NO_GDB = 1 << 15,
32
+ /*
33
+ * Flags: If EL3 but not EL2...
34
+ * - UNDEF: discard the cpreg,
35
+ * - KEEP: retain the cpreg as is,
36
+ * - C_NZ: set const on the cpreg, but retain resetvalue,
37
+ * - else: set const on the cpreg, zero resetvalue, aka RES0.
38
+ * See rule RJFFP in section D1.1.3 of DDI0487H.a.
39
+ */
40
+ ARM_CP_EL3_NO_EL2_UNDEF = 1 << 16,
41
+ ARM_CP_EL3_NO_EL2_KEEP = 1 << 17,
42
+ ARM_CP_EL3_NO_EL2_C_NZ = 1 << 18,
43
};
24
};
44
25
45
/*
26
+/*
46
diff --git a/target/arm/helper.c b/target/arm/helper.c
27
+ * Interface for defining coprocessor registers.
47
index XXXXXXX..XXXXXXX 100644
28
+ * Registers are defined in tables of arm_cp_reginfo structs
48
--- a/target/arm/helper.c
29
+ * which are passed to define_arm_cp_regs().
49
+++ b/target/arm/helper.c
30
+ */
50
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
31
+
51
.access = PL1_RW, .readfn = spsel_read, .writefn = spsel_write },
32
+/*
52
{ .name = "FPEXC32_EL2", .state = ARM_CP_STATE_AA64,
33
+ * When looking up a coprocessor register we look for it
53
.opc0 = 3, .opc1 = 4, .crn = 5, .crm = 3, .opc2 = 0,
34
+ * via an integer which encodes all of:
54
- .access = PL2_RW, .type = ARM_CP_ALIAS | ARM_CP_FPU,
35
+ * coprocessor number
55
+ .access = PL2_RW,
36
+ * Crn, Crm, opc1, opc2 fields
56
+ .type = ARM_CP_ALIAS | ARM_CP_FPU | ARM_CP_EL3_NO_EL2_KEEP,
37
+ * 32 or 64 bit register (ie is it accessed via MRC/MCR
57
.fieldoffset = offsetof(CPUARMState, vfp.xregs[ARM_VFP_FPEXC]) },
38
+ * or via MRRC/MCRR?)
58
{ .name = "DACR32_EL2", .state = ARM_CP_STATE_AA64,
39
+ * non-secure/secure bank (AArch32 only)
59
.opc0 = 3, .opc1 = 4, .crn = 3, .crm = 0, .opc2 = 0,
40
+ * We allow 4 bits for opc1 because MRRC/MCRR have a 4 bit field.
60
- .access = PL2_RW, .resetvalue = 0,
41
+ * (In this case crn and opc2 should be zero.)
61
+ .access = PL2_RW, .resetvalue = 0, .type = ARM_CP_EL3_NO_EL2_KEEP,
42
+ * For AArch64, there is no 32/64 bit size distinction;
62
.writefn = dacr_write, .raw_writefn = raw_write,
43
+ * instead all registers have a 2 bit op0, 3 bit op1 and op2,
63
.fieldoffset = offsetof(CPUARMState, cp15.dacr32_el2) },
44
+ * and 4 bit CRn and CRm. The encoding patterns are chosen
64
{ .name = "IFSR32_EL2", .state = ARM_CP_STATE_AA64,
45
+ * to be easy to convert to and from the KVM encodings, and also
65
.opc0 = 3, .opc1 = 4, .crn = 5, .crm = 0, .opc2 = 1,
46
+ * so that the hashtable can contain both AArch32 and AArch64
66
- .access = PL2_RW, .resetvalue = 0,
47
+ * registers (to allow for interprocessing where we might run
67
+ .access = PL2_RW, .resetvalue = 0, .type = ARM_CP_EL3_NO_EL2_KEEP,
48
+ * 32 bit code on a 64 bit core).
68
.fieldoffset = offsetof(CPUARMState, cp15.ifsr32_el2) },
49
+ */
69
{ .name = "SPSR_IRQ", .state = ARM_CP_STATE_AA64,
50
+/*
70
.type = ARM_CP_ALIAS,
51
+ * This bit is private to our hashtable cpreg; in KVM register
71
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
52
+ * IDs the AArch64/32 distinction is the KVM_REG_ARM/ARM64
72
.writefn = tlbimva_hyp_is_write },
53
+ * in the upper bits of the 64 bit ID.
73
{ .name = "TLBI_ALLE2", .state = ARM_CP_STATE_AA64,
54
+ */
74
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0,
55
+#define CP_REG_AA64_SHIFT 28
75
- .type = ARM_CP_NO_RAW, .access = PL2_W,
56
+#define CP_REG_AA64_MASK (1 << CP_REG_AA64_SHIFT)
76
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
57
+
77
.writefn = tlbi_aa64_alle2_write },
58
+/*
78
{ .name = "TLBI_VAE2", .state = ARM_CP_STATE_AA64,
59
+ * To enable banking of coprocessor registers depending on ns-bit we
79
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1,
60
+ * add a bit to distinguish between secure and non-secure cpregs in the
80
- .type = ARM_CP_NO_RAW, .access = PL2_W,
61
+ * hashtable.
81
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
62
+ */
82
.writefn = tlbi_aa64_vae2_write },
63
+#define CP_REG_NS_SHIFT 29
83
{ .name = "TLBI_VALE2", .state = ARM_CP_STATE_AA64,
64
+#define CP_REG_NS_MASK (1 << CP_REG_NS_SHIFT)
84
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5,
65
+
85
- .access = PL2_W, .type = ARM_CP_NO_RAW,
66
+#define ENCODE_CP_REG(cp, is64, ns, crn, crm, opc1, opc2) \
86
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
67
+ ((ns) << CP_REG_NS_SHIFT | ((cp) << 16) | ((is64) << 15) | \
87
.writefn = tlbi_aa64_vae2_write },
68
+ ((crn) << 11) | ((crm) << 7) | ((opc1) << 3) | (opc2))
88
{ .name = "TLBI_ALLE2IS", .state = ARM_CP_STATE_AA64,
69
+
89
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0,
70
+#define ENCODE_AA64_CP_REG(cp, crn, crm, op0, op1, op2) \
90
- .access = PL2_W, .type = ARM_CP_NO_RAW,
71
+ (CP_REG_AA64_MASK | \
91
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
72
+ ((cp) << CP_REG_ARM_COPROC_SHIFT) | \
92
.writefn = tlbi_aa64_alle2is_write },
73
+ ((op0) << CP_REG_ARM64_SYSREG_OP0_SHIFT) | \
93
{ .name = "TLBI_VAE2IS", .state = ARM_CP_STATE_AA64,
74
+ ((op1) << CP_REG_ARM64_SYSREG_OP1_SHIFT) | \
94
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1,
75
+ ((crn) << CP_REG_ARM64_SYSREG_CRN_SHIFT) | \
95
- .type = ARM_CP_NO_RAW, .access = PL2_W,
76
+ ((crm) << CP_REG_ARM64_SYSREG_CRM_SHIFT) | \
96
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
77
+ ((op2) << CP_REG_ARM64_SYSREG_OP2_SHIFT))
97
.writefn = tlbi_aa64_vae2is_write },
78
+
98
{ .name = "TLBI_VALE2IS", .state = ARM_CP_STATE_AA64,
79
+/*
99
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5,
80
+ * Convert a full 64 bit KVM register ID to the truncated 32 bit
100
- .access = PL2_W, .type = ARM_CP_NO_RAW,
81
+ * version used as a key for the coprocessor register hashtable
101
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
82
+ */
102
.writefn = tlbi_aa64_vae2is_write },
83
+static inline uint32_t kvm_to_cpreg_id(uint64_t kvmid)
103
#ifndef CONFIG_USER_ONLY
84
+{
104
/* Unlike the other EL2-related AT operations, these must
85
+ uint32_t cpregid = kvmid;
105
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
86
+ if ((kvmid & CP_REG_ARCH_MASK) == CP_REG_ARM64) {
106
{ .name = "AT_S1E2R", .state = ARM_CP_STATE_AA64,
87
+ cpregid |= CP_REG_AA64_MASK;
107
.opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 0,
88
+ } else {
108
.access = PL2_W, .accessfn = at_s1e2_access,
89
+ if ((kvmid & CP_REG_SIZE_MASK) == CP_REG_SIZE_U64) {
109
- .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, .writefn = ats_write64 },
90
+ cpregid |= (1 << 15);
110
+ .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC | ARM_CP_EL3_NO_EL2_UNDEF,
91
+ }
111
+ .writefn = ats_write64 },
92
+
112
{ .name = "AT_S1E2W", .state = ARM_CP_STATE_AA64,
113
.opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 1,
114
.access = PL2_W, .accessfn = at_s1e2_access,
115
- .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, .writefn = ats_write64 },
116
+ .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC | ARM_CP_EL3_NO_EL2_UNDEF,
117
+ .writefn = ats_write64 },
118
/* The AArch32 ATS1H* operations are CONSTRAINED UNPREDICTABLE
119
* if EL2 is not implemented; we choose to UNDEF. Behaviour at EL3
120
* with SCR.NS == 0 outside Monitor mode is UNPREDICTABLE; we choose
121
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
122
{ .name = "DBGVCR32_EL2", .state = ARM_CP_STATE_AA64,
123
.opc0 = 2, .opc1 = 4, .crn = 0, .crm = 7, .opc2 = 0,
124
.access = PL2_RW, .accessfn = access_tda,
125
- .type = ARM_CP_NOP },
126
+ .type = ARM_CP_NOP | ARM_CP_EL3_NO_EL2_KEEP },
127
/* Dummy MDCCINT_EL1, since we don't implement the Debug Communications
128
* Channel but Linux may try to access this register. The 32-bit
129
* alias is DBGDCCINT.
130
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
131
.access = PL2_W, .type = ARM_CP_NOP },
132
{ .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64,
133
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1,
134
- .access = PL2_W, .type = ARM_CP_NO_RAW,
135
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
136
.writefn = tlbi_aa64_rvae2is_write },
137
{ .name = "TLBI_RVALE2IS", .state = ARM_CP_STATE_AA64,
138
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 5,
139
- .access = PL2_W, .type = ARM_CP_NO_RAW,
140
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
141
.writefn = tlbi_aa64_rvae2is_write },
142
{ .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64,
143
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2,
144
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
145
.access = PL2_W, .type = ARM_CP_NOP },
146
{ .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64,
147
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1,
148
- .access = PL2_W, .type = ARM_CP_NO_RAW,
149
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
150
.writefn = tlbi_aa64_rvae2is_write },
151
{ .name = "TLBI_RVALE2OS", .state = ARM_CP_STATE_AA64,
152
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 5,
153
- .access = PL2_W, .type = ARM_CP_NO_RAW,
154
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
155
.writefn = tlbi_aa64_rvae2is_write },
156
{ .name = "TLBI_RVAE2", .state = ARM_CP_STATE_AA64,
157
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 1,
158
- .access = PL2_W, .type = ARM_CP_NO_RAW,
159
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
160
.writefn = tlbi_aa64_rvae2_write },
161
{ .name = "TLBI_RVALE2", .state = ARM_CP_STATE_AA64,
162
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 5,
163
- .access = PL2_W, .type = ARM_CP_NO_RAW,
164
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
165
.writefn = tlbi_aa64_rvae2_write },
166
{ .name = "TLBI_RVAE3IS", .state = ARM_CP_STATE_AA64,
167
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 1,
168
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbios_reginfo[] = {
169
.writefn = tlbi_aa64_vae1is_write },
170
{ .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64,
171
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0,
172
- .access = PL2_W, .type = ARM_CP_NO_RAW,
173
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
174
.writefn = tlbi_aa64_alle2is_write },
175
{ .name = "TLBI_VAE2OS", .state = ARM_CP_STATE_AA64,
176
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 1,
177
- .access = PL2_W, .type = ARM_CP_NO_RAW,
178
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
179
.writefn = tlbi_aa64_vae2is_write },
180
{ .name = "TLBI_ALLE1OS", .state = ARM_CP_STATE_AA64,
181
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 4,
182
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbios_reginfo[] = {
183
.writefn = tlbi_aa64_alle1is_write },
184
{ .name = "TLBI_VALE2OS", .state = ARM_CP_STATE_AA64,
185
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 5,
186
- .access = PL2_W, .type = ARM_CP_NO_RAW,
187
+ .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
188
.writefn = tlbi_aa64_vae2is_write },
189
{ .name = "TLBI_VMALLS12E1OS", .state = ARM_CP_STATE_AA64,
190
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 6,
191
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
192
{ .name = "VPIDR", .state = ARM_CP_STATE_AA32,
193
.cp = 15, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0,
194
.access = PL2_RW, .accessfn = access_el3_aa32ns,
195
- .resetvalue = cpu->midr, .type = ARM_CP_ALIAS,
196
+ .resetvalue = cpu->midr,
197
+ .type = ARM_CP_ALIAS | ARM_CP_EL3_NO_EL2_C_NZ,
198
.fieldoffset = offsetoflow32(CPUARMState, cp15.vpidr_el2) },
199
{ .name = "VPIDR_EL2", .state = ARM_CP_STATE_AA64,
200
.opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0,
201
.access = PL2_RW, .resetvalue = cpu->midr,
202
+ .type = ARM_CP_EL3_NO_EL2_C_NZ,
203
.fieldoffset = offsetof(CPUARMState, cp15.vpidr_el2) },
204
{ .name = "VMPIDR", .state = ARM_CP_STATE_AA32,
205
.cp = 15, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5,
206
.access = PL2_RW, .accessfn = access_el3_aa32ns,
207
- .resetvalue = vmpidr_def, .type = ARM_CP_ALIAS,
208
+ .resetvalue = vmpidr_def,
209
+ .type = ARM_CP_ALIAS | ARM_CP_EL3_NO_EL2_C_NZ,
210
.fieldoffset = offsetoflow32(CPUARMState, cp15.vmpidr_el2) },
211
{ .name = "VMPIDR_EL2", .state = ARM_CP_STATE_AA64,
212
.opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5,
213
- .access = PL2_RW,
214
- .resetvalue = vmpidr_def,
215
+ .access = PL2_RW, .resetvalue = vmpidr_def,
216
+ .type = ARM_CP_EL3_NO_EL2_C_NZ,
217
.fieldoffset = offsetof(CPUARMState, cp15.vmpidr_el2) },
218
};
219
define_arm_cp_regs(cpu, vpidr_regs);
220
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
221
int crm, int opc1, int opc2,
222
const char *name)
223
{
224
+ CPUARMState *env = &cpu->env;
225
uint32_t key;
226
ARMCPRegInfo *r2;
227
bool is64 = r->type & ARM_CP_64BIT;
228
bool ns = secstate & ARM_CP_SECSTATE_NS;
229
int cp = r->cp;
230
- bool isbanked;
231
size_t name_len;
232
+ bool make_const;
233
234
switch (state) {
235
case ARM_CP_STATE_AA32:
236
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
237
}
238
}
239
240
+ /*
241
+ * Eliminate registers that are not present because the EL is missing.
242
+ * Doing this here makes it easier to put all registers for a given
243
+ * feature into the same ARMCPRegInfo array and define them all at once.
244
+ */
245
+ make_const = false;
246
+ if (arm_feature(env, ARM_FEATURE_EL3)) {
247
+ /*
93
+ /*
248
+ * An EL2 register without EL2 but with EL3 is (usually) RES0.
94
+ * KVM is always non-secure so add the NS flag on AArch32 register
249
+ * See rule RJFFP in section D1.1.3 of DDI0487H.a.
95
+ * entries.
250
+ */
96
+ */
251
+ int min_el = ctz32(r->access) / 2;
97
+ cpregid |= 1 << CP_REG_NS_SHIFT;
252
+ if (min_el == 2 && !arm_feature(env, ARM_FEATURE_EL2)) {
98
+ }
253
+ if (r->type & ARM_CP_EL3_NO_EL2_UNDEF) {
99
+ return cpregid;
254
+ return;
100
+}
255
+ }
101
+
256
+ make_const = !(r->type & ARM_CP_EL3_NO_EL2_KEEP);
102
+/*
257
+ }
103
+ * Convert a truncated 32 bit hashtable key into the full
104
+ * 64 bit KVM register ID.
105
+ */
106
+static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
107
+{
108
+ uint64_t kvmid;
109
+
110
+ if (cpregid & CP_REG_AA64_MASK) {
111
+ kvmid = cpregid & ~CP_REG_AA64_MASK;
112
+ kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM64;
258
+ } else {
113
+ } else {
259
+ CPAccessRights max_el = (arm_feature(env, ARM_FEATURE_EL2)
114
+ kvmid = cpregid & ~(1 << 15);
260
+ ? PL2_RW : PL1_RW);
115
+ if (cpregid & (1 << 15)) {
261
+ if ((r->access & max_el) == 0) {
116
+ kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM;
262
+ return;
117
+ } else {
118
+ kvmid |= CP_REG_SIZE_U32 | CP_REG_ARM;
263
+ }
119
+ }
264
+ }
120
+ }
265
+
121
+ return kvmid;
266
/* Combine cpreg and name into one allocation. */
122
+}
267
name_len = strlen(name) + 1;
123
+
268
r2 = g_malloc(sizeof(*r2) + name_len);
124
/*
269
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
125
* Valid values for ARMCPRegInfo state field, indicating which of
270
r2->opaque = opaque;
126
* the AArch32 and AArch64 execution states this register is visible in.
271
}
127
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
272
128
index XXXXXXX..XXXXXXX 100644
273
- isbanked = r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1];
129
--- a/target/arm/cpu.h
274
- if (isbanked) {
130
+++ b/target/arm/cpu.h
275
+ if (make_const) {
131
@@ -XXX,XX +XXX,XX @@ void arm_cpu_list(void);
276
+ /* This should not have been a very special register to begin. */
132
uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
277
+ int old_special = r2->type & ARM_CP_SPECIAL_MASK;
133
uint32_t cur_el, bool secure);
278
+ assert(old_special == 0 || old_special == ARM_CP_NOP);
134
279
/*
135
-/* Interface for defining coprocessor registers.
280
- * Register is banked (using both entries in array).
136
- * Registers are defined in tables of arm_cp_reginfo structs
281
- * Overwriting fieldoffset as the array is only used to define
137
- * which are passed to define_arm_cp_regs().
282
- * banked registers but later only fieldoffset is used.
138
- */
283
+ * Set the special function to CONST, retaining the other flags.
139
-
284
+ * This is important for e.g. ARM_CP_SVE so that we still
140
-/* When looking up a coprocessor register we look for it
285
+ * take the SVE trap if CPTR_EL3.EZ == 0.
141
- * via an integer which encodes all of:
286
*/
142
- * coprocessor number
287
- r2->fieldoffset = r->bank_fieldoffsets[ns];
143
- * Crn, Crm, opc1, opc2 fields
144
- * 32 or 64 bit register (ie is it accessed via MRC/MCR
145
- * or via MRRC/MCRR?)
146
- * non-secure/secure bank (AArch32 only)
147
- * We allow 4 bits for opc1 because MRRC/MCRR have a 4 bit field.
148
- * (In this case crn and opc2 should be zero.)
149
- * For AArch64, there is no 32/64 bit size distinction;
150
- * instead all registers have a 2 bit op0, 3 bit op1 and op2,
151
- * and 4 bit CRn and CRm. The encoding patterns are chosen
152
- * to be easy to convert to and from the KVM encodings, and also
153
- * so that the hashtable can contain both AArch32 and AArch64
154
- * registers (to allow for interprocessing where we might run
155
- * 32 bit code on a 64 bit core).
156
- */
157
-/* This bit is private to our hashtable cpreg; in KVM register
158
- * IDs the AArch64/32 distinction is the KVM_REG_ARM/ARM64
159
- * in the upper bits of the 64 bit ID.
160
- */
161
-#define CP_REG_AA64_SHIFT 28
162
-#define CP_REG_AA64_MASK (1 << CP_REG_AA64_SHIFT)
163
-
164
-/* To enable banking of coprocessor registers depending on ns-bit we
165
- * add a bit to distinguish between secure and non-secure cpregs in the
166
- * hashtable.
167
- */
168
-#define CP_REG_NS_SHIFT 29
169
-#define CP_REG_NS_MASK (1 << CP_REG_NS_SHIFT)
170
-
171
-#define ENCODE_CP_REG(cp, is64, ns, crn, crm, opc1, opc2) \
172
- ((ns) << CP_REG_NS_SHIFT | ((cp) << 16) | ((is64) << 15) | \
173
- ((crn) << 11) | ((crm) << 7) | ((opc1) << 3) | (opc2))
174
-
175
-#define ENCODE_AA64_CP_REG(cp, crn, crm, op0, op1, op2) \
176
- (CP_REG_AA64_MASK | \
177
- ((cp) << CP_REG_ARM_COPROC_SHIFT) | \
178
- ((op0) << CP_REG_ARM64_SYSREG_OP0_SHIFT) | \
179
- ((op1) << CP_REG_ARM64_SYSREG_OP1_SHIFT) | \
180
- ((crn) << CP_REG_ARM64_SYSREG_CRN_SHIFT) | \
181
- ((crm) << CP_REG_ARM64_SYSREG_CRM_SHIFT) | \
182
- ((op2) << CP_REG_ARM64_SYSREG_OP2_SHIFT))
183
-
184
-/* Convert a full 64 bit KVM register ID to the truncated 32 bit
185
- * version used as a key for the coprocessor register hashtable
186
- */
187
-static inline uint32_t kvm_to_cpreg_id(uint64_t kvmid)
188
-{
189
- uint32_t cpregid = kvmid;
190
- if ((kvmid & CP_REG_ARCH_MASK) == CP_REG_ARM64) {
191
- cpregid |= CP_REG_AA64_MASK;
192
- } else {
193
- if ((kvmid & CP_REG_SIZE_MASK) == CP_REG_SIZE_U64) {
194
- cpregid |= (1 << 15);
195
- }
196
-
197
- /* KVM is always non-secure so add the NS flag on AArch32 register
198
- * entries.
199
- */
200
- cpregid |= 1 << CP_REG_NS_SHIFT;
288
- }
201
- }
289
+ r2->type = (r2->type & ~ARM_CP_SPECIAL_MASK) | ARM_CP_CONST;
202
- return cpregid;
290
+ /*
203
-}
291
+ * Usually, these registers become RES0, but there are a few
204
-
292
+ * special cases like VPIDR_EL2 which have a constant non-zero
205
-/* Convert a truncated 32 bit hashtable key into the full
293
+ * value with writes ignored.
206
- * 64 bit KVM register ID.
294
+ */
207
- */
295
+ if (!(r->type & ARM_CP_EL3_NO_EL2_C_NZ)) {
208
-static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
296
+ r2->resetvalue = 0;
209
-{
297
+ }
210
- uint64_t kvmid;
298
+ /*
211
-
299
+ * ARM_CP_CONST has precedence, so removing the callbacks and
212
- if (cpregid & CP_REG_AA64_MASK) {
300
+ * offsets are not strictly necessary, but it is potentially
213
- kvmid = cpregid & ~CP_REG_AA64_MASK;
301
+ * less confusing to debug later.
214
- kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM64;
302
+ */
215
- } else {
303
+ r2->readfn = NULL;
216
- kvmid = cpregid & ~(1 << 15);
304
+ r2->writefn = NULL;
217
- if (cpregid & (1 << 15)) {
305
+ r2->raw_readfn = NULL;
218
- kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM;
306
+ r2->raw_writefn = NULL;
219
- } else {
307
+ r2->resetfn = NULL;
220
- kvmid |= CP_REG_SIZE_U32 | CP_REG_ARM;
308
+ r2->fieldoffset = 0;
309
+ r2->bank_fieldoffsets[0] = 0;
310
+ r2->bank_fieldoffsets[1] = 0;
311
+ } else {
312
+ bool isbanked = r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1];
313
314
- if (state == ARM_CP_STATE_AA32) {
315
if (isbanked) {
316
/*
317
- * If the register is banked then we don't need to migrate or
318
- * reset the 32-bit instance in certain cases:
319
- *
320
- * 1) If the register has both 32-bit and 64-bit instances then we
321
- * can count on the 64-bit instance taking care of the
322
- * non-secure bank.
323
- * 2) If ARMv8 is enabled then we can count on a 64-bit version
324
- * taking care of the secure bank. This requires that separate
325
- * 32 and 64-bit definitions are provided.
326
+ * Register is banked (using both entries in array).
327
+ * Overwriting fieldoffset as the array is only used to define
328
+ * banked registers but later only fieldoffset is used.
329
*/
330
- if ((r->state == ARM_CP_STATE_BOTH && ns) ||
331
- (arm_feature(&cpu->env, ARM_FEATURE_V8) && !ns)) {
332
+ r2->fieldoffset = r->bank_fieldoffsets[ns];
333
+ }
334
+ if (state == ARM_CP_STATE_AA32) {
335
+ if (isbanked) {
336
+ /*
337
+ * If the register is banked then we don't need to migrate or
338
+ * reset the 32-bit instance in certain cases:
339
+ *
340
+ * 1) If the register has both 32-bit and 64-bit instances
341
+ * then we can count on the 64-bit instance taking care
342
+ * of the non-secure bank.
343
+ * 2) If ARMv8 is enabled then we can count on a 64-bit
344
+ * version taking care of the secure bank. This requires
345
+ * that separate 32 and 64-bit definitions are provided.
346
+ */
347
+ if ((r->state == ARM_CP_STATE_BOTH && ns) ||
348
+ (arm_feature(env, ARM_FEATURE_V8) && !ns)) {
349
+ r2->type |= ARM_CP_ALIAS;
350
+ }
351
+ } else if ((secstate != r->secure) && !ns) {
352
+ /*
353
+ * The register is not banked so we only want to allow
354
+ * migration of the non-secure instance.
355
+ */
356
r2->type |= ARM_CP_ALIAS;
357
}
358
- } else if ((secstate != r->secure) && !ns) {
359
- /*
360
- * The register is not banked so we only want to allow migration
361
- * of the non-secure instance.
362
- */
363
- r2->type |= ARM_CP_ALIAS;
364
- }
221
- }
365
222
- }
366
- if (HOST_BIG_ENDIAN &&
223
- return kvmid;
367
- r->state == ARM_CP_STATE_BOTH && r2->fieldoffset) {
224
-}
368
- r2->fieldoffset += sizeof(uint32_t);
225
-
369
+ if (HOST_BIG_ENDIAN &&
226
/* Return the highest implemented Exception Level */
370
+ r->state == ARM_CP_STATE_BOTH && r2->fieldoffset) {
227
static inline int arm_highest_el(CPUARMState *env)
371
+ r2->fieldoffset += sizeof(uint32_t);
228
{
372
+ }
373
}
374
}
375
376
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
377
* multiple times. Special registers (ie NOP/WFI) are
378
* never migratable and not even raw-accessible.
379
*/
380
- if (r->type & ARM_CP_SPECIAL_MASK) {
381
+ if (r2->type & ARM_CP_SPECIAL_MASK) {
382
r2->type |= ARM_CP_NO_RAW;
383
}
384
if (((r->crm == CP_ANY) && crm != 0) ||
385
--
229
--
386
2.25.1
230
2.34.1
231
232
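The new ARM_CP_EL3_NO_EL2_* flags reduce to a small decision table applied at registration time when EL3 is implemented but EL2 is not. A simplified sketch of that decision, with shortened names rather than the real ARMCPRegInfo handling:

    #include <stdbool.h>
    #include <stdio.h>

    enum disposition { REG_DROP, REG_KEEP, REG_CONST_NZ, REG_CONST_ZERO };

    /* flags mirror ARM_CP_EL3_NO_EL2_{UNDEF,KEEP,C_NZ} from the patch above */
    static enum disposition el3_no_el2(bool undef, bool keep, bool c_nz)
    {
        if (undef) {
            return REG_DROP;          /* do not register the cpreg at all */
        }
        if (keep) {
            return REG_KEEP;          /* register it unchanged */
        }
        /* default: constant; RES0 unless a non-zero resetvalue is retained */
        return c_nz ? REG_CONST_NZ : REG_CONST_ZERO;
    }

    int main(void)
    {
        printf("%d\n", el3_no_el2(false, false, false)); /* e.g. ZCR_EL2 -> RES0 */
        printf("%d\n", el3_no_el2(false, false, true));  /* e.g. VPIDR_EL2 */
        printf("%d\n", el3_no_el2(true,  false, false)); /* e.g. TLBI_VAE2 */
        return 0;
    }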
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Instead of starting with cortex-a15 and adding v8 features to
4
a v7 cpu, begin with a v8 cpu stripped of its aarch64 features.
5
This fixes the long-standing to-do where we only enabled v8
6
features for user-only.
7
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20220506180242.216785-7-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/cpu_tcg.c | 151 ++++++++++++++++++++++++++-----------------
14
1 file changed, 92 insertions(+), 59 deletions(-)
15
16
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpu_tcg.c
19
+++ b/target/arm/cpu_tcg.c
20
@@ -XXX,XX +XXX,XX @@ static void arm_v7m_class_init(ObjectClass *oc, void *data)
21
static void arm_max_initfn(Object *obj)
22
{
23
ARMCPU *cpu = ARM_CPU(obj);
24
+ uint32_t t;
25
26
- cortex_a15_initfn(obj);
27
+ /* aarch64_a57_initfn, advertising none of the aarch64 features */
28
+ cpu->dtb_compatible = "arm,cortex-a57";
29
+ set_feature(&cpu->env, ARM_FEATURE_V8);
30
+ set_feature(&cpu->env, ARM_FEATURE_NEON);
31
+ set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
32
+ set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
33
+ set_feature(&cpu->env, ARM_FEATURE_EL2);
34
+ set_feature(&cpu->env, ARM_FEATURE_EL3);
35
+ set_feature(&cpu->env, ARM_FEATURE_PMU);
36
+ cpu->midr = 0x411fd070;
37
+ cpu->revidr = 0x00000000;
38
+ cpu->reset_fpsid = 0x41034070;
39
+ cpu->isar.mvfr0 = 0x10110222;
40
+ cpu->isar.mvfr1 = 0x12111111;
41
+ cpu->isar.mvfr2 = 0x00000043;
42
+ cpu->ctr = 0x8444c004;
43
+ cpu->reset_sctlr = 0x00c50838;
44
+ cpu->isar.id_pfr0 = 0x00000131;
45
+ cpu->isar.id_pfr1 = 0x00011011;
46
+ cpu->isar.id_dfr0 = 0x03010066;
47
+ cpu->id_afr0 = 0x00000000;
48
+ cpu->isar.id_mmfr0 = 0x10101105;
49
+ cpu->isar.id_mmfr1 = 0x40000000;
50
+ cpu->isar.id_mmfr2 = 0x01260000;
51
+ cpu->isar.id_mmfr3 = 0x02102211;
52
+ cpu->isar.id_isar0 = 0x02101110;
53
+ cpu->isar.id_isar1 = 0x13112111;
54
+ cpu->isar.id_isar2 = 0x21232042;
55
+ cpu->isar.id_isar3 = 0x01112131;
56
+ cpu->isar.id_isar4 = 0x00011142;
57
+ cpu->isar.id_isar5 = 0x00011121;
58
+ cpu->isar.id_isar6 = 0;
59
+ cpu->isar.dbgdidr = 0x3516d000;
60
+ cpu->clidr = 0x0a200023;
61
+ cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
62
+ cpu->ccsidr[1] = 0x201fe012; /* 48KB L1 icache */
63
+ cpu->ccsidr[2] = 0x70ffe07a; /* 2048KB L2 cache */
64
+ define_cortex_a72_a57_a53_cp_reginfo(cpu);
65
66
- /* old-style VFP short-vector support */
67
- cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
68
+ /* Add additional features supported by QEMU */
69
+ t = cpu->isar.id_isar5;
70
+ t = FIELD_DP32(t, ID_ISAR5, AES, 2);
71
+ t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
72
+ t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
73
+ t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
74
+ t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
75
+ t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
76
+ cpu->isar.id_isar5 = t;
77
+
78
+ t = cpu->isar.id_isar6;
79
+ t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
80
+ t = FIELD_DP32(t, ID_ISAR6, DP, 1);
81
+ t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
82
+ t = FIELD_DP32(t, ID_ISAR6, SB, 1);
83
+ t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
84
+ t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
85
+ t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
86
+ cpu->isar.id_isar6 = t;
87
+
88
+ t = cpu->isar.mvfr1;
89
+ t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
90
+ t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
91
+ cpu->isar.mvfr1 = t;
92
+
93
+ t = cpu->isar.mvfr2;
94
+ t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
95
+ t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
96
+ cpu->isar.mvfr2 = t;
97
+
98
+ t = cpu->isar.id_mmfr3;
99
+ t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
100
+ cpu->isar.id_mmfr3 = t;
101
+
102
+ t = cpu->isar.id_mmfr4;
103
+ t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
104
+ t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
105
+ t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
106
+ t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
107
+ cpu->isar.id_mmfr4 = t;
108
+
109
+ t = cpu->isar.id_pfr0;
110
+ t = FIELD_DP32(t, ID_PFR0, DIT, 1);
111
+ cpu->isar.id_pfr0 = t;
112
+
113
+ t = cpu->isar.id_pfr2;
114
+ t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
115
+ cpu->isar.id_pfr2 = t;
116
117
#ifdef CONFIG_USER_ONLY
118
/*
119
- * We don't set these in system emulation mode for the moment,
120
- * since we don't correctly set (all of) the ID registers to
121
- * advertise them.
122
+ * Break with true ARMv8 and add back old-style VFP short-vector support.
123
+ * Only do this for user-mode, where -cpu max is the default, so that
124
+ * older v6 and v7 programs are more likely to work without adjustment.
125
*/
126
- set_feature(&cpu->env, ARM_FEATURE_V8);
127
- {
128
- uint32_t t;
129
-
130
- t = cpu->isar.id_isar5;
131
- t = FIELD_DP32(t, ID_ISAR5, AES, 2);
132
- t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
133
- t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
134
- t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
135
- t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
136
- t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
137
- cpu->isar.id_isar5 = t;
138
-
139
- t = cpu->isar.id_isar6;
140
- t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
141
- t = FIELD_DP32(t, ID_ISAR6, DP, 1);
142
- t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
143
- t = FIELD_DP32(t, ID_ISAR6, SB, 1);
144
- t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
145
- t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
146
- t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
147
- cpu->isar.id_isar6 = t;
148
-
149
- t = cpu->isar.mvfr1;
150
- t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
151
- t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
152
- cpu->isar.mvfr1 = t;
153
-
154
- t = cpu->isar.mvfr2;
155
- t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
156
- t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
157
- cpu->isar.mvfr2 = t;
158
-
159
- t = cpu->isar.id_mmfr3;
160
- t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
161
- cpu->isar.id_mmfr3 = t;
162
-
163
- t = cpu->isar.id_mmfr4;
164
- t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
165
- t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
166
- t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
167
- t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
168
- cpu->isar.id_mmfr4 = t;
169
-
170
- t = cpu->isar.id_pfr0;
171
- t = FIELD_DP32(t, ID_PFR0, DIT, 1);
172
- cpu->isar.id_pfr0 = t;
173
-
174
- t = cpu->isar.id_pfr2;
175
- t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
176
- cpu->isar.id_pfr2 = t;
177
- }
178
-#endif /* CONFIG_USER_ONLY */
179
+ cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
180
+#endif
181
}
182
#endif /* !TARGET_AARCH64 */
183
184
--
185
2.25.1
Deleted patch

From: Richard Henderson <richard.henderson@linaro.org>

We set this for qemu-system-aarch64, but failed to do so
for the strictly 32-bit emulation.

Fixes: 3bec78447a9 ("target/arm: Provide ARMv8.4-PMU in '-cpu max'")
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu_tcg.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
 t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
 cpu->isar.id_pfr2 = t;

+ t = cpu->isar.id_dfr0;
+ t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
+ cpu->isar.id_dfr0 = t;
+
 #ifdef CONFIG_USER_ONLY
 /*
 * Break with true ARMv8 and add back old-style VFP short-vector support.
--
2.25.1
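The deleted patch above advertises ID_DFR0.PerfMon = 5, i.e. the v8.4 PMU, to AArch32 guests. As a rough sketch of how such a value is consumed on the other side — assuming PerfMon occupies ID_DFR0 bits [27:24]; QEMU's real checks go through its isar_feature helpers:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Extract the PerfMon field, assumed to be ID_DFR0 bits [27:24]. */
    static unsigned id_dfr0_perfmon(uint32_t id_dfr0)
    {
        return (id_dfr0 >> 24) & 0xf;
    }

    static bool have_pmuv3p4(uint32_t id_dfr0)
    {
        return id_dfr0_perfmon(id_dfr0) >= 5;   /* 5 == PMUv3 for Armv8.4 */
    }

    int main(void)
    {
        uint32_t id_dfr0 = 5u << 24;            /* what the patch advertises */
        printf("PerfMon=%u, v8.4 PMU %s\n", id_dfr0_perfmon(id_dfr0),
               have_pmuv3p4(id_dfr0) ? "present" : "absent");
        return 0;
    }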
From: Richard Henderson <richard.henderson@linaro.org>

Update the legacy feature names to the current names.
Provide feature names for id changes that were not marked.
Sort the field updates into increasing bitfield order.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu64.c | 100 +++++++++++++++++++++----------------------
target/arm/cpu_tcg.c | 48 ++++++++++-----------
2 files changed, 74 insertions(+), 74 deletions(-)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 cpu->midr = t;

 t = cpu->isar.id_aa64isar0;
- t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
- t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
- t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */
+ t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* FEAT_PMULL */
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1); /* FEAT_SHA1 */
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* FEAT_SHA512 */
 t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
- t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2);
- t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1);
- t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1);
- t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1);
- t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
- t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
- t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1);
- t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* v8.5-CondM */
- t = FIELD_DP64(t, ID_AA64ISAR0, TLB, 2); /* FEAT_TLBIRANGE */
- t = FIELD_DP64(t, ID_AA64ISAR0, RNDR, 1);
+ t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2); /* FEAT_LSE */
+ t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1); /* FEAT_RDM */
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1); /* FEAT_SHA3 */
+ t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1); /* FEAT_SM3 */
+ t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1); /* FEAT_SM4 */
+ t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1); /* FEAT_DotProd */
+ t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1); /* FEAT_FHM */
+ t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* FEAT_FlagM2 */
+ t = FIELD_DP64(t, ID_AA64ISAR0, TLB, 2); /* FEAT_TLBIRANGE */
+ t = FIELD_DP64(t, ID_AA64ISAR0, RNDR, 1); /* FEAT_RNG */
 cpu->isar.id_aa64isar0 = t;

 t = cpu->isar.id_aa64isar1;
- t = FIELD_DP64(t, ID_AA64ISAR1, DPB, 2);
- t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 1);
- t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
- t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
- t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
- t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1);
- t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
- t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* ARMv8.4-RCPC */
- t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1);
+ t = FIELD_DP64(t, ID_AA64ISAR1, DPB, 2); /* FEAT_DPB2 */
+ t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 1); /* FEAT_JSCVT */
+ t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1); /* FEAT_FCMA */
+ t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* FEAT_LRCPC2 */
+ t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1); /* FEAT_FRINTTS */
+ t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1); /* FEAT_SB */
+ t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1); /* FEAT_SPECRES */
+ t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1); /* FEAT_BF16 */
+ t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1); /* FEAT_I8MM */
 cpu->isar.id_aa64isar1 = t;

 t = cpu->isar.id_aa64pfr0;
+ t = FIELD_DP64(t, ID_AA64PFR0, FP, 1); /* FEAT_FP16 */
+ t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1); /* FEAT_FP16 */
 t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
- t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
- t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
- t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);
- t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1);
+ t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); /* FEAT_SEL2 */
+ t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); /* FEAT_DIT */
 cpu->isar.id_aa64pfr0 = t;

 t = cpu->isar.id_aa64pfr1;
- t = FIELD_DP64(t, ID_AA64PFR1, BT, 1);
- t = FIELD_DP64(t, ID_AA64PFR1, SSBS, 2);
+ t = FIELD_DP64(t, ID_AA64PFR1, BT, 1); /* FEAT_BTI */
+ t = FIELD_DP64(t, ID_AA64PFR1, SSBS, 2); /* FEAT_SSBS2 */
 /*
 * Begin with full support for MTE. This will be downgraded to MTE=0
 * during realize if the board provides no tag memory, much like
 * we do for EL2 with the virtualization=on property.
 */
- t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3);
+ t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3); /* FEAT_MTE3 */
 cpu->isar.id_aa64pfr1 = t;

 t = cpu->isar.id_aa64mmfr0;
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 cpu->isar.id_aa64mmfr0 = t;

 t = cpu->isar.id_aa64mmfr1;
- t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* HPD */
- t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1);
- t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1);
- t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */
- t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* VMID16 */
- t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* TTS2UXN */
+ t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* FEAT_VMID16 */
+ t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1); /* FEAT_VHE */
+ t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* FEAT_HPDS */
+ t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1); /* FEAT_LOR */
+ t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* FEAT_PAN2 */
+ t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* FEAT_XNX */
 cpu->isar.id_aa64mmfr1 = t;

 t = cpu->isar.id_aa64mmfr2;
- t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1);
- t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* TTCNP */
- t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* TTST */
- t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
- t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
- t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2); /* FEAT_BBM at level 2 */
+ t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* FEAT_TTCNP */
+ t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1); /* FEAT_UAO */
+ t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
+ t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* FEAT_TTST */
+ t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
+ t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2); /* FEAT_BBM at level 2 */
 cpu->isar.id_aa64mmfr2 = t;

 t = cpu->isar.id_aa64zfr0;
 t = FIELD_DP64(t, ID_AA64ZFR0, SVEVER, 1);
- t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2); /* PMULL */
- t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1);
- t = FIELD_DP64(t, ID_AA64ZFR0, BFLOAT16, 1);
- t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1);
- t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1);
- t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1);
- t = FIELD_DP64(t, ID_AA64ZFR0, F32MM, 1);
- t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1);
+ t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2); /* FEAT_SVE_PMULL128 */
+ t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1); /* FEAT_SVE_BitPerm */
+ t = FIELD_DP64(t, ID_AA64ZFR0, BFLOAT16, 1); /* FEAT_BF16 */
+ t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1); /* FEAT_SVE_SHA3 */
+ t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1); /* FEAT_SVE_SM4 */
+ t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1); /* FEAT_I8MM */
+ t = FIELD_DP64(t, ID_AA64ZFR0, F32MM, 1); /* FEAT_F32MM */
+ t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1); /* FEAT_F64MM */
 cpu->isar.id_aa64zfr0 = t;

 t = cpu->isar.id_aa64dfr0;
- t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */
+ t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* FEAT_PMUv3p4 */
 cpu->isar.id_aa64dfr0 = t;

 /* Replicate the same data to the 32-bit id registers. */
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)

 /* Add additional features supported by QEMU */
 t = cpu->isar.id_isar5;
- t = FIELD_DP32(t, ID_ISAR5, AES, 2);
- t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
- t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
+ t = FIELD_DP32(t, ID_ISAR5, AES, 2); /* FEAT_PMULL */
+ t = FIELD_DP32(t, ID_ISAR5, SHA1, 1); /* FEAT_SHA1 */
+ t = FIELD_DP32(t, ID_ISAR5, SHA2, 1); /* FEAT_SHA256 */
 t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
- t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
- t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
+ t = FIELD_DP32(t, ID_ISAR5, RDM, 1); /* FEAT_RDM */
+ t = FIELD_DP32(t, ID_ISAR5, VCMA, 1); /* FEAT_FCMA */
 cpu->isar.id_isar5 = t;

 t = cpu->isar.id_isar6;
- t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
- t = FIELD_DP32(t, ID_ISAR6, DP, 1);
- t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
- t = FIELD_DP32(t, ID_ISAR6, SB, 1);
- t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
- t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
- t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
+ t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1); /* FEAT_JSCVT */
+ t = FIELD_DP32(t, ID_ISAR6, DP, 1); /* Feat_DotProd */
+ t = FIELD_DP32(t, ID_ISAR6, FHM, 1); /* FEAT_FHM */
+ t = FIELD_DP32(t, ID_ISAR6, SB, 1); /* FEAT_SB */
+ t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1); /* FEAT_SPECRES */
+ t = FIELD_DP32(t, ID_ISAR6, BF16, 1); /* FEAT_AA32BF16 */
+ t = FIELD_DP32(t, ID_ISAR6, I8MM, 1); /* FEAT_AA32I8MM */
 cpu->isar.id_isar6 = t;

 t = cpu->isar.mvfr1;
- t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
- t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
+ t = FIELD_DP32(t, MVFR1, FPHP, 3); /* FEAT_FP16 */
+ t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* FEAT_FP16 */
 cpu->isar.mvfr1 = t;

 t = cpu->isar.mvfr2;
- t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
- t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
+ t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
+ t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
 cpu->isar.mvfr2 = t;

 t = cpu->isar.id_mmfr3;
- t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
+ t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* FEAT_PAN2 */
 cpu->isar.id_mmfr3 = t;

 t = cpu->isar.id_mmfr4;
- t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
- t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
- t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
- t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
+ t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* FEAT_AA32HPD */
+ t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
+ t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* FEAT_TTCNP */
+ t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* FEAT_XNX*/
 cpu->isar.id_mmfr4 = t;

 t = cpu->isar.id_pfr0;
- t = FIELD_DP32(t, ID_PFR0, DIT, 1);
+ t = FIELD_DP32(t, ID_PFR0, DIT, 1); /* FEAT_DIT */
 cpu->isar.id_pfr0 = t;

 t = cpu->isar.id_pfr2;
- t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
+ t = FIELD_DP32(t, ID_PFR2, SSBS, 1); /* FEAT_SSBS */
 cpu->isar.id_pfr2 = t;

 t = cpu->isar.id_dfr0;
- t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
+ t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* FEAT_PMUv3p4 */
 cpu->isar.id_dfr0 = t;
 }
--
2.25.1

From: Fabiano Rosas <farosas@suse.de>

If a test was tagged with the "accel" tag and the specified
accelerator is not present in the qemu binary, cancel the test.

We can now write tests without explicit calls to require_accelerator;
just the tag is enough.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
tests/avocado/avocado_qemu/__init__.py | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/tests/avocado/avocado_qemu/__init__.py b/tests/avocado/avocado_qemu/__init__.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/avocado/avocado_qemu/__init__.py
+++ b/tests/avocado/avocado_qemu/__init__.py
@@ -XXX,XX +XXX,XX @@ def setUp(self):

 super().setUp('qemu-system-')

+ accel_required = self._get_unique_tag_val('accel')
+ if accel_required:
+ self.require_accelerator(accel_required)
+
 self.machine = self.params.get('machine',
 default=self._get_unique_tag_val('machine'))
--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/emulation.rst | 1 +
target/arm/cpu64.c | 1 +
target/arm/cpu_tcg.c | 1 +
3 files changed, 3 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_PMULL (PMULL, PMULL2 instructions)
 - FEAT_PMUv3p1 (PMU Extensions v3.1)
 - FEAT_PMUv3p4 (PMU Extensions v3.4)
+- FEAT_RAS (Reliability, availability, and serviceability)
 - FEAT_RDM (Advanced SIMD rounding double multiply accumulate instructions)
 - FEAT_RNG (Random number generator)
 - FEAT_SB (Speculation Barrier)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 t = cpu->isar.id_aa64pfr0;
 t = FIELD_DP64(t, ID_AA64PFR0, FP, 1); /* FEAT_FP16 */
 t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1); /* FEAT_FP16 */
+ t = FIELD_DP64(t, ID_AA64PFR0, RAS, 1); /* FEAT_RAS */
 t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
 t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); /* FEAT_SEL2 */
 t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); /* FEAT_DIT */
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)

 t = cpu->isar.id_pfr0;
 t = FIELD_DP32(t, ID_PFR0, DIT, 1); /* FEAT_DIT */
+ t = FIELD_DP32(t, ID_PFR0, RAS, 1); /* FEAT_RAS */
 cpu->isar.id_pfr0 = t;

 t = cpu->isar.id_pfr2;
--
2.25.1

From: Fabiano Rosas <farosas@suse.de>

This allows the test to be skipped when TCG is not present in the QEMU
binary.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
tests/avocado/boot_linux_console.py | 1 +
tests/avocado/reverse_debugging.py | 8 ++++++++
2 files changed, 9 insertions(+)

diff --git a/tests/avocado/boot_linux_console.py b/tests/avocado/boot_linux_console.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/avocado/boot_linux_console.py
+++ b/tests/avocado/boot_linux_console.py
@@ -XXX,XX +XXX,XX @@ def test_arm_orangepi_uboot_netbsd9(self):

 def test_aarch64_raspi3_atf(self):
 """
+ :avocado: tags=accel:tcg
 :avocado: tags=arch:aarch64
 :avocado: tags=machine:raspi3b
 :avocado: tags=cpu:cortex-a53
diff --git a/tests/avocado/reverse_debugging.py b/tests/avocado/reverse_debugging.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/avocado/reverse_debugging.py
+++ b/tests/avocado/reverse_debugging.py
@@ -XXX,XX +XXX,XX @@ def reverse_debugging(self, shift=7, args=None):
 vm.shutdown()

 class ReverseDebugging_X86_64(ReverseDebugging):
+ """
+ :avocado: tags=accel:tcg
+ """
+
 REG_PC = 0x10
 REG_CS = 0x12
 def get_pc(self, g):
@@ -XXX,XX +XXX,XX @@ def test_x86_64_pc(self):
 self.reverse_debugging()

 class ReverseDebugging_AArch64(ReverseDebugging):
+ """
+ :avocado: tags=accel:tcg
+ """
+
 REG_PC = 32

 # unidentified gitlab timeout problem
--
2.34.1
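The FEAT_RAS patch above advertises the feature by writing 1 into the RAS field of ID_AA64PFR0 (and the corresponding field of ID_PFR0 for AArch32). A minimal sketch of the consumer side, assuming RAS occupies bits [31:28] of ID_AA64PFR0 — QEMU's own check lives in its isar_feature helpers, this is only an illustration:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Extract ID_AA64PFR0.RAS, assumed to be bits [31:28]. */
    static unsigned id_aa64pfr0_ras(uint64_t id_aa64pfr0)
    {
        return (unsigned)((id_aa64pfr0 >> 28) & 0xf);
    }

    static bool have_feat_ras(uint64_t id_aa64pfr0)
    {
        return id_aa64pfr0_ras(id_aa64pfr0) != 0;
    }

    int main(void)
    {
        uint64_t id_aa64pfr0 = (uint64_t)1 << 28;   /* RAS == 1, as in the patch */
        printf("FEAT_RAS %s\n",
               have_feat_ras(id_aa64pfr0) ? "advertised" : "absent");
        return 0;
    }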
From: Gavin Shan <gshan@redhat.com>

Currently, the SMP configuration isn't considered when the CPU
topology is populated. In this case, it's impossible to provide
the default CPU-to-NUMA mapping or association based on the socket
ID of the given CPU.

This takes the SMP configuration into account when the CPU topology
is populated. The die ID for the given CPU isn't assigned since
it's not supported on the arm/virt machine. The SMP configuration
used in qtest/numa-test/aarch64_numa_cpu() is also corrected to
avoid test failures.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20220503140304.855514-4-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/virt.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
 int n;
 unsigned int max_cpus = ms->smp.max_cpus;
 VirtMachineState *vms = VIRT_MACHINE(ms);
+ MachineClass *mc = MACHINE_GET_CLASS(vms);

 if (ms->possible_cpus) {
 assert(ms->possible_cpus->len == max_cpus);
@@ -XXX,XX +XXX,XX @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
 ms->possible_cpus->cpus[n].type = ms->cpu_type;
 ms->possible_cpus->cpus[n].arch_id =
 virt_cpu_mp_affinity(vms, n);
+
+ assert(!mc->smp_props.dies_supported);
+ ms->possible_cpus->cpus[n].props.has_socket_id = true;
+ ms->possible_cpus->cpus[n].props.socket_id =
+ n / (ms->smp.clusters * ms->smp.cores * ms->smp.threads);
+ ms->possible_cpus->cpus[n].props.has_cluster_id = true;
+ ms->possible_cpus->cpus[n].props.cluster_id =
+ (n / (ms->smp.cores * ms->smp.threads)) % ms->smp.clusters;
+ ms->possible_cpus->cpus[n].props.has_core_id = true;
+ ms->possible_cpus->cpus[n].props.core_id =
+ (n / ms->smp.threads) % ms->smp.cores;
 ms->possible_cpus->cpus[n].props.has_thread_id = true;
- ms->possible_cpus->cpus[n].props.thread_id = n;
+ ms->possible_cpus->cpus[n].props.thread_id =
+ n % ms->smp.threads;
 }
 return ms->possible_cpus;
 }
--
2.25.1

From: Fabiano Rosas <farosas@suse.de>

Now that the cortex-a15 is under CONFIG_TCG, use the 'max' cpu as the
default CPU for a KVM-only build.

Note that we cannot use 'host' here because the qtests can run without
any other accelerator (than qtest) and 'host' depends on KVM being
enabled.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/virt.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
 mc->minimum_page_bits = 12;
 mc->possible_cpu_arch_ids = virt_possible_cpu_arch_ids;
 mc->cpu_index_to_instance_props = virt_cpu_index_to_props;
+#ifdef CONFIG_TCG
 mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a15");
+#else
+ mc->default_cpu_type = ARM_CPU_TYPE_NAME("max");
+#endif
 mc->get_default_cpu_node_id = virt_get_default_cpu_node_id;
 mc->kvm_type = virt_kvm_type;
 assert(!mc->get_hotplug_handler);
--
2.34.1
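The socket/cluster/core/thread IDs assigned above are plain integer arithmetic on the linear CPU index. A stand-alone sketch of the same derivation (the -smp shape used in main() is just an example):

    #include <stdio.h>

    struct topo_ids {
        unsigned socket_id, cluster_id, core_id, thread_id;
    };

    /* Same formulas as the hw/arm/virt.c hunk above. */
    static struct topo_ids topo_from_index(unsigned n, unsigned clusters,
                                           unsigned cores, unsigned threads)
    {
        struct topo_ids id;

        id.socket_id  = n / (clusters * cores * threads);
        id.cluster_id = (n / (cores * threads)) % clusters;
        id.core_id    = (n / threads) % cores;
        id.thread_id  = n % threads;
        return id;
    }

    int main(void)
    {
        /* e.g. -smp 16,sockets=2,clusters=2,cores=2,threads=2 */
        for (unsigned n = 0; n < 16; n++) {
            struct topo_ids id = topo_from_index(n, 2, 2, 2);
            printf("cpu%-2u -> socket %u cluster %u core %u thread %u\n",
                   n, id.socket_id, id.cluster_id, id.core_id, id.thread_id);
        }
        return 0;
    }

With that shape, cpu7 lands on socket 0, cluster 1, core 1, thread 1, and cpu8 starts socket 1, which is what the default NUMA assignment keys off.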
From: Richard Henderson <richard.henderson@linaro.org>

This extension concerns changes to the External Debug interface,
with Secure and Non-secure access to the debug registers, and all
of it is outside the scope of QEMU. Indicating support for this
is mandatory with FEAT_SEL2, which we do implement.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/emulation.rst | 1 +
target/arm/cpu64.c | 2 +-
target/arm/cpu_tcg.c | 4 ++--
3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
 - FEAT_Debugv8p2 (Debug changes for v8.2)
+- FEAT_Debugv8p4 (Debug changes for v8.4)
 - FEAT_DotProd (Advanced SIMD dot product instructions)
 - FEAT_FCMA (Floating-point complex number instructions)
 - FEAT_FHM (Floating-point half-precision multiplication instructions)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 cpu->isar.id_aa64zfr0 = t;

 t = cpu->isar.id_aa64dfr0;
- t = FIELD_DP64(t, ID_AA64DFR0, DEBUGVER, 8); /* FEAT_Debugv8p2 */
+ t = FIELD_DP64(t, ID_AA64DFR0, DEBUGVER, 9); /* FEAT_Debugv8p4 */
 t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* FEAT_PMUv3p4 */
 cpu->isar.id_aa64dfr0 = t;

diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
 cpu->isar.id_pfr2 = t;

 t = cpu->isar.id_dfr0;
- t = FIELD_DP32(t, ID_DFR0, COPDBG, 8); /* FEAT_Debugv8p2 */
- t = FIELD_DP32(t, ID_DFR0, COPSDBG, 8); /* FEAT_Debugv8p2 */
+ t = FIELD_DP32(t, ID_DFR0, COPDBG, 9); /* FEAT_Debugv8p4 */
+ t = FIELD_DP32(t, ID_DFR0, COPSDBG, 9); /* FEAT_Debugv8p4 */
 t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* FEAT_PMUv3p4 */
 cpu->isar.id_dfr0 = t;
 }
--
2.25.1

From: Fabiano Rosas <farosas@suse.de>

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
tests/qtest/arm-cpu-features.c | 28 ++++++++++++++++++----------
1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/tests/qtest/arm-cpu-features.c b/tests/qtest/arm-cpu-features.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/arm-cpu-features.c
+++ b/tests/qtest/arm-cpu-features.c
@@ -XXX,XX +XXX,XX @@
 #define SVE_MAX_VQ 16

 #define MACHINE "-machine virt,gic-version=max -accel tcg "
-#define MACHINE_KVM "-machine virt,gic-version=max -accel kvm -accel tcg "
+#define MACHINE_KVM "-machine virt,gic-version=max -accel kvm "
 #define QUERY_HEAD "{ 'execute': 'query-cpu-model-expansion', " \
 " 'arguments': { 'type': 'full', "
 #define QUERY_TAIL "}}"
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
 {
 g_test_init(&argc, &argv, NULL);

- qtest_add_data_func("/arm/query-cpu-model-expansion",
- NULL, test_query_cpu_model_expansion);
+ if (qtest_has_accel("tcg")) {
+ qtest_add_data_func("/arm/query-cpu-model-expansion",
+ NULL, test_query_cpu_model_expansion);
+ }
+
+ if (!g_str_equal(qtest_get_arch(), "aarch64")) {
+ goto out;
+ }

 /*
 * For now we only run KVM specific tests with AArch64 QEMU in
 * order avoid attempting to run an AArch32 QEMU with KVM on
 * AArch64 hosts. That won't work and isn't easy to detect.
 */
- if (g_str_equal(qtest_get_arch(), "aarch64") && qtest_has_accel("kvm")) {
+ if (qtest_has_accel("kvm")) {
 /*
 * This tests target the 'host' CPU type, so register it only if
 * KVM is available.
 */
 qtest_add_data_func("/arm/kvm/query-cpu-model-expansion",
 NULL, test_query_cpu_model_expansion_kvm);
- }

- if (g_str_equal(qtest_get_arch(), "aarch64")) {
- qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-max-vq-8",
- NULL, sve_tests_sve_max_vq_8);
- qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-off",
- NULL, sve_tests_sve_off);
 qtest_add_data_func("/arm/kvm/query-cpu-model-expansion/sve-off",
 NULL, sve_tests_sve_off_kvm);
 }

+ if (qtest_has_accel("tcg")) {
+ qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-max-vq-8",
+ NULL, sve_tests_sve_max_vq_8);
+ qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-off",
+ NULL, sve_tests_sve_off);
+ }
+
+out:
 return g_test_run();
 }
--
2.34.1
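The arm-cpu-features change above boils down to one pattern: probe which accelerators this QEMU binary provides and only register the tests that need them, instead of assuming TCG is always built in. A stand-alone sketch of that pattern using plain glib GTest — fake_has_accel() is a stand-in for libqtest's qtest_has_accel(), which only exists inside QEMU's test harness:

    /* build: gcc demo.c $(pkg-config --cflags --libs glib-2.0) */
    #include <glib.h>
    #include <stdbool.h>
    #include <string.h>

    /* Stand-in probe: the real test calls qtest_has_accel("tcg")/("kvm");
     * here we fake it from an environment variable. */
    static bool fake_has_accel(const char *name)
    {
        const char *v = g_getenv("FAKE_ACCELS");
        return v && strstr(v, name);
    }

    static void test_generic(void)  { g_assert_true(true); }
    static void test_tcg_only(void) { g_assert_true(true); }
    static void test_kvm_only(void) { g_assert_true(true); }

    int main(int argc, char **argv)
    {
        g_test_init(&argc, &argv, NULL);

        /* Always-available tests first, then accelerator-gated blocks,
         * mirroring the structure of the patched main() above. */
        g_test_add_func("/demo/generic", test_generic);

        if (fake_has_accel("tcg")) {
            g_test_add_func("/demo/tcg", test_tcg_only);
        }
        if (fake_has_accel("kvm")) {
            g_test_add_func("/demo/kvm", test_kvm_only);
        }

        return g_test_run();
    }

Run it as FAKE_ACCELS=tcg,kvm ./demo to see all three tests registered, or with the variable unset to get only the generic one.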
From: Richard Henderson <richard.henderson@linaro.org>

The only portion of FEAT_Debugv8p2 that is relevant to QEMU
is CONTEXTIDR_EL2, which is also conditionally implemented
with FEAT_VHE. The rest of the debug extension concerns the
External debug interface, which is outside the scope of QEMU.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/emulation.rst | 1 +
target/arm/cpu.c | 1 +
target/arm/cpu64.c | 1 +
target/arm/cpu_tcg.c | 2 ++
4 files changed, 5 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_BTI (Branch Target Identification)
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
+- FEAT_Debugv8p2 (Debug changes for v8.2)
 - FEAT_DotProd (Advanced SIMD dot product instructions)
 - FEAT_FCMA (Floating-point complex number instructions)
 - FEAT_FHM (Floating-point half-precision multiplication instructions)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
 * feature registers as well.
 */
 cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1, ID_PFR1, SECURITY, 0);
+ cpu->isar.id_dfr0 = FIELD_DP32(cpu->isar.id_dfr0, ID_DFR0, COPSDBG, 0);
 cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
 ID_AA64PFR0, EL3, 0);
 }
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 cpu->isar.id_aa64zfr0 = t;

 t = cpu->isar.id_aa64dfr0;
+ t = FIELD_DP64(t, ID_AA64DFR0, DEBUGVER, 8); /* FEAT_Debugv8p2 */
 t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* FEAT_PMUv3p4 */
 cpu->isar.id_aa64dfr0 = t;

diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
 cpu->isar.id_pfr2 = t;

 t = cpu->isar.id_dfr0;
+ t = FIELD_DP32(t, ID_DFR0, COPDBG, 8); /* FEAT_Debugv8p2 */
+ t = FIELD_DP32(t, ID_DFR0, COPSDBG, 8); /* FEAT_Debugv8p2 */
 t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* FEAT_PMUv3p4 */
 cpu->isar.id_dfr0 = t;
 }
--
2.25.1

From: Fabiano Rosas <farosas@suse.de>

These tests set -accel tcg, so restrict them to when TCG is present.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
tests/qtest/meson.build | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -XXX,XX +XXX,XX @@ qtests_arm = \
 # TODO: once aarch64 TCG is fixed on ARM 32 bit host, make bios-tables-test unconditional
 qtests_aarch64 = \
 (cpu != 'arm' and unpack_edk2_blobs ? ['bios-tables-test'] : []) + \
- (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-test'] : []) + \
- (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-swtpm-test'] : []) + \
+ (config_all.has_key('CONFIG_TCG') and config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? \
+ ['tpm-tis-device-test', 'tpm-tis-device-swtpm-test'] : []) + \
 (config_all_devices.has_key('CONFIG_XLNX_ZYNQMP_ARM') ? ['xlnx-can-test', 'fuzz-xlnx-dp-test'] : []) + \
 (config_all_devices.has_key('CONFIG_RASPI') ? ['bcm2835-dma-test'] : []) + \
 ['arm-cpu-features',
--
2.34.1