Hi; here's the latest target-arm queue. Mostly this is refactoring
and cleanup type patches.

thanks
-- PMM

The following changes since commit c60be6e3e38cb36dc66129e757ec4b34152232be:

  Merge tag 'pull-sp-20231025' of https://gitlab.com/rth7680/qemu into staging (2023-10-27 09:43:53 +0900)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20231027

for you to fetch changes up to df93de987f423a0ed918c425f5dbd9a25d3c6229:

  hw/net/cadence_gem: enforce 32 bits variable size for CRC (2023-10-27 15:27:06 +0100)

----------------------------------------------------------------
target-arm queue:
 * Correct minor errors in Cortex-A710 definition
 * Implement Neoverse N2 CPU model
 * Refactor feature test functions out into separate header
 * Fix syndrome for FGT traps on ERET
 * Remove 'hw/arm/boot.h' includes from various header files
 * pxa2xx: Refactoring/cleanup
 * Avoid using 'first_cpu' when first ARM CPU is reachable
 * misc/led: LED state is set opposite of what is expected
 * hw/net/cadence_gem: clean up to use FIELD macros
 * hw/net/cadence_gem: perform PHY access on write only
 * hw/net/cadence_gem: enforce 32 bits variable size for CRC

----------------------------------------------------------------
Glenn Miles (1):
      misc/led: LED state is set opposite of what is expected

Luc Michel (11):
      hw/net/cadence_gem: use REG32 macro for register definitions
      hw/net/cadence_gem: use FIELD for screening registers
      hw/net/cadence_gem: use FIELD to describe NWCTRL register fields
      hw/net/cadence_gem: use FIELD to describe NWCFG register fields
      hw/net/cadence_gem: use FIELD to describe DMACFG register fields
      hw/net/cadence_gem: use FIELD to describe [TX|RX]STATUS register fields
      hw/net/cadence_gem: use FIELD to describe IRQ register fields
      hw/net/cadence_gem: use FIELD to describe DESCONF6 register fields
      hw/net/cadence_gem: use FIELD to describe PHYMNTNC register fields
      hw/net/cadence_gem: perform PHY access on write only
      hw/net/cadence_gem: enforce 32 bits variable size for CRC

Peter Maydell (9):
      target/arm: Correct minor errors in Cortex-A710 definition
      target/arm: Implement Neoverse N2 CPU model
      target/arm: Move feature test functions to their own header
      target/arm: Move ID_AA64MMFR1 and ID_AA64MMFR2 tests together
      target/arm: Move ID_AA64MMFR0 tests up to before MMFR1 and MMFR2
      target/arm: Move ID_AA64ISAR* test functions together
      target/arm: Move ID_AA64PFR* tests together
      target/arm: Move ID_AA64DFR* feature tests together
      target/arm: Fix syndrome for FGT traps on ERET

Philippe Mathieu-Daudé (20):
      hw/arm/allwinner-a10: Remove 'hw/arm/boot.h' from header
      hw/arm/allwinner-h3: Remove 'hw/arm/boot.h' from header
      hw/arm/allwinner-r40: Remove 'hw/arm/boot.h' from header
      hw/arm/fsl-imx25: Remove 'hw/arm/boot.h' from header
      hw/arm/fsl-imx31: Remove 'hw/arm/boot.h' from header
      hw/arm/fsl-imx6: Remove 'hw/arm/boot.h' from header
      hw/arm/fsl-imx6ul: Remove 'hw/arm/boot.h' from header
      hw/arm/fsl-imx7: Remove 'hw/arm/boot.h' from header
      hw/arm/xlnx-versal: Remove 'hw/arm/boot.h' from header
      hw/arm/xlnx-zynqmp: Remove 'hw/arm/boot.h' from header
      hw/sd/pxa2xx: Realize sysbus device before accessing it
      hw/sd/pxa2xx: Do not open-code sysbus_create_simple()
      hw/pcmcia/pxa2xx: Realize sysbus device before accessing it
      hw/pcmcia/pxa2xx: Do not open-code sysbus_create_simple()
      hw/pcmcia/pxa2xx: Inline pxa2xx_pcmcia_init()
      hw/intc/pxa2xx: Convert to Resettable interface
      hw/intc/pxa2xx: Pass CPU reference using QOM link property
      hw/intc/pxa2xx: Factor pxa2xx_pic_realize() out of pxa2xx_pic_init()
      hw/arm/pxa2xx: Realize PXA2XX_I2C device before accessing it
      hw/arm: Avoid using 'first_cpu' when first ARM CPU is reachable

 docs/system/arm/virt.rst          |   1 +
 bsd-user/arm/target_arch.h        |   1 +
 include/hw/arm/allwinner-a10.h    |   1 -
 include/hw/arm/allwinner-h3.h     |   1 -
 include/hw/arm/allwinner-r40.h    |   1 -
 include/hw/arm/fsl-imx25.h        |   1 -
 include/hw/arm/fsl-imx31.h        |   1 -
 include/hw/arm/fsl-imx6.h         |   1 -
 include/hw/arm/fsl-imx6ul.h       |   1 -
 include/hw/arm/fsl-imx7.h         |   1 -
 include/hw/arm/pxa.h              |   2 -
 include/hw/arm/xlnx-versal.h      |   1 -
 include/hw/arm/xlnx-zynqmp.h      |   1 -
 linux-user/aarch64/target_prctl.h |   2 +
 target/arm/cpu-features.h         | 994 ++++++++++++++++++++++++++++++
 target/arm/cpu.h                  | 971 -----------------------------
 target/arm/internals.h            |   1 +
 target/arm/tcg/translate.h        |   2 +-
 hw/arm/armv7m.c                   |   1 +
 hw/arm/bananapi_m2u.c             |   3 +-
 hw/arm/cubieboard.c               |   1 +
 hw/arm/exynos4_boards.c           |   7 +-
 hw/arm/imx25_pdk.c                |   1 +
 hw/arm/kzm.c                      |   1 +
 hw/arm/mcimx6ul-evk.c             |   1 +
 hw/arm/mcimx7d-sabre.c            |   1 +
 hw/arm/orangepi.c                 |   3 +-
 hw/arm/pxa2xx.c                   |  17 +-
 hw/arm/pxa2xx_pic.c               |  38 +-
 hw/arm/realview.c                 |   2 +-
 hw/arm/sabrelite.c                |   1 +
 hw/arm/sbsa-ref.c                 |   1 +
 hw/arm/virt.c                     |   1 +
 hw/arm/xilinx_zynq.c              |   2 +-
 hw/arm/xlnx-versal-virt.c         |   1 +
 hw/arm/xlnx-zcu102.c              |   1 +
 hw/intc/armv7m_nvic.c             |   1 +
 hw/misc/led.c                     |   2 +-
 hw/net/cadence_gem.c              | 884 ++++++++++++++++-----------
 hw/pcmcia/pxa2xx.c                |  15 -
 hw/sd/pxa2xx_mmci.c               |   7 +-
 linux-user/aarch64/cpu_loop.c     |   1 +
 linux-user/aarch64/signal.c       |   1 +
 linux-user/arm/signal.c           |   1 +
 linux-user/elfload.c              |   4 +
 linux-user/mmap.c                 |   4 +
 target/arm/arch_dump.c            |   1 +
 target/arm/cpu.c                  |   1 +
 target/arm/cpu64.c                |   1 +
 target/arm/debug_helper.c         |   1 +
 target/arm/gdbstub.c              |   1 +
 target/arm/helper.c               |   1 +
 target/arm/kvm64.c                |   1 +
 target/arm/machine.c              |   1 +
 target/arm/ptw.c                  |   1 +
 target/arm/tcg/cpu64.c            | 115 ++++-
 target/arm/tcg/hflags.c           |   1 +
 target/arm/tcg/m_helper.c         |   1 +
 target/arm/tcg/op_helper.c        |   1 +
 target/arm/tcg/pauth_helper.c     |   1 +
 target/arm/tcg/tlb_helper.c       |   1 +
 target/arm/tcg/translate-a64.c    |   4 +-
 target/arm/vfp_helper.c           |   1 +
 63 files changed, 1702 insertions(+), 1419 deletions(-)
 create mode 100644 target/arm/cpu-features.h
Correct a couple of minor errors in the Cortex-A710 definition:
 * ID_AA64DFR0_EL1.DebugVer is 9 (indicating Armv8.4 debug architecture)
 * ID_AA64ISAR1_EL1.APA is 5 (indicating more PAuth support)
 * there is an IMPDEF CPUCFR_EL1, like that on the Neoverse-N1

Fixes: e3d45c0a89576 ("target/arm: Implement cortex-a710")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20230915185453.1871167-2-peter.maydell@linaro.org
---
 target/arm/tcg/cpu64.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/cpu64.c
+++ b/target/arm/tcg/cpu64.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortex_a710_cp_reginfo[] = {
     { .name = "CPUPFR_EL3", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 6, .crn = 15, .crm = 8, .opc2 = 6,
       .access = PL3_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    /*
+     * Report CPUCFR_EL1.SCU as 1, as we do not implement the DSU
+     * (and in particular its system registers).
+     */
+    { .name = "CPUCFR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 15, .crm = 0, .opc2 = 0,
+      .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 4 },

     /*
      * Stub RAMINDEX, as we don't actually implement caches, BTB,
@@ -XXX,XX +XXX,XX @@ static void aarch64_a710_initfn(Object *obj)
     cpu->isar.id_aa64pfr0 = 0x1201111120111112ull; /* GIC filled in later */
     cpu->isar.id_aa64pfr1 = 0x0000000000000221ull;
     cpu->isar.id_aa64zfr0 = 0x0000110100110021ull; /* with Crypto */
-    cpu->isar.id_aa64dfr0 = 0x000011f010305611ull;
+    cpu->isar.id_aa64dfr0 = 0x000011f010305619ull;
     cpu->isar.id_aa64dfr1 = 0;
     cpu->id_aa64afr0 = 0;
     cpu->id_aa64afr1 = 0;
     cpu->isar.id_aa64isar0 = 0x0221111110212120ull; /* with Crypto */
-    cpu->isar.id_aa64isar1 = 0x0010111101211032ull;
+    cpu->isar.id_aa64isar1 = 0x0010111101211052ull;
     cpu->isar.id_aa64mmfr0 = 0x0000022200101122ull;
     cpu->isar.id_aa64mmfr1 = 0x0000000010212122ull;
     cpu->isar.id_aa64mmfr2 = 0x1221011110101011ull;
-- 
2.34.1
Implement a model of the Neoverse N2 CPU. This is an Armv9.0-A
processor very similar to the Cortex-A710. The differences are:
 * no FEAT_EVT
 * FEAT_DGH (data gathering hint)
 * FEAT_NV (not yet implemented in QEMU)
 * Statistical Profiling Extension (not implemented in QEMU)
 * 48 bit physical address range, not 40
 * CTR_EL0.DIC = 1 (no explicit icache cleaning needed)
 * PMCR_EL0.N = 6 (always 6 PMU counters, not 20)

Because it has 48-bit physical address support, we can use
this CPU in the sbsa-ref board as well as the virt board.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20230915185453.1871167-3-peter.maydell@linaro.org
---
 docs/system/arm/virt.rst |   1 +
 hw/arm/sbsa-ref.c        |   1 +
 hw/arm/virt.c            |   1 +
 target/arm/tcg/cpu64.c   | 103 +++++++++++++++++++++++++++++++++++++++
 4 files changed, 106 insertions(+)

diff --git a/docs/system/arm/virt.rst b/docs/system/arm/virt.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/virt.rst
+++ b/docs/system/arm/virt.rst
@@ -XXX,XX +XXX,XX @@ Supported guest CPU types:
 - ``host`` (with KVM only)
 - ``neoverse-n1`` (64-bit)
 - ``neoverse-v1`` (64-bit)
+- ``neoverse-n2`` (64-bit)
 - ``max`` (same as ``host`` for KVM; best possible emulation with TCG)

 Note that the default is ``cortex-a15``, so for an AArch64 guest you must
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static const char * const valid_cpus[] = {
     ARM_CPU_TYPE_NAME("cortex-a72"),
     ARM_CPU_TYPE_NAME("neoverse-n1"),
     ARM_CPU_TYPE_NAME("neoverse-v1"),
+    ARM_CPU_TYPE_NAME("neoverse-n2"),
     ARM_CPU_TYPE_NAME("max"),
 };

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static const char *valid_cpus[] = {
     ARM_CPU_TYPE_NAME("a64fx"),
     ARM_CPU_TYPE_NAME("neoverse-n1"),
     ARM_CPU_TYPE_NAME("neoverse-v1"),
+    ARM_CPU_TYPE_NAME("neoverse-n2"),
 #endif
     ARM_CPU_TYPE_NAME("cortex-a53"),
     ARM_CPU_TYPE_NAME("cortex-a57"),
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/cpu64.c
+++ b/target/arm/tcg/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a710_initfn(Object *obj)
     aarch64_add_sve_properties(obj);
 }

+/* Extra IMPDEF regs in the N2 beyond those in the A710 */
+static const ARMCPRegInfo neoverse_n2_cp_reginfo[] = {
+    { .name = "CPURNDBR_EL3", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 6, .crn = 15, .crm = 3, .opc2 = 0,
+      .access = PL3_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "CPURNDPEID_EL3", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 6, .crn = 15, .crm = 3, .opc2 = 1,
+      .access = PL3_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+};
+
+static void aarch64_neoverse_n2_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    cpu->dtb_compatible = "arm,neoverse-n2";
+    set_feature(&cpu->env, ARM_FEATURE_V8);
+    set_feature(&cpu->env, ARM_FEATURE_NEON);
+    set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+    set_feature(&cpu->env, ARM_FEATURE_AARCH64);
+    set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+    set_feature(&cpu->env, ARM_FEATURE_EL2);
+    set_feature(&cpu->env, ARM_FEATURE_EL3);
+    set_feature(&cpu->env, ARM_FEATURE_PMU);
+
+    /* Ordered by Section B.5: AArch64 ID registers */
+    cpu->midr = 0x410FD493; /* r0p3 */
+    cpu->revidr = 0;
+    cpu->isar.id_pfr0 = 0x21110131;
+    cpu->isar.id_pfr1 = 0x00010000; /* GIC filled in later */
+    cpu->isar.id_dfr0 = 0x16011099;
+    cpu->id_afr0 = 0;
+    cpu->isar.id_mmfr0 = 0x10201105;
+    cpu->isar.id_mmfr1 = 0x40000000;
+    cpu->isar.id_mmfr2 = 0x01260000;
+    cpu->isar.id_mmfr3 = 0x02122211;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00010142;
+    cpu->isar.id_isar5 = 0x11011121; /* with Crypto */
+    cpu->isar.id_mmfr4 = 0x01021110;
+    cpu->isar.id_isar6 = 0x01111111;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x13211111;
+    cpu->isar.mvfr2 = 0x00000043;
+    cpu->isar.id_pfr2 = 0x00000011;
+    cpu->isar.id_aa64pfr0 = 0x1201111120111112ull; /* GIC filled in later */
+    cpu->isar.id_aa64pfr1 = 0x0000000000000221ull;
+    cpu->isar.id_aa64zfr0 = 0x0000110100110021ull; /* with Crypto */
+    cpu->isar.id_aa64dfr0 = 0x000011f210305619ull;
+    cpu->isar.id_aa64dfr1 = 0;
+    cpu->id_aa64afr0 = 0;
+    cpu->id_aa64afr1 = 0;
+    cpu->isar.id_aa64isar0 = 0x0221111110212120ull; /* with Crypto */
+    cpu->isar.id_aa64isar1 = 0x0011111101211052ull;
+    cpu->isar.id_aa64mmfr0 = 0x0000022200101125ull;
+    cpu->isar.id_aa64mmfr1 = 0x0000000010212122ull;
+    cpu->isar.id_aa64mmfr2 = 0x1221011112101011ull;
+    cpu->clidr = 0x0000001482000023ull;
+    cpu->gm_blocksize = 4;
+    cpu->ctr = 0x00000004b444c004ull;
+    cpu->dcz_blocksize = 4;
+    /* TODO FEAT_MPAM: mpamidr_el1 = 0x0000_0001_001e_01ff */
+
+    /* Section B.7.2: PMCR_EL0 */
+    cpu->isar.reset_pmcr_el0 = 0x3000; /* with 6 counters */
+
+    /* Section B.8.9: ICH_VTR_EL2 */
+    cpu->gic_num_lrs = 4;
+    cpu->gic_vpribits = 5;
+    cpu->gic_vprebits = 5;
+    cpu->gic_pribits = 5;
+
+    /* Section 14: Scalable Vector Extensions support */
+    cpu->sve_vq.supported = 1 << 0; /* 128bit */
+
+    /*
+     * The Neoverse N2 TRM does not list CCSIDR values. The layout of
+     * the caches are in text in Table 7-1, Table 8-1, and Table 9-1.
+     *
+     * L1: 4-way set associative 64-byte line size, total 64K.
+     * L2: 8-way set associative 64 byte line size, total either 512K or 1024K.
+     */
+    cpu->ccsidr[0] = make_ccsidr64(4, 64, 64 * KiB); /* L1 dcache */
+    cpu->ccsidr[1] = cpu->ccsidr[0]; /* L1 icache */
+    cpu->ccsidr[2] = make_ccsidr64(8, 64, 512 * KiB); /* L2 cache */
+
+    /* FIXME: Not documented -- copied from neoverse-v1 */
+    cpu->reset_sctlr = 0x30c50838;
+
+    /*
+     * The Neoverse N2 has all of the Cortex-A710 IMPDEF registers,
+     * and a few more RNG related ones.
+     */
+    define_arm_cp_regs(cpu, cortex_a710_cp_reginfo);
+    define_arm_cp_regs(cpu, neoverse_n2_cp_reginfo);
+
+    aarch64_add_pauth_properties(obj);
+    aarch64_add_sve_properties(obj);
+}
+
 /*
  * -cpu max: a CPU with as many features enabled as our emulation supports.
  * The version of '-cpu max' for qemu-system-arm is defined in cpu32.c;
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo aarch64_cpus[] = {
     { .name = "a64fx", .initfn = aarch64_a64fx_initfn },
     { .name = "neoverse-n1", .initfn = aarch64_neoverse_n1_initfn },
     { .name = "neoverse-v1", .initfn = aarch64_neoverse_v1_initfn },
+    { .name = "neoverse-n2", .initfn = aarch64_neoverse_n2_initfn },
 };

 static void aarch64_cpu_register_types(void)
-- 
2.34.1
The feature test functions isar_feature_*() now take up nearly
a thousand lines in target/arm/cpu.h. This header file is included
by a lot of source files, most of which don't need these functions.
Move the feature test functions to their own header file.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231024163510.2972081-2-peter.maydell@linaro.org
---
 bsd-user/arm/target_arch.h        |   1 +
 linux-user/aarch64/target_prctl.h |   2 +
 target/arm/cpu-features.h         | 994 ++++++++++++++++++++++++++++++
 target/arm/cpu.h                  | 971 -----------------------------
 target/arm/internals.h            |   1 +
 target/arm/tcg/translate.h        |   2 +-
 hw/arm/armv7m.c                   |   1 +
 hw/intc/armv7m_nvic.c             |   1 +
 linux-user/aarch64/cpu_loop.c     |   1 +
 linux-user/aarch64/signal.c       |   1 +
 linux-user/arm/signal.c           |   1 +
 linux-user/elfload.c              |   4 +
 linux-user/mmap.c                 |   4 +
 target/arm/arch_dump.c            |   1 +
 target/arm/cpu.c                  |   1 +
 target/arm/cpu64.c                |   1 +
 target/arm/debug_helper.c         |   1 +
 target/arm/gdbstub.c              |   1 +
 target/arm/helper.c               |   1 +
 target/arm/kvm64.c                |   1 +
 target/arm/machine.c              |   1 +
 target/arm/ptw.c                  |   1 +
 target/arm/tcg/cpu64.c            |   1 +
 target/arm/tcg/hflags.c           |   1 +
 target/arm/tcg/m_helper.c         |   1 +
 target/arm/tcg/op_helper.c        |   1 +
 target/arm/tcg/pauth_helper.c     |   1 +
 target/arm/tcg/tlb_helper.c       |   1 +
 target/arm/vfp_helper.c           |   1 +
 29 files changed, 1028 insertions(+), 972 deletions(-)
 create mode 100644 target/arm/cpu-features.h

diff --git a/bsd-user/arm/target_arch.h b/bsd-user/arm/target_arch.h
index XXXXXXX..XXXXXXX 100644
--- a/bsd-user/arm/target_arch.h
+++ b/bsd-user/arm/target_arch.h
@@ -XXX,XX +XXX,XX @@
 #define TARGET_ARCH_H

 #include "qemu.h"
+#include "target/arm/cpu-features.h"

 void target_cpu_set_tls(CPUARMState *env, target_ulong newtls);
 target_ulong target_cpu_get_tls(CPUARMState *env);
diff --git a/linux-user/aarch64/target_prctl.h b/linux-user/aarch64/target_prctl.h
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/aarch64/target_prctl.h
+++ b/linux-user/aarch64/target_prctl.h
@@ -XXX,XX +XXX,XX @@
 #ifndef AARCH64_TARGET_PRCTL_H
 #define AARCH64_TARGET_PRCTL_H

+#include "target/arm/cpu-features.h"
+
 static abi_long do_prctl_sve_get_vl(CPUArchState *env)
 {
     ARMCPU *cpu = env_archcpu(env);
diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/cpu-features.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * QEMU Arm CPU -- feature test functions
+ *
+ * Copyright (c) 2023 Linaro Ltd
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef TARGET_ARM_FEATURES_H
+#define TARGET_ARM_FEATURES_H
+
+/*
+ * Naming convention for isar_feature functions:
+ * Functions which test 32-bit ID registers should have _aa32_ in
+ * their name. Functions which test 64-bit ID registers should have
+ * _aa64_ in their name. These must only be used in code where we
+ * know for certain that the CPU has AArch32 or AArch64 respectively
+ * or where the correct answer for a CPU which doesn't implement that
+ * CPU state is "false" (eg when generating A32 or A64 code, if adding
+ * system registers that are specific to that CPU state, for "should
+ * we let this system register bit be set" tests where the 32-bit
+ * flavour of the register doesn't have the bit, and so on).
+ * Functions which simply ask "does this feature exist at all" have
+ * _any_ in their name, and always return the logical OR of the _aa64_
+ * and the _aa32_ function.
+ */
+
+/*
+ * 32-bit feature tests via id registers.
+ */
+static inline bool isar_feature_aa32_thumb_div(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) != 0;
+}
+
+static inline bool isar_feature_aa32_arm_div(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
+}
+
+static inline bool isar_feature_aa32_lob(const ARMISARegisters *id)
+{
+    /* (M-profile) low-overhead loops and branch future */
+    return FIELD_EX32(id->id_isar0, ID_ISAR0, CMPBRANCH) >= 3;
+}
+
+static inline bool isar_feature_aa32_jazelle(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
+}
+
+static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
+}
+
+static inline bool isar_feature_aa32_pmull(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) > 1;
+}
+
+static inline bool isar_feature_aa32_sha1(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA1) != 0;
+}
+
+static inline bool isar_feature_aa32_sha2(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA2) != 0;
+}
+
+static inline bool isar_feature_aa32_crc32(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, CRC32) != 0;
+}
+
+static inline bool isar_feature_aa32_rdm(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, RDM) != 0;
+}
+
+static inline bool isar_feature_aa32_vcma(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, VCMA) != 0;
+}
+
+static inline bool isar_feature_aa32_jscvt(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar6, ID_ISAR6, JSCVT) != 0;
+}
+
+static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
+}
+
+static inline bool isar_feature_aa32_fhm(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar6, ID_ISAR6, FHM) != 0;
+}
+
+static inline bool isar_feature_aa32_sb(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar6, ID_ISAR6, SB) != 0;
+}
+
+static inline bool isar_feature_aa32_predinv(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar6, ID_ISAR6, SPECRES) != 0;
+}
+
+static inline bool isar_feature_aa32_bf16(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar6, ID_ISAR6, BF16) != 0;
+}
+
+static inline bool isar_feature_aa32_i8mm(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar6, ID_ISAR6, I8MM) != 0;
+}
+
+static inline bool isar_feature_aa32_ras(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_pfr0, ID_PFR0, RAS) != 0;
+}
+
+static inline bool isar_feature_aa32_mprofile(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_pfr1, ID_PFR1, MPROGMOD) != 0;
+}
+
+static inline bool isar_feature_aa32_m_sec_state(const ARMISARegisters *id)
+{
+    /*
+     * Return true if M-profile state handling insns
+     * (VSCCLRM, CLRM, FPCTX access insns) are implemented
+     */
+    return FIELD_EX32(id->id_pfr1, ID_PFR1, SECURITY) >= 3;
+}
+
+static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
+{
+    /* Sadly this is encoded differently for A-profile and M-profile */
+    if (isar_feature_aa32_mprofile(id)) {
+        return FIELD_EX32(id->mvfr1, MVFR1, FP16) > 0;
+    } else {
+        return FIELD_EX32(id->mvfr1, MVFR1, FPHP) >= 3;
+    }
+}
+
+static inline bool isar_feature_aa32_mve(const ARMISARegisters *id)
+{
+    /*
+     * Return true if MVE is supported (either integer or floating point).
+     * We must check for M-profile as the MVFR1 field means something
+     * else for A-profile.
+     */
+    return isar_feature_aa32_mprofile(id) &&
+        FIELD_EX32(id->mvfr1, MVFR1, MVE) > 0;
+}
+
+static inline bool isar_feature_aa32_mve_fp(const ARMISARegisters *id)
+{
+    /*
+     * Return true if MVE is supported (either integer or floating point).
+     * We must check for M-profile as the MVFR1 field means something
+     * else for A-profile.
+     */
+    return isar_feature_aa32_mprofile(id) &&
+        FIELD_EX32(id->mvfr1, MVFR1, MVE) >= 2;
+}
+
+static inline bool isar_feature_aa32_vfp_simd(const ARMISARegisters *id)
+{
+    /*
+     * Return true if either VFP or SIMD is implemented.
+     * In this case, a minimum of VFP w/ D0-D15.
+     */
+    return FIELD_EX32(id->mvfr0, MVFR0, SIMDREG) > 0;
+}
+
+static inline bool isar_feature_aa32_simd_r32(const ARMISARegisters *id)
+{
+    /* Return true if D16-D31 are implemented */
+    return FIELD_EX32(id->mvfr0, MVFR0, SIMDREG) >= 2;
+}
+
+static inline bool isar_feature_aa32_fpshvec(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->mvfr0, MVFR0, FPSHVEC) > 0;
+}
+
+static inline bool isar_feature_aa32_fpsp_v2(const ARMISARegisters *id)
278
+{
279
+ /* Return true if CPU supports single precision floating point, VFPv2 */
280
+ return FIELD_EX32(id->mvfr0, MVFR0, FPSP) > 0;
281
+}
282
+
283
+static inline bool isar_feature_aa32_fpsp_v3(const ARMISARegisters *id)
284
+{
285
+ /* Return true if CPU supports single precision floating point, VFPv3 */
286
+ return FIELD_EX32(id->mvfr0, MVFR0, FPSP) >= 2;
287
+}
288
+
289
+static inline bool isar_feature_aa32_fpdp_v2(const ARMISARegisters *id)
290
+{
291
+ /* Return true if CPU supports double precision floating point, VFPv2 */
292
+ return FIELD_EX32(id->mvfr0, MVFR0, FPDP) > 0;
293
+}
294
+
295
+static inline bool isar_feature_aa32_fpdp_v3(const ARMISARegisters *id)
296
+{
297
+ /* Return true if CPU supports double precision floating point, VFPv3 */
298
+ return FIELD_EX32(id->mvfr0, MVFR0, FPDP) >= 2;
299
+}
300
+
301
+static inline bool isar_feature_aa32_vfp(const ARMISARegisters *id)
302
+{
303
+ return isar_feature_aa32_fpsp_v2(id) || isar_feature_aa32_fpdp_v2(id);
304
+}
305
+
306
+/*
307
+ * We always set the FP and SIMD FP16 fields to indicate identical
308
+ * levels of support (assuming SIMD is implemented at all), so
309
+ * we only need one set of accessors.
310
+ */
311
+static inline bool isar_feature_aa32_fp16_spconv(const ARMISARegisters *id)
312
+{
313
+ return FIELD_EX32(id->mvfr1, MVFR1, FPHP) > 0;
314
+}
315
+
316
+static inline bool isar_feature_aa32_fp16_dpconv(const ARMISARegisters *id)
317
+{
318
+ return FIELD_EX32(id->mvfr1, MVFR1, FPHP) > 1;
319
+}
320
+
321
+/*
322
+ * Note that this ID register field covers both VFP and Neon FMAC,
323
+ * so should usually be tested in combination with some other
324
+ * check that confirms the presence of whichever of VFP or Neon is
325
+ * relevant, to avoid accidentally enabling a Neon feature on
326
+ * a VFP-no-Neon core or vice-versa.
327
+ */
328
+static inline bool isar_feature_aa32_simdfmac(const ARMISARegisters *id)
329
+{
330
+ return FIELD_EX32(id->mvfr1, MVFR1, SIMDFMAC) != 0;
331
+}
332
+
333
+static inline bool isar_feature_aa32_vsel(const ARMISARegisters *id)
334
+{
335
+ return FIELD_EX32(id->mvfr2, MVFR2, FPMISC) >= 1;
336
+}
337
+
338
+static inline bool isar_feature_aa32_vcvt_dr(const ARMISARegisters *id)
339
+{
340
+ return FIELD_EX32(id->mvfr2, MVFR2, FPMISC) >= 2;
341
+}
342
+
343
+static inline bool isar_feature_aa32_vrint(const ARMISARegisters *id)
344
+{
345
+ return FIELD_EX32(id->mvfr2, MVFR2, FPMISC) >= 3;
346
+}
347
+
348
+static inline bool isar_feature_aa32_vminmaxnm(const ARMISARegisters *id)
349
+{
350
+ return FIELD_EX32(id->mvfr2, MVFR2, FPMISC) >= 4;
351
+}
352
+
353
+static inline bool isar_feature_aa32_pxn(const ARMISARegisters *id)
354
+{
355
+ return FIELD_EX32(id->id_mmfr0, ID_MMFR0, VMSA) >= 4;
356
+}
357
+
358
+static inline bool isar_feature_aa32_pan(const ARMISARegisters *id)
359
+{
360
+ return FIELD_EX32(id->id_mmfr3, ID_MMFR3, PAN) != 0;
361
+}
362
+
363
+static inline bool isar_feature_aa32_ats1e1(const ARMISARegisters *id)
364
+{
365
+ return FIELD_EX32(id->id_mmfr3, ID_MMFR3, PAN) >= 2;
366
+}
367
+
368
+static inline bool isar_feature_aa32_pmuv3p1(const ARMISARegisters *id)
369
+{
370
+ /* 0xf means "non-standard IMPDEF PMU" */
371
+ return FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) >= 4 &&
372
+ FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) != 0xf;
373
+}
374
+
375
+static inline bool isar_feature_aa32_pmuv3p4(const ARMISARegisters *id)
376
+{
377
+ /* 0xf means "non-standard IMPDEF PMU" */
378
+ return FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) >= 5 &&
379
+ FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) != 0xf;
380
+}
381
+
382
+static inline bool isar_feature_aa32_pmuv3p5(const ARMISARegisters *id)
383
+{
384
+ /* 0xf means "non-standard IMPDEF PMU" */
385
+ return FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) >= 6 &&
386
+ FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) != 0xf;
387
+}
388
+
389
+static inline bool isar_feature_aa32_hpd(const ARMISARegisters *id)
390
+{
391
+ return FIELD_EX32(id->id_mmfr4, ID_MMFR4, HPDS) != 0;
392
+}
393
+
394
+static inline bool isar_feature_aa32_ac2(const ARMISARegisters *id)
395
+{
396
+ return FIELD_EX32(id->id_mmfr4, ID_MMFR4, AC2) != 0;
397
+}
398
+
399
+static inline bool isar_feature_aa32_ccidx(const ARMISARegisters *id)
400
+{
401
+ return FIELD_EX32(id->id_mmfr4, ID_MMFR4, CCIDX) != 0;
402
+}
403
+
404
+static inline bool isar_feature_aa32_tts2uxn(const ARMISARegisters *id)
405
+{
406
+ return FIELD_EX32(id->id_mmfr4, ID_MMFR4, XNX) != 0;
407
+}
408
+
409
+static inline bool isar_feature_aa32_half_evt(const ARMISARegisters *id)
410
+{
411
+ return FIELD_EX32(id->id_mmfr4, ID_MMFR4, EVT) >= 1;
412
+}
413
+
414
+static inline bool isar_feature_aa32_evt(const ARMISARegisters *id)
415
+{
416
+ return FIELD_EX32(id->id_mmfr4, ID_MMFR4, EVT) >= 2;
417
+}
418
+
419
+static inline bool isar_feature_aa32_dit(const ARMISARegisters *id)
420
+{
421
+ return FIELD_EX32(id->id_pfr0, ID_PFR0, DIT) != 0;
422
+}
423
+
424
+static inline bool isar_feature_aa32_ssbs(const ARMISARegisters *id)
425
+{
426
+ return FIELD_EX32(id->id_pfr2, ID_PFR2, SSBS) != 0;
427
+}
428
+
429
+static inline bool isar_feature_aa32_debugv7p1(const ARMISARegisters *id)
430
+{
431
+ return FIELD_EX32(id->id_dfr0, ID_DFR0, COPDBG) >= 5;
432
+}
433
+
434
+static inline bool isar_feature_aa32_debugv8p2(const ARMISARegisters *id)
435
+{
436
+ return FIELD_EX32(id->id_dfr0, ID_DFR0, COPDBG) >= 8;
437
+}
438
+
439
+static inline bool isar_feature_aa32_doublelock(const ARMISARegisters *id)
440
+{
441
+ return FIELD_EX32(id->dbgdevid, DBGDEVID, DOUBLELOCK) > 0;
442
+}
443
+
444
+/*
445
+ * 64-bit feature tests via id registers.
446
+ */
447
+static inline bool isar_feature_aa64_aes(const ARMISARegisters *id)
448
+{
449
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) != 0;
450
+}
451
+
452
+static inline bool isar_feature_aa64_pmull(const ARMISARegisters *id)
453
+{
454
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) > 1;
455
+}
456
+
457
+static inline bool isar_feature_aa64_sha1(const ARMISARegisters *id)
458
+{
459
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA1) != 0;
460
+}
461
+
462
+static inline bool isar_feature_aa64_sha256(const ARMISARegisters *id)
463
+{
464
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) != 0;
465
+}
466
+
467
+static inline bool isar_feature_aa64_sha512(const ARMISARegisters *id)
468
+{
469
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) > 1;
470
+}
471
+
472
+static inline bool isar_feature_aa64_crc32(const ARMISARegisters *id)
473
+{
474
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, CRC32) != 0;
475
+}
476
+
477
+static inline bool isar_feature_aa64_atomics(const ARMISARegisters *id)
478
+{
479
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, ATOMIC) != 0;
480
+}
481
+
482
+static inline bool isar_feature_aa64_rdm(const ARMISARegisters *id)
483
+{
484
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, RDM) != 0;
485
+}
486
+
487
+static inline bool isar_feature_aa64_sha3(const ARMISARegisters *id)
488
+{
489
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA3) != 0;
490
+}
491
+
492
+static inline bool isar_feature_aa64_sm3(const ARMISARegisters *id)
493
+{
494
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM3) != 0;
495
+}
496
+
497
+static inline bool isar_feature_aa64_sm4(const ARMISARegisters *id)
498
+{
499
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM4) != 0;
500
+}
501
+
502
+static inline bool isar_feature_aa64_dp(const ARMISARegisters *id)
503
+{
504
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, DP) != 0;
505
+}
506
+
507
+static inline bool isar_feature_aa64_fhm(const ARMISARegisters *id)
508
+{
509
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, FHM) != 0;
510
+}
511
+
512
+static inline bool isar_feature_aa64_condm_4(const ARMISARegisters *id)
513
+{
514
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TS) != 0;
515
+}
516
+
517
+static inline bool isar_feature_aa64_condm_5(const ARMISARegisters *id)
518
+{
519
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TS) >= 2;
520
+}
521
+
522
+static inline bool isar_feature_aa64_rndr(const ARMISARegisters *id)
523
+{
524
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, RNDR) != 0;
525
+}
526
+
527
+static inline bool isar_feature_aa64_jscvt(const ARMISARegisters *id)
528
+{
529
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, JSCVT) != 0;
530
+}
531
+
532
+static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
533
+{
534
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
535
+}
536
+
537
+/*
538
+ * These are the values from APA/API/APA3.
539
+ * In general these must be compared '>=', per the normal Arm ARM
540
+ * treatment of fields in ID registers.
541
+ */
542
+typedef enum {
543
+ PauthFeat_None = 0,
544
+ PauthFeat_1 = 1,
545
+ PauthFeat_EPAC = 2,
546
+ PauthFeat_2 = 3,
547
+ PauthFeat_FPAC = 4,
548
+ PauthFeat_FPACCOMBINED = 5,
549
+} ARMPauthFeature;
550
+
551
+static inline ARMPauthFeature
552
+isar_feature_pauth_feature(const ARMISARegisters *id)
553
+{
554
+ /*
555
+ * Architecturally, only one of {APA,API,APA3} may be active (non-zero)
556
+ * and the other two must be zero. Thus we may avoid conditionals.
557
+ */
558
+ return (FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, APA) |
559
+ FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, API) |
560
+ FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, APA3));
561
+}
562
+
563
+static inline bool isar_feature_aa64_pauth(const ARMISARegisters *id)
564
+{
565
+ /*
566
+ * Return true if any form of pauth is enabled, as this
567
+ * predicate controls migration of the 128-bit keys.
568
+ */
569
+ return isar_feature_pauth_feature(id) != PauthFeat_None;
570
+}
571
+
572
+static inline bool isar_feature_aa64_pauth_qarma5(const ARMISARegisters *id)
573
+{
574
+ /*
575
+ * Return true if pauth is enabled with the architected QARMA5 algorithm.
576
+ * QEMU will always enable or disable both APA and GPA.
577
+ */
578
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, APA) != 0;
579
+}
580
+
581
+static inline bool isar_feature_aa64_pauth_qarma3(const ARMISARegisters *id)
582
+{
583
+ /*
584
+ * Return true if pauth is enabled with the architected QARMA3 algorithm.
585
+ * QEMU will always enable or disable both APA3 and GPA3.
586
+ */
587
+ return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, APA3) != 0;
588
+}
589
+
590
+static inline bool isar_feature_aa64_tlbirange(const ARMISARegisters *id)
591
+{
592
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) == 2;
593
+}
594
+
595
+static inline bool isar_feature_aa64_tlbios(const ARMISARegisters *id)
596
+{
597
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) != 0;
598
+}
599
+
600
+static inline bool isar_feature_aa64_sb(const ARMISARegisters *id)
601
+{
602
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, SB) != 0;
603
+}
604
+
605
+static inline bool isar_feature_aa64_predinv(const ARMISARegisters *id)
606
+{
607
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, SPECRES) != 0;
608
+}
609
+
610
+static inline bool isar_feature_aa64_frint(const ARMISARegisters *id)
611
+{
612
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FRINTTS) != 0;
613
+}
614
+
615
+static inline bool isar_feature_aa64_dcpop(const ARMISARegisters *id)
616
+{
617
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, DPB) != 0;
618
+}
619
+
620
+static inline bool isar_feature_aa64_dcpodp(const ARMISARegisters *id)
621
+{
622
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, DPB) >= 2;
623
+}
624
+
625
+static inline bool isar_feature_aa64_bf16(const ARMISARegisters *id)
626
+{
627
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, BF16) != 0;
628
+}
629
+
630
+static inline bool isar_feature_aa64_fp_simd(const ARMISARegisters *id)
631
+{
632
+ /* We always set the AdvSIMD and FP fields identically. */
633
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) != 0xf;
634
+}
635
+
636
+static inline bool isar_feature_aa64_fp16(const ARMISARegisters *id)
637
+{
638
+ /* We always set the AdvSIMD and FP fields identically wrt FP16. */
639
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
640
+}
641
+
642
+static inline bool isar_feature_aa64_aa32(const ARMISARegisters *id)
643
+{
644
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, EL0) >= 2;
645
+}
646
+
647
+static inline bool isar_feature_aa64_aa32_el1(const ARMISARegisters *id)
648
+{
649
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, EL1) >= 2;
650
+}
651
+
652
+static inline bool isar_feature_aa64_aa32_el2(const ARMISARegisters *id)
653
+{
654
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, EL2) >= 2;
655
+}
656
+
657
+static inline bool isar_feature_aa64_ras(const ARMISARegisters *id)
658
+{
659
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RAS) != 0;
660
+}
661
+
662
+static inline bool isar_feature_aa64_doublefault(const ARMISARegisters *id)
663
+{
664
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RAS) >= 2;
665
+}
666
+
667
+static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
668
+{
669
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
670
+}
671
+
672
+static inline bool isar_feature_aa64_sel2(const ARMISARegisters *id)
673
+{
674
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SEL2) != 0;
675
+}
676
+
677
+static inline bool isar_feature_aa64_rme(const ARMISARegisters *id)
678
+{
679
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RME) != 0;
680
+}
681
+
682
+static inline bool isar_feature_aa64_vh(const ARMISARegisters *id)
683
+{
684
+ return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, VH) != 0;
685
+}
686
+
687
+static inline bool isar_feature_aa64_lor(const ARMISARegisters *id)
688
+{
689
+ return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, LO) != 0;
690
+}
691
+
692
+static inline bool isar_feature_aa64_pan(const ARMISARegisters *id)
693
+{
694
+ return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) != 0;
695
+}
696
+
697
+static inline bool isar_feature_aa64_ats1e1(const ARMISARegisters *id)
698
+{
699
+ return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) >= 2;
700
+}
701
+
702
+static inline bool isar_feature_aa64_pan3(const ARMISARegisters *id)
703
+{
704
+ return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) >= 3;
705
+}
706
+
707
+static inline bool isar_feature_aa64_hcx(const ARMISARegisters *id)
708
+{
709
+ return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HCX) != 0;
710
+}
711
+
+static inline bool isar_feature_aa64_tidcp1(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, TIDCP1) != 0;
+}
+
+static inline bool isar_feature_aa64_uao(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, UAO) != 0;
+}
+
+static inline bool isar_feature_aa64_st(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, ST) != 0;
+}
+
+static inline bool isar_feature_aa64_lse2(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, AT) != 0;
+}
+
+static inline bool isar_feature_aa64_fwb(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, FWB) != 0;
+}
+
+static inline bool isar_feature_aa64_ids(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, IDS) != 0;
+}
+
+static inline bool isar_feature_aa64_half_evt(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, EVT) >= 1;
+}
+
+static inline bool isar_feature_aa64_evt(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, EVT) >= 2;
+}
+
+static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
+}
+
+static inline bool isar_feature_aa64_mte_insn_reg(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, MTE) != 0;
+}
+
+static inline bool isar_feature_aa64_mte(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, MTE) >= 2;
+}
+
+static inline bool isar_feature_aa64_sme(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SME) != 0;
+}
+
+static inline bool isar_feature_aa64_pmuv3p1(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) >= 4 &&
+        FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
+}
+
+static inline bool isar_feature_aa64_pmuv3p4(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) >= 5 &&
+        FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
+}
+
+static inline bool isar_feature_aa64_pmuv3p5(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) >= 6 &&
+        FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
+}
+
+static inline bool isar_feature_aa64_rcpc_8_3(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, LRCPC) != 0;
+}
+
+static inline bool isar_feature_aa64_rcpc_8_4(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, LRCPC) >= 2;
+}
+
+static inline bool isar_feature_aa64_i8mm(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, I8MM) != 0;
+}
+
+static inline bool isar_feature_aa64_hbc(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, BC) != 0;
+}
+
+static inline bool isar_feature_aa64_tgran4_lpa2(const ARMISARegisters *id)
+{
+    return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 1;
+}
+
+static inline bool isar_feature_aa64_tgran4_2_lpa2(const ARMISARegisters *id)
+{
+    unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4_2);
+    return t >= 3 || (t == 0 && isar_feature_aa64_tgran4_lpa2(id));
+}
+
+static inline bool isar_feature_aa64_tgran16_lpa2(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16) >= 2;
+}
+
+static inline bool isar_feature_aa64_tgran16_2_lpa2(const ARMISARegisters *id)
+{
+    unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16_2);
+    return t >= 3 || (t == 0 && isar_feature_aa64_tgran16_lpa2(id));
+}
+
+static inline bool isar_feature_aa64_tgran4(const ARMISARegisters *id)
+{
+    return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 0;
+}
+
+static inline bool isar_feature_aa64_tgran16(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16) >= 1;
+}
+
+static inline bool isar_feature_aa64_tgran64(const ARMISARegisters *id)
+{
+    return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64) >= 0;
+}
+
+static inline bool isar_feature_aa64_tgran4_2(const ARMISARegisters *id)
+{
+    unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4_2);
+    return t >= 2 || (t == 0 && isar_feature_aa64_tgran4(id));
+}
+
+static inline bool isar_feature_aa64_tgran16_2(const ARMISARegisters *id)
+{
+    unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16_2);
+    return t >= 2 || (t == 0 && isar_feature_aa64_tgran16(id));
+}
+
+static inline bool isar_feature_aa64_tgran64_2(const ARMISARegisters *id)
+{
+    unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64_2);
+    return t >= 2 || (t == 0 && isar_feature_aa64_tgran64(id));
+}
+
+static inline bool isar_feature_aa64_fgt(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, FGT) != 0;
+}
+
+static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0;
+}
+
+static inline bool isar_feature_aa64_lva(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, VARANGE) != 0;
+}
+
+static inline bool isar_feature_aa64_e0pd(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, E0PD) != 0;
+}
+
+static inline bool isar_feature_aa64_hafs(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HAFDBS) != 0;
+}
+
+static inline bool isar_feature_aa64_hdbs(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HAFDBS) >= 2;
+}
+
+static inline bool isar_feature_aa64_tts2uxn(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, XNX) != 0;
+}
+
+static inline bool isar_feature_aa64_dit(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, DIT) != 0;
+}
+
+static inline bool isar_feature_aa64_scxtnum(const ARMISARegisters *id)
+{
+    int key = FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, CSV2);
+    if (key >= 2) {
+        return true;      /* FEAT_CSV2_2 */
+    }
+    if (key == 1) {
+        key = FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, CSV2_FRAC);
+        return key >= 2;  /* FEAT_CSV2_1p2 */
+    }
+    return false;
+}
+
+static inline bool isar_feature_aa64_ssbs(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SSBS) != 0;
+}
+
+static inline bool isar_feature_aa64_debugv8p2(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, DEBUGVER) >= 8;
+}
+
+static inline bool isar_feature_aa64_sve2(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SVEVER) != 0;
+}
+
+static inline bool isar_feature_aa64_sve2_aes(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, AES) != 0;
+}
+
+static inline bool isar_feature_aa64_sve2_pmull128(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, AES) >= 2;
+}
+
+static inline bool isar_feature_aa64_sve2_bitperm(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BITPERM) != 0;
+}
+
+static inline bool isar_feature_aa64_sve_bf16(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BFLOAT16) != 0;
+}
+
+static inline bool isar_feature_aa64_sve2_sha3(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SHA3) != 0;
+}
+
+static inline bool isar_feature_aa64_sve2_sm4(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SM4) != 0;
+}
+
+static inline bool isar_feature_aa64_sve_i8mm(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, I8MM) != 0;
+}
+
+static inline bool isar_feature_aa64_sve_f32mm(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, F32MM) != 0;
+}
+
+static inline bool isar_feature_aa64_sve_f64mm(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, F64MM) != 0;
+}
+
+static inline bool isar_feature_aa64_sme_f64f64(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64smfr0, ID_AA64SMFR0, F64F64);
+}
+
+static inline bool isar_feature_aa64_sme_i16i64(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64smfr0, ID_AA64SMFR0, I16I64) == 0xf;
+}
+
+static inline bool isar_feature_aa64_sme_fa64(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64smfr0, ID_AA64SMFR0, FA64);
+}
+
+static inline bool isar_feature_aa64_doublelock(const ARMISARegisters *id)
+{
+    return FIELD_SEX64(id->id_aa64dfr0, ID_AA64DFR0, DOUBLELOCK) >= 0;
+}
+
+static inline bool isar_feature_aa64_mops(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, MOPS);
+}
+
+/*
+ * Feature tests for "does this exist in either 32-bit or 64-bit?"
+ */
+static inline bool isar_feature_any_fp16(const ARMISARegisters *id)
+{
+    return isar_feature_aa64_fp16(id) || isar_feature_aa32_fp16_arith(id);
+}
+
+static inline bool isar_feature_any_predinv(const ARMISARegisters *id)
+{
+    return isar_feature_aa64_predinv(id) || isar_feature_aa32_predinv(id);
+}
+
+static inline bool isar_feature_any_pmuv3p1(const ARMISARegisters *id)
+{
+    return isar_feature_aa64_pmuv3p1(id) || isar_feature_aa32_pmuv3p1(id);
+}
+
+static inline bool isar_feature_any_pmuv3p4(const ARMISARegisters *id)
+{
+    return isar_feature_aa64_pmuv3p4(id) || isar_feature_aa32_pmuv3p4(id);
+}
+
+static inline bool isar_feature_any_pmuv3p5(const ARMISARegisters *id)
+{
+    return isar_feature_aa64_pmuv3p5(id) || isar_feature_aa32_pmuv3p5(id);
+}
+
+static inline bool isar_feature_any_ccidx(const ARMISARegisters *id)
+{
+    return isar_feature_aa64_ccidx(id) || isar_feature_aa32_ccidx(id);
+}
+
+static inline bool isar_feature_any_tts2uxn(const ARMISARegisters *id)
+{
+    return isar_feature_aa64_tts2uxn(id) || isar_feature_aa32_tts2uxn(id);
+}
+
+static inline bool isar_feature_any_debugv8p2(const ARMISARegisters *id)
+{
+    return isar_feature_aa64_debugv8p2(id) || isar_feature_aa32_debugv8p2(id);
+}
+
+static inline bool isar_feature_any_ras(const ARMISARegisters *id)
+{
+    return isar_feature_aa64_ras(id) || isar_feature_aa32_ras(id);
+}
+
+static inline bool isar_feature_any_half_evt(const ARMISARegisters *id)
+{
+    return isar_feature_aa64_half_evt(id) || isar_feature_aa32_half_evt(id);
+}
+
+static inline bool isar_feature_any_evt(const ARMISARegisters *id)
+{
+    return isar_feature_aa64_evt(id) || isar_feature_aa32_evt(id);
+}
+
+/*
+ * Forward to the above feature tests given an ARMCPU pointer.
+ */
+#define cpu_isar_feature(name, cpu) \
+    ({ ARMCPU *cpu_ = (cpu); isar_feature_##name(&cpu_->isar); })
+
+#endif
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, SVL, 24, 4)
 FIELD(TBFLAG_A64, SME_TRAP_NONSTREAMING, 28, 1)
 FIELD(TBFLAG_A64, FGT_ERET, 29, 1)
 FIELD(TBFLAG_A64, NAA, 30, 1)
+FIELD(TBFLAG_A64, ATA0, 31, 1)

 /*
  * Helpers for using the above.
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline target_ulong cpu_untagged_addr(CPUState *cs, target_ulong x)
 }
 #endif

-/*
- * Naming convention for isar_feature functions:
- * Functions which test 32-bit ID registers should have _aa32_ in
- * their name. Functions which test 64-bit ID registers should have
- * _aa64_ in their name. These must only be used in code where we
- * know for certain that the CPU has AArch32 or AArch64 respectively
- * or where the correct answer for a CPU which doesn't implement that
- * CPU state is "false" (eg when generating A32 or A64 code, if adding
- * system registers that are specific to that CPU state, for "should
- * we let this system register bit be set" tests where the 32-bit
- * flavour of the register doesn't have the bit, and so on).
- * Functions which simply ask "does this feature exist at all" have
- * _any_ in their name, and always return the logical OR of the _aa64_
- * and the _aa32_ function.
- */
-
-/*
- * 32-bit feature tests via id registers.
- */
-static inline bool isar_feature_aa32_thumb_div(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) != 0;
-}
-
-static inline bool isar_feature_aa32_arm_div(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
-}
-
-static inline bool isar_feature_aa32_lob(const ARMISARegisters *id)
-{
-    /* (M-profile) low-overhead loops and branch future */
-    return FIELD_EX32(id->id_isar0, ID_ISAR0, CMPBRANCH) >= 3;
-}
-
-static inline bool isar_feature_aa32_jazelle(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
-}
-
-static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
-}
-
-static inline bool isar_feature_aa32_pmull(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) > 1;
-}
-
-static inline bool isar_feature_aa32_sha1(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA1) != 0;
-}
-
-static inline bool isar_feature_aa32_sha2(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA2) != 0;
-}
-
-static inline bool isar_feature_aa32_crc32(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar5, ID_ISAR5, CRC32) != 0;
-}
-
-static inline bool isar_feature_aa32_rdm(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar5, ID_ISAR5, RDM) != 0;
-}
-
-static inline bool isar_feature_aa32_vcma(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar5, ID_ISAR5, VCMA) != 0;
-}
-
-static inline bool isar_feature_aa32_jscvt(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar6, ID_ISAR6, JSCVT) != 0;
-}
-
-static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
-}
-
-static inline bool isar_feature_aa32_fhm(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar6, ID_ISAR6, FHM) != 0;
-}
-
-static inline bool isar_feature_aa32_sb(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar6, ID_ISAR6, SB) != 0;
-}
-
-static inline bool isar_feature_aa32_predinv(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar6, ID_ISAR6, SPECRES) != 0;
-}
-
-static inline bool isar_feature_aa32_bf16(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar6, ID_ISAR6, BF16) != 0;
-}
-
-static inline bool isar_feature_aa32_i8mm(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_isar6, ID_ISAR6, I8MM) != 0;
-}
-
-static inline bool isar_feature_aa32_ras(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_pfr0, ID_PFR0, RAS) != 0;
-}
-
-static inline bool isar_feature_aa32_mprofile(const ARMISARegisters *id)
-{
-    return FIELD_EX32(id->id_pfr1, ID_PFR1, MPROGMOD) != 0;
-}
-
-static inline bool isar_feature_aa32_m_sec_state(const ARMISARegisters *id)
-{
-    /*
-     * Return true if M-profile state handling insns
-     * (VSCCLRM, CLRM, FPCTX access insns) are implemented
-     */
-    return FIELD_EX32(id->id_pfr1, ID_PFR1, SECURITY) >= 3;
-}
-
-static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
-{
-    /* Sadly this is encoded differently for A-profile and M-profile */
-    if (isar_feature_aa32_mprofile(id)) {
-        return FIELD_EX32(id->mvfr1, MVFR1, FP16) > 0;
-    } else {
-        return FIELD_EX32(id->mvfr1, MVFR1, FPHP) >= 3;
-    }
1213
-}
1214
-
1215
-static inline bool isar_feature_aa32_mve(const ARMISARegisters *id)
1216
-{
1217
- /*
1218
- * Return true if MVE is supported (either integer or floating point).
1219
- * We must check for M-profile as the MVFR1 field means something
1220
- * else for A-profile.
1221
- */
1222
- return isar_feature_aa32_mprofile(id) &&
1223
- FIELD_EX32(id->mvfr1, MVFR1, MVE) > 0;
1224
-}
1225
-
1226
-static inline bool isar_feature_aa32_mve_fp(const ARMISARegisters *id)
1227
-{
1228
- /*
1229
- * Return true if MVE is supported (either integer or floating point).
1230
- * We must check for M-profile as the MVFR1 field means something
1231
- * else for A-profile.
1232
- */
1233
- return isar_feature_aa32_mprofile(id) &&
1234
- FIELD_EX32(id->mvfr1, MVFR1, MVE) >= 2;
1235
-}
1236
-
1237
-static inline bool isar_feature_aa32_vfp_simd(const ARMISARegisters *id)
1238
-{
1239
- /*
1240
- * Return true if either VFP or SIMD is implemented.
1241
- * In this case, a minimum of VFP w/ D0-D15.
1242
- */
1243
- return FIELD_EX32(id->mvfr0, MVFR0, SIMDREG) > 0;
1244
-}
1245
-
1246
-static inline bool isar_feature_aa32_simd_r32(const ARMISARegisters *id)
1247
-{
1248
- /* Return true if D16-D31 are implemented */
1249
- return FIELD_EX32(id->mvfr0, MVFR0, SIMDREG) >= 2;
1250
-}
1251
-
1252
-static inline bool isar_feature_aa32_fpshvec(const ARMISARegisters *id)
1253
-{
1254
- return FIELD_EX32(id->mvfr0, MVFR0, FPSHVEC) > 0;
1255
-}
1256
-
1257
-static inline bool isar_feature_aa32_fpsp_v2(const ARMISARegisters *id)
1258
-{
1259
- /* Return true if CPU supports single precision floating point, VFPv2 */
1260
- return FIELD_EX32(id->mvfr0, MVFR0, FPSP) > 0;
1261
-}
1262
-
1263
-static inline bool isar_feature_aa32_fpsp_v3(const ARMISARegisters *id)
1264
-{
1265
- /* Return true if CPU supports single precision floating point, VFPv3 */
1266
- return FIELD_EX32(id->mvfr0, MVFR0, FPSP) >= 2;
1267
-}
1268
-
1269
-static inline bool isar_feature_aa32_fpdp_v2(const ARMISARegisters *id)
1270
-{
1271
- /* Return true if CPU supports double precision floating point, VFPv2 */
1272
- return FIELD_EX32(id->mvfr0, MVFR0, FPDP) > 0;
1273
-}
1274
-
1275
-static inline bool isar_feature_aa32_fpdp_v3(const ARMISARegisters *id)
1276
-{
1277
- /* Return true if CPU supports double precision floating point, VFPv3 */
1278
- return FIELD_EX32(id->mvfr0, MVFR0, FPDP) >= 2;
1279
-}
1280
-
1281
-static inline bool isar_feature_aa32_vfp(const ARMISARegisters *id)
1282
-{
1283
- return isar_feature_aa32_fpsp_v2(id) || isar_feature_aa32_fpdp_v2(id);
1284
-}
1285
-
1286
-/*
1287
- * We always set the FP and SIMD FP16 fields to indicate identical
1288
- * levels of support (assuming SIMD is implemented at all), so
1289
- * we only need one set of accessors.
1290
- */
1291
-static inline bool isar_feature_aa32_fp16_spconv(const ARMISARegisters *id)
1292
-{
1293
- return FIELD_EX32(id->mvfr1, MVFR1, FPHP) > 0;
1294
-}
1295
-
1296
-static inline bool isar_feature_aa32_fp16_dpconv(const ARMISARegisters *id)
1297
-{
1298
- return FIELD_EX32(id->mvfr1, MVFR1, FPHP) > 1;
1299
-}
1300
-
1301
-/*
1302
- * Note that this ID register field covers both VFP and Neon FMAC,
1303
- * so should usually be tested in combination with some other
1304
- * check that confirms the presence of whichever of VFP or Neon is
1305
- * relevant, to avoid accidentally enabling a Neon feature on
1306
- * a VFP-no-Neon core or vice-versa.
1307
- */
1308
-static inline bool isar_feature_aa32_simdfmac(const ARMISARegisters *id)
1309
-{
1310
- return FIELD_EX32(id->mvfr1, MVFR1, SIMDFMAC) != 0;
1311
-}
1312
-
1313
-static inline bool isar_feature_aa32_vsel(const ARMISARegisters *id)
1314
-{
1315
- return FIELD_EX32(id->mvfr2, MVFR2, FPMISC) >= 1;
1316
-}
1317
-
1318
-static inline bool isar_feature_aa32_vcvt_dr(const ARMISARegisters *id)
1319
-{
1320
- return FIELD_EX32(id->mvfr2, MVFR2, FPMISC) >= 2;
1321
-}
1322
-
1323
-static inline bool isar_feature_aa32_vrint(const ARMISARegisters *id)
1324
-{
1325
- return FIELD_EX32(id->mvfr2, MVFR2, FPMISC) >= 3;
1326
-}
1327
-
1328
-static inline bool isar_feature_aa32_vminmaxnm(const ARMISARegisters *id)
1329
-{
1330
- return FIELD_EX32(id->mvfr2, MVFR2, FPMISC) >= 4;
1331
-}
1332
-
1333
-static inline bool isar_feature_aa32_pxn(const ARMISARegisters *id)
1334
-{
1335
- return FIELD_EX32(id->id_mmfr0, ID_MMFR0, VMSA) >= 4;
1336
-}
1337
-
1338
-static inline bool isar_feature_aa32_pan(const ARMISARegisters *id)
1339
-{
1340
- return FIELD_EX32(id->id_mmfr3, ID_MMFR3, PAN) != 0;
1341
-}
1342
-
1343
-static inline bool isar_feature_aa32_ats1e1(const ARMISARegisters *id)
1344
-{
1345
- return FIELD_EX32(id->id_mmfr3, ID_MMFR3, PAN) >= 2;
1346
-}
1347
-
1348
-static inline bool isar_feature_aa32_pmuv3p1(const ARMISARegisters *id)
1349
-{
1350
- /* 0xf means "non-standard IMPDEF PMU" */
1351
- return FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) >= 4 &&
1352
- FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) != 0xf;
1353
-}
1354
-
1355
-static inline bool isar_feature_aa32_pmuv3p4(const ARMISARegisters *id)
1356
-{
1357
- /* 0xf means "non-standard IMPDEF PMU" */
1358
- return FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) >= 5 &&
1359
- FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) != 0xf;
1360
-}
1361
-
1362
-static inline bool isar_feature_aa32_pmuv3p5(const ARMISARegisters *id)
1363
-{
1364
- /* 0xf means "non-standard IMPDEF PMU" */
1365
- return FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) >= 6 &&
1366
- FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) != 0xf;
1367
-}
1368
-
1369
-static inline bool isar_feature_aa32_hpd(const ARMISARegisters *id)
1370
-{
1371
- return FIELD_EX32(id->id_mmfr4, ID_MMFR4, HPDS) != 0;
1372
-}
1373
-
1374
-static inline bool isar_feature_aa32_ac2(const ARMISARegisters *id)
1375
-{
1376
- return FIELD_EX32(id->id_mmfr4, ID_MMFR4, AC2) != 0;
1377
-}
1378
-
1379
-static inline bool isar_feature_aa32_ccidx(const ARMISARegisters *id)
1380
-{
1381
- return FIELD_EX32(id->id_mmfr4, ID_MMFR4, CCIDX) != 0;
1382
-}
1383
-
1384
-static inline bool isar_feature_aa32_tts2uxn(const ARMISARegisters *id)
1385
-{
1386
- return FIELD_EX32(id->id_mmfr4, ID_MMFR4, XNX) != 0;
1387
-}
1388
-
1389
-static inline bool isar_feature_aa32_half_evt(const ARMISARegisters *id)
1390
-{
1391
- return FIELD_EX32(id->id_mmfr4, ID_MMFR4, EVT) >= 1;
1392
-}
1393
-
1394
-static inline bool isar_feature_aa32_evt(const ARMISARegisters *id)
1395
-{
1396
- return FIELD_EX32(id->id_mmfr4, ID_MMFR4, EVT) >= 2;
1397
-}
1398
-
1399
-static inline bool isar_feature_aa32_dit(const ARMISARegisters *id)
1400
-{
1401
- return FIELD_EX32(id->id_pfr0, ID_PFR0, DIT) != 0;
1402
-}
1403
-
1404
-static inline bool isar_feature_aa32_ssbs(const ARMISARegisters *id)
1405
-{
1406
- return FIELD_EX32(id->id_pfr2, ID_PFR2, SSBS) != 0;
1407
-}
1408
-
1409
-static inline bool isar_feature_aa32_debugv7p1(const ARMISARegisters *id)
1410
-{
1411
- return FIELD_EX32(id->id_dfr0, ID_DFR0, COPDBG) >= 5;
1412
-}
1413
-
1414
-static inline bool isar_feature_aa32_debugv8p2(const ARMISARegisters *id)
1415
-{
1416
- return FIELD_EX32(id->id_dfr0, ID_DFR0, COPDBG) >= 8;
1417
-}
1418
-
1419
-static inline bool isar_feature_aa32_doublelock(const ARMISARegisters *id)
1420
-{
1421
- return FIELD_EX32(id->dbgdevid, DBGDEVID, DOUBLELOCK) > 0;
1422
-}
1423
-
1424
-/*
1425
- * 64-bit feature tests via id registers.
1426
- */
1427
-static inline bool isar_feature_aa64_aes(const ARMISARegisters *id)
1428
-{
1429
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) != 0;
1430
-}
1431
-
1432
-static inline bool isar_feature_aa64_pmull(const ARMISARegisters *id)
1433
-{
1434
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) > 1;
1435
-}
1436
-
1437
-static inline bool isar_feature_aa64_sha1(const ARMISARegisters *id)
1438
-{
1439
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA1) != 0;
1440
-}
1441
-
1442
-static inline bool isar_feature_aa64_sha256(const ARMISARegisters *id)
1443
-{
1444
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) != 0;
1445
-}
1446
-
1447
-static inline bool isar_feature_aa64_sha512(const ARMISARegisters *id)
1448
-{
1449
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) > 1;
1450
-}
1451
-
1452
-static inline bool isar_feature_aa64_crc32(const ARMISARegisters *id)
1453
-{
1454
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, CRC32) != 0;
1455
-}
1456
-
1457
-static inline bool isar_feature_aa64_atomics(const ARMISARegisters *id)
1458
-{
1459
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, ATOMIC) != 0;
1460
-}
1461
-
1462
-static inline bool isar_feature_aa64_rdm(const ARMISARegisters *id)
1463
-{
1464
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, RDM) != 0;
1465
-}
1466
-
1467
-static inline bool isar_feature_aa64_sha3(const ARMISARegisters *id)
1468
-{
1469
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA3) != 0;
1470
-}
1471
-
1472
-static inline bool isar_feature_aa64_sm3(const ARMISARegisters *id)
1473
-{
1474
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM3) != 0;
1475
-}
1476
-
1477
-static inline bool isar_feature_aa64_sm4(const ARMISARegisters *id)
1478
-{
1479
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM4) != 0;
1480
-}
1481
-
1482
-static inline bool isar_feature_aa64_dp(const ARMISARegisters *id)
1483
-{
1484
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, DP) != 0;
1485
-}
1486
-
1487
-static inline bool isar_feature_aa64_fhm(const ARMISARegisters *id)
1488
-{
1489
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, FHM) != 0;
1490
-}
1491
-
1492
-static inline bool isar_feature_aa64_condm_4(const ARMISARegisters *id)
1493
-{
1494
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TS) != 0;
1495
-}
1496
-
1497
-static inline bool isar_feature_aa64_condm_5(const ARMISARegisters *id)
1498
-{
1499
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TS) >= 2;
1500
-}
1501
-
1502
-static inline bool isar_feature_aa64_rndr(const ARMISARegisters *id)
1503
-{
1504
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, RNDR) != 0;
1505
-}
1506
-
1507
-static inline bool isar_feature_aa64_jscvt(const ARMISARegisters *id)
1508
-{
1509
- return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, JSCVT) != 0;
1510
-}
1511
-
1512
-static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
1513
-{
1514
- return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
1515
-}
1516
-
1517
-/*
1518
- * These are the values from APA/API/APA3.
1519
- * In general these must be compared '>=', per the normal Arm ARM
1520
- * treatment of fields in ID registers.
1521
- */
1522
-typedef enum {
1523
- PauthFeat_None = 0,
1524
- PauthFeat_1 = 1,
1525
- PauthFeat_EPAC = 2,
1526
- PauthFeat_2 = 3,
1527
- PauthFeat_FPAC = 4,
1528
- PauthFeat_FPACCOMBINED = 5,
1529
-} ARMPauthFeature;
1530
-
1531
-static inline ARMPauthFeature
1532
-isar_feature_pauth_feature(const ARMISARegisters *id)
1533
-{
1534
- /*
1535
- * Architecturally, only one of {APA,API,APA3} may be active (non-zero)
1536
- * and the other two must be zero. Thus we may avoid conditionals.
1537
- */
1538
- return (FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, APA) |
1539
- FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, API) |
1540
- FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, APA3));
1541
-}
1542
-
1543
-static inline bool isar_feature_aa64_pauth(const ARMISARegisters *id)
1544
-{
1545
- /*
1546
- * Return true if any form of pauth is enabled, as this
1547
- * predicate controls migration of the 128-bit keys.
1548
- */
1549
- return isar_feature_pauth_feature(id) != PauthFeat_None;
1550
-}
1551
-
1552
-static inline bool isar_feature_aa64_pauth_qarma5(const ARMISARegisters *id)
1553
-{
1554
- /*
1555
- * Return true if pauth is enabled with the architected QARMA5 algorithm.
1556
- * QEMU will always enable or disable both APA and GPA.
1557
- */
1558
- return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, APA) != 0;
1559
-}
1560
-
1561
-static inline bool isar_feature_aa64_pauth_qarma3(const ARMISARegisters *id)
1562
-{
1563
- /*
1564
- * Return true if pauth is enabled with the architected QARMA3 algorithm.
1565
- * QEMU will always enable or disable both APA3 and GPA3.
1566
- */
1567
- return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, APA3) != 0;
1568
-}
1569
-
1570
-static inline bool isar_feature_aa64_tlbirange(const ARMISARegisters *id)
1571
-{
1572
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) == 2;
1573
-}
1574
-
1575
-static inline bool isar_feature_aa64_tlbios(const ARMISARegisters *id)
1576
-{
1577
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) != 0;
1578
-}
1579
-
1580
-static inline bool isar_feature_aa64_sb(const ARMISARegisters *id)
1581
-{
1582
- return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, SB) != 0;
1583
-}
1584
-
1585
-static inline bool isar_feature_aa64_predinv(const ARMISARegisters *id)
1586
-{
1587
- return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, SPECRES) != 0;
1588
-}
1589
-
1590
-static inline bool isar_feature_aa64_frint(const ARMISARegisters *id)
1591
-{
1592
- return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FRINTTS) != 0;
1593
-}
1594
-
1595
-static inline bool isar_feature_aa64_dcpop(const ARMISARegisters *id)
1596
-{
1597
- return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, DPB) != 0;
1598
-}
1599
-
1600
-static inline bool isar_feature_aa64_dcpodp(const ARMISARegisters *id)
1601
-{
1602
- return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, DPB) >= 2;
1603
-}
1604
-
1605
-static inline bool isar_feature_aa64_bf16(const ARMISARegisters *id)
1606
-{
1607
- return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, BF16) != 0;
1608
-}
1609
-
1610
-static inline bool isar_feature_aa64_fp_simd(const ARMISARegisters *id)
1611
-{
1612
- /* We always set the AdvSIMD and FP fields identically. */
1613
- return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) != 0xf;
1614
-}
1615
-
1616
-static inline bool isar_feature_aa64_fp16(const ARMISARegisters *id)
1617
-{
1618
- /* We always set the AdvSIMD and FP fields identically wrt FP16. */
1619
- return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
1620
-}
1621
-
1622
-static inline bool isar_feature_aa64_aa32(const ARMISARegisters *id)
1623
-{
1624
- return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, EL0) >= 2;
1625
-}
1626
-
1627
-static inline bool isar_feature_aa64_aa32_el1(const ARMISARegisters *id)
1628
-{
1629
- return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, EL1) >= 2;
1630
-}
1631
-
1632
-static inline bool isar_feature_aa64_aa32_el2(const ARMISARegisters *id)
1633
-{
1634
- return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, EL2) >= 2;
1635
-}
1636
-
1637
-static inline bool isar_feature_aa64_ras(const ARMISARegisters *id)
1638
-{
1639
- return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RAS) != 0;
1640
-}
1641
-
1642
-static inline bool isar_feature_aa64_doublefault(const ARMISARegisters *id)
1643
-{
1644
- return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RAS) >= 2;
1645
-}
1646
-
1647
-static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
1648
-{
1649
- return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
1650
-}
1651
-
1652
-static inline bool isar_feature_aa64_sel2(const ARMISARegisters *id)
1653
-{
1654
- return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SEL2) != 0;
1655
-}
1656
-
1657
-static inline bool isar_feature_aa64_rme(const ARMISARegisters *id)
1658
-{
1659
- return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RME) != 0;
1660
-}
1661
-
1662
-static inline bool isar_feature_aa64_vh(const ARMISARegisters *id)
1663
-{
1664
- return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, VH) != 0;
1665
-}
1666
-
1667
-static inline bool isar_feature_aa64_lor(const ARMISARegisters *id)
1668
-{
1669
- return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, LO) != 0;
1670
-}
1671
-
1672
-static inline bool isar_feature_aa64_pan(const ARMISARegisters *id)
1673
-{
1674
- return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) != 0;
1675
-}
1676
-
1677
-static inline bool isar_feature_aa64_ats1e1(const ARMISARegisters *id)
1678
-{
1679
- return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) >= 2;
1680
-}
1681
-
1682
-static inline bool isar_feature_aa64_pan3(const ARMISARegisters *id)
1683
-{
1684
- return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) >= 3;
1685
-}
1686
-
1687
-static inline bool isar_feature_aa64_hcx(const ARMISARegisters *id)
1688
-{
1689
- return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HCX) != 0;
1690
-}
1691
-
1692
-static inline bool isar_feature_aa64_tidcp1(const ARMISARegisters *id)
1693
-{
1694
- return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, TIDCP1) != 0;
1695
-}
1696
-
1697
-static inline bool isar_feature_aa64_uao(const ARMISARegisters *id)
1698
-{
1699
- return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, UAO) != 0;
1700
-}
1701
-
1702
-static inline bool isar_feature_aa64_st(const ARMISARegisters *id)
1703
-{
1704
- return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, ST) != 0;
1705
-}
1706
-
1707
-static inline bool isar_feature_aa64_lse2(const ARMISARegisters *id)
1708
-{
1709
- return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, AT) != 0;
1710
-}
1711
-
1712
-static inline bool isar_feature_aa64_fwb(const ARMISARegisters *id)
1713
-{
1714
- return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, FWB) != 0;
1715
-}
1716
-
1717
-static inline bool isar_feature_aa64_ids(const ARMISARegisters *id)
1718
-{
1719
- return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, IDS) != 0;
1720
-}
1721
-
1722
-static inline bool isar_feature_aa64_half_evt(const ARMISARegisters *id)
1723
-{
1724
- return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, EVT) >= 1;
1725
-}
1726
-
1727
-static inline bool isar_feature_aa64_evt(const ARMISARegisters *id)
1728
-{
1729
- return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, EVT) >= 2;
1730
-}
1731
-
1732
-static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
1733
-{
1734
- return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
1735
-}
1736
-
1737
-static inline bool isar_feature_aa64_mte_insn_reg(const ARMISARegisters *id)
1738
-{
1739
- return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, MTE) != 0;
1740
-}
1741
-
1742
-static inline bool isar_feature_aa64_mte(const ARMISARegisters *id)
1743
-{
1744
- return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, MTE) >= 2;
1745
-}
1746
-
1747
-static inline bool isar_feature_aa64_sme(const ARMISARegisters *id)
1748
-{
1749
- return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SME) != 0;
1750
-}
1751
-
1752
-static inline bool isar_feature_aa64_pmuv3p1(const ARMISARegisters *id)
1753
-{
1754
- return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) >= 4 &&
1755
- FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
1756
-}
1757
-
1758
-static inline bool isar_feature_aa64_pmuv3p4(const ARMISARegisters *id)
1759
-{
1760
- return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) >= 5 &&
1761
- FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
1762
-}
1763
-
1764
-static inline bool isar_feature_aa64_pmuv3p5(const ARMISARegisters *id)
1765
-{
1766
- return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) >= 6 &&
1767
- FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
1768
-}
1769
-
1770
-static inline bool isar_feature_aa64_rcpc_8_3(const ARMISARegisters *id)
1771
-{
1772
- return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, LRCPC) != 0;
1773
-}
1774
-
1775
-static inline bool isar_feature_aa64_rcpc_8_4(const ARMISARegisters *id)
1776
-{
1777
- return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, LRCPC) >= 2;
1778
-}
1779
-
1780
-static inline bool isar_feature_aa64_i8mm(const ARMISARegisters *id)
1781
-{
1782
- return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, I8MM) != 0;
1783
-}
1784
-
1785
-static inline bool isar_feature_aa64_hbc(const ARMISARegisters *id)
1786
-{
1787
- return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, BC) != 0;
1788
-}
1789
-
1790
-static inline bool isar_feature_aa64_tgran4_lpa2(const ARMISARegisters *id)
1791
-{
1792
- return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 1;
1793
-}
1794
-
1795
-static inline bool isar_feature_aa64_tgran4_2_lpa2(const ARMISARegisters *id)
1796
-{
1797
- unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4_2);
1798
- return t >= 3 || (t == 0 && isar_feature_aa64_tgran4_lpa2(id));
1799
-}
1800
-
1801
-static inline bool isar_feature_aa64_tgran16_lpa2(const ARMISARegisters *id)
1802
-{
1803
- return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16) >= 2;
1804
-}
1805
-
1806
-static inline bool isar_feature_aa64_tgran16_2_lpa2(const ARMISARegisters *id)
1807
-{
1808
- unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16_2);
1809
- return t >= 3 || (t == 0 && isar_feature_aa64_tgran16_lpa2(id));
1810
-}
1811
-
1812
-static inline bool isar_feature_aa64_tgran4(const ARMISARegisters *id)
1813
-{
1814
- return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 0;
1815
-}
1816
-
1817
-static inline bool isar_feature_aa64_tgran16(const ARMISARegisters *id)
1818
-{
1819
- return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16) >= 1;
1820
-}
1821
-
1822
-static inline bool isar_feature_aa64_tgran64(const ARMISARegisters *id)
1823
-{
1824
- return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64) >= 0;
1825
-}
1826
-
1827
-static inline bool isar_feature_aa64_tgran4_2(const ARMISARegisters *id)
1828
-{
1829
- unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4_2);
1830
- return t >= 2 || (t == 0 && isar_feature_aa64_tgran4(id));
1831
-}
1832
-
1833
-static inline bool isar_feature_aa64_tgran16_2(const ARMISARegisters *id)
1834
-{
1835
- unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16_2);
1836
- return t >= 2 || (t == 0 && isar_feature_aa64_tgran16(id));
1837
-}
1838
-
1839
-static inline bool isar_feature_aa64_tgran64_2(const ARMISARegisters *id)
1840
-{
1841
- unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64_2);
1842
- return t >= 2 || (t == 0 && isar_feature_aa64_tgran64(id));
1843
-}
1844
-
1845
-static inline bool isar_feature_aa64_fgt(const ARMISARegisters *id)
1846
-{
1847
- return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, FGT) != 0;
1848
-}
1849
-
1850
-static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id)
1851
-{
1852
- return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0;
1853
-}
1854
-
1855
-static inline bool isar_feature_aa64_lva(const ARMISARegisters *id)
1856
-{
1857
- return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, VARANGE) != 0;
1858
-}
1859
-
1860
-static inline bool isar_feature_aa64_e0pd(const ARMISARegisters *id)
1861
-{
1862
- return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, E0PD) != 0;
1863
-}
1864
-
1865
-static inline bool isar_feature_aa64_hafs(const ARMISARegisters *id)
1866
-{
1867
- return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HAFDBS) != 0;
1868
-}
1869
-
1870
-static inline bool isar_feature_aa64_hdbs(const ARMISARegisters *id)
1871
-{
1872
- return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HAFDBS) >= 2;
1873
-}
1874
-
1875
-static inline bool isar_feature_aa64_tts2uxn(const ARMISARegisters *id)
1876
-{
1877
- return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, XNX) != 0;
1878
-}
1879
-
1880
-static inline bool isar_feature_aa64_dit(const ARMISARegisters *id)
1881
-{
1882
- return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, DIT) != 0;
1883
-}
1884
-
1885
-static inline bool isar_feature_aa64_scxtnum(const ARMISARegisters *id)
1886
-{
1887
- int key = FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, CSV2);
1888
- if (key >= 2) {
1889
- return true; /* FEAT_CSV2_2 */
1890
- }
1891
- if (key == 1) {
1892
- key = FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, CSV2_FRAC);
1893
- return key >= 2; /* FEAT_CSV2_1p2 */
1894
- }
1895
- return false;
1896
-}
1897
-
1898
-static inline bool isar_feature_aa64_ssbs(const ARMISARegisters *id)
1899
-{
1900
- return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SSBS) != 0;
1901
-}
1902
-
1903
-static inline bool isar_feature_aa64_debugv8p2(const ARMISARegisters *id)
1904
-{
1905
- return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, DEBUGVER) >= 8;
1906
-}
1907
-
1908
-static inline bool isar_feature_aa64_sve2(const ARMISARegisters *id)
1909
-{
1910
- return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SVEVER) != 0;
1911
-}
1912
-
1913
-static inline bool isar_feature_aa64_sve2_aes(const ARMISARegisters *id)
1914
-{
1915
- return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, AES) != 0;
1916
-}
1917
-
1918
-static inline bool isar_feature_aa64_sve2_pmull128(const ARMISARegisters *id)
1919
-{
1920
- return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, AES) >= 2;
1921
-}
1922
-
1923
-static inline bool isar_feature_aa64_sve2_bitperm(const ARMISARegisters *id)
1924
-{
1925
- return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BITPERM) != 0;
1926
-}
1927
-
1928
-static inline bool isar_feature_aa64_sve_bf16(const ARMISARegisters *id)
1929
-{
1930
- return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BFLOAT16) != 0;
1931
-}
1932
-
1933
-static inline bool isar_feature_aa64_sve2_sha3(const ARMISARegisters *id)
1934
-{
1935
- return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SHA3) != 0;
1936
-}
1937
-
1938
-static inline bool isar_feature_aa64_sve2_sm4(const ARMISARegisters *id)
1939
-{
1940
- return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SM4) != 0;
1941
-}
1942
-
1943
-static inline bool isar_feature_aa64_sve_i8mm(const ARMISARegisters *id)
1944
-{
1945
- return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, I8MM) != 0;
1946
-}
1947
-
1948
-static inline bool isar_feature_aa64_sve_f32mm(const ARMISARegisters *id)
1949
-{
1950
- return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, F32MM) != 0;
1951
-}
1952
-
1953
-static inline bool isar_feature_aa64_sve_f64mm(const ARMISARegisters *id)
1954
-{
1955
- return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, F64MM) != 0;
1956
-}
1957
-
1958
-static inline bool isar_feature_aa64_sme_f64f64(const ARMISARegisters *id)
1959
-{
1960
- return FIELD_EX64(id->id_aa64smfr0, ID_AA64SMFR0, F64F64);
1961
-}
1962
-
1963
-static inline bool isar_feature_aa64_sme_i16i64(const ARMISARegisters *id)
1964
-{
1965
- return FIELD_EX64(id->id_aa64smfr0, ID_AA64SMFR0, I16I64) == 0xf;
1966
-}
1967
-
1968
-static inline bool isar_feature_aa64_sme_fa64(const ARMISARegisters *id)
1969
-{
1970
- return FIELD_EX64(id->id_aa64smfr0, ID_AA64SMFR0, FA64);
1971
-}
1972
-
1973
-static inline bool isar_feature_aa64_doublelock(const ARMISARegisters *id)
1974
-{
1975
- return FIELD_SEX64(id->id_aa64dfr0, ID_AA64DFR0, DOUBLELOCK) >= 0;
1976
-}
1977
-
1978
-static inline bool isar_feature_aa64_mops(const ARMISARegisters *id)
1979
-{
1980
- return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, MOPS);
1981
-}
1982
-
1983
-/*
1984
- * Feature tests for "does this exist in either 32-bit or 64-bit?"
1985
- */
1986
-static inline bool isar_feature_any_fp16(const ARMISARegisters *id)
1987
-{
1988
- return isar_feature_aa64_fp16(id) || isar_feature_aa32_fp16_arith(id);
1989
-}
1990
-
1991
-static inline bool isar_feature_any_predinv(const ARMISARegisters *id)
1992
-{
1993
- return isar_feature_aa64_predinv(id) || isar_feature_aa32_predinv(id);
1994
-}
1995
-
1996
-static inline bool isar_feature_any_pmuv3p1(const ARMISARegisters *id)
1997
-{
1998
- return isar_feature_aa64_pmuv3p1(id) || isar_feature_aa32_pmuv3p1(id);
1999
-}
2000
-
2001
-static inline bool isar_feature_any_pmuv3p4(const ARMISARegisters *id)
2002
-{
2003
- return isar_feature_aa64_pmuv3p4(id) || isar_feature_aa32_pmuv3p4(id);
2004
-}
2005
-
2006
-static inline bool isar_feature_any_pmuv3p5(const ARMISARegisters *id)
2007
-{
2008
- return isar_feature_aa64_pmuv3p5(id) || isar_feature_aa32_pmuv3p5(id);
2009
-}
2010
-
2011
-static inline bool isar_feature_any_ccidx(const ARMISARegisters *id)
2012
-{
2013
- return isar_feature_aa64_ccidx(id) || isar_feature_aa32_ccidx(id);
2014
-}
2015
-
2016
-static inline bool isar_feature_any_tts2uxn(const ARMISARegisters *id)
2017
-{
2018
- return isar_feature_aa64_tts2uxn(id) || isar_feature_aa32_tts2uxn(id);
2019
-}
2020
-
2021
-static inline bool isar_feature_any_debugv8p2(const ARMISARegisters *id)
2022
-{
2023
- return isar_feature_aa64_debugv8p2(id) || isar_feature_aa32_debugv8p2(id);
2024
-}
2025
-
2026
-static inline bool isar_feature_any_ras(const ARMISARegisters *id)
2027
-{
2028
- return isar_feature_aa64_ras(id) || isar_feature_aa32_ras(id);
2029
-}
2030
-
2031
-static inline bool isar_feature_any_half_evt(const ARMISARegisters *id)
2032
-{
2033
- return isar_feature_aa64_half_evt(id) || isar_feature_aa32_half_evt(id);
2034
-}
2035
-
2036
-static inline bool isar_feature_any_evt(const ARMISARegisters *id)
2037
-{
2038
- return isar_feature_aa64_evt(id) || isar_feature_aa32_evt(id);
2039
-}
2040
-
2041
-/*
2042
- * Forward to the above feature tests given an ARMCPU pointer.
2043
- */
2044
-#define cpu_isar_feature(name, cpu) \
2045
- ({ ARMCPU *cpu_ = (cpu); isar_feature_##name(&cpu_->isar); })
2046
-
2047
#endif
2048
diff --git a/target/arm/internals.h b/target/arm/internals.h
2049
index XXXXXXX..XXXXXXX 100644
2050
--- a/target/arm/internals.h
2051
+++ b/target/arm/internals.h
2052
@@ -XXX,XX +XXX,XX @@
2053
#include "hw/registerfields.h"
2054
#include "tcg/tcg-gvec-desc.h"
2055
#include "syndrome.h"
2056
+#include "cpu-features.h"
2057
2058
/* register banks for CPU modes */
2059
#define BANK_USRSYS 0
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -XXX,XX +XXX,XX @@
#include "exec/translator.h"
#include "exec/helper-gen.h"
#include "internals.h"
-
+#include "cpu-features.h"

/* internal defines */

diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
bool unpriv;
/* True if v8.3-PAuth is active. */
bool pauth_active;
- /* True if v8.5-MTE access to tags is enabled. */
- bool ata;
+ /* True if v8.5-MTE access to tags is enabled; index with is_unpriv. */
+ bool ata[2];
/* True if v8.5-MTE tag checks affect the PE; index with is_unpriv. */
bool mte_active[2];
/* True with v8.5-BTI and SCTLR_ELx.BT* set. */
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/armv7m.c
+++ b/hw/arm/armv7m.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/module.h"
#include "qemu/log.h"
#include "target/arm/idau.h"
+#include "target/arm/cpu-features.h"
#include "migration/vmstate.h"

/* Bitbanded IO. Each word corresponds to a single bit. */
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@
#include "sysemu/tcg.h"
#include "sysemu/runstate.h"
#include "target/arm/cpu.h"
+#include "target/arm/cpu-features.h"
#include "exec/exec-all.h"
#include "exec/memop.h"
#include "qemu/log.h"
diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/aarch64/cpu_loop.c
+++ b/linux-user/aarch64/cpu_loop.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/guest-random.h"
#include "semihosting/common-semi.h"
#include "target/arm/syndrome.h"
+#include "target/arm/cpu-features.h"

#define get_user_code_u32(x, gaddr, env) \
({ abi_long __r = get_user_u32((x), (gaddr)); \
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/aarch64/signal.c
+++ b/linux-user/aarch64/signal.c
@@ -XXX,XX +XXX,XX @@
#include "user-internals.h"
#include "signal-common.h"
#include "linux-user/trace.h"
+#include "target/arm/cpu-features.h"

struct target_sigcontext {
uint64_t fault_address;
diff --git a/linux-user/arm/signal.c b/linux-user/arm/signal.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/arm/signal.c
+++ b/linux-user/arm/signal.c
@@ -XXX,XX +XXX,XX @@
#include "user-internals.h"
#include "signal-common.h"
#include "linux-user/trace.h"
+#include "target/arm/cpu-features.h"

struct target_sigcontext {
abi_ulong trap_no;
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@
#include "target_signal.h"
#include "accel/tcg/debuginfo.h"

+#ifdef TARGET_ARM
+#include "target/arm/cpu-features.h"
+#endif
+
#ifdef _ARCH_PPC64
#undef ARCH_DLINFO
#undef ELF_PLATFORM
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -XXX,XX +XXX,XX @@
#include "target_mman.h"
#include "qemu/interval-tree.h"

+#ifdef TARGET_ARM
+#include "target/arm/cpu-features.h"
+#endif
+
static pthread_mutex_t mmap_mutex = PTHREAD_MUTEX_INITIALIZER;
static __thread int mmap_lock_count;

diff --git a/target/arm/arch_dump.c b/target/arm/arch_dump.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/arch_dump.c
+++ b/target/arm/arch_dump.c
@@ -XXX,XX +XXX,XX @@
#include "cpu.h"
#include "elf.h"
#include "sysemu/dump.h"
+#include "cpu-features.h"

/* struct user_pt_regs from arch/arm64/include/uapi/asm/ptrace.h */
struct aarch64_user_regs {
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@
#include "hw/core/tcg-cpu-ops.h"
#endif /* CONFIG_TCG */
#include "internals.h"
+#include "cpu-features.h"
#include "exec/exec-all.h"
#include "hw/qdev-properties.h"
#if !defined(CONFIG_USER_ONLY)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@
#include "qapi/visitor.h"
#include "hw/qdev-properties.h"
#include "internals.h"
+#include "cpu-features.h"
#include "cpregs.h"

void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/debug_helper.c
+++ b/target/arm/debug_helper.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/log.h"
#include "cpu.h"
#include "internals.h"
+#include "cpu-features.h"
#include "cpregs.h"
#include "exec/exec-all.h"
#include "exec/helper-proto.h"
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/gdbstub.c
+++ b/target/arm/gdbstub.c
@@ -XXX,XX +XXX,XX @@
#include "gdbstub/helpers.h"
#include "sysemu/tcg.h"
#include "internals.h"
+#include "cpu-features.h"
#include "cpregs.h"

typedef struct RegisterSysregXmlParam {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@
#include "trace.h"
#include "cpu.h"
#include "internals.h"
+#include "cpu-features.h"
#include "exec/helper-proto.h"
#include "qemu/main-loop.h"
#include "qemu/timer.h"
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@
#include "sysemu/kvm_int.h"
#include "kvm_arm.h"
#include "internals.h"
+#include "cpu-features.h"
#include "hw/acpi/acpi.h"
#include "hw/acpi/ghes.h"

diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@
#include "sysemu/tcg.h"
#include "kvm_arm.h"
#include "internals.h"
+#include "cpu-features.h"
#include "migration/cpu.h"

static bool vfp_needed(void *opaque)
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@
#include "exec/exec-all.h"
#include "cpu.h"
#include "internals.h"
+#include "cpu-features.h"
#include "idau.h"
#ifdef CONFIG_TCG
# include "tcg/oversized-guest.h"
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/cpu64.c
+++ b/target/arm/tcg/cpu64.c
@@ -XXX,XX +XXX,XX @@
#include "hw/qdev-properties.h"
#include "qemu/units.h"
#include "internals.h"
+#include "cpu-features.h"
#include "cpregs.h"

static uint64_t make_ccsidr64(unsigned assoc, unsigned linesize,
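As background for the isar_feature helpers this series is consolidating into cpu-features.h: they all follow one shape, extract a 4-bit ID-register field and compare it against a minimum value. A minimal self-contained sketch of that shape follows; `extract_field`/`feature_at_least` and the field positions are hypothetical names for illustration, not QEMU's real `FIELD_EX64()` machinery.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for FIELD_EX64(): pull len bits out of reg at shift. */
static inline uint64_t extract_field(uint64_t reg, unsigned shift, unsigned len)
{
    return (reg >> shift) & ((1ULL << len) - 1);
}

/*
 * "Field >= minimum" test, the same shape as e.g.
 * isar_feature_aa64_hdbs(): FIELD_EX64(..., HAFDBS) >= 2.
 * AArch64 ID register fields are 4 bits wide.
 */
static inline bool feature_at_least(uint64_t idreg, unsigned shift, uint64_t min)
{
    return extract_field(idreg, shift, 4) >= min;
}
```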
diff --git a/target/arm/tcg/hflags.c b/target/arm/tcg/hflags.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/hflags.c
+++ b/target/arm/tcg/hflags.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/osdep.h"
#include "cpu.h"
#include "internals.h"
+#include "cpu-features.h"
#include "exec/helper-proto.h"
#include "cpregs.h"

diff --git a/target/arm/tcg/m_helper.c b/target/arm/tcg/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/m_helper.c
+++ b/target/arm/tcg/m_helper.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/osdep.h"
#include "cpu.h"
#include "internals.h"
+#include "cpu-features.h"
#include "gdbstub/helpers.h"
#include "exec/helper-proto.h"
#include "qemu/main-loop.h"
diff --git a/target/arm/tcg/op_helper.c b/target/arm/tcg/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/op_helper.c
+++ b/target/arm/tcg/op_helper.c
@@ -XXX,XX +XXX,XX @@
#include "cpu.h"
#include "exec/helper-proto.h"
#include "internals.h"
+#include "cpu-features.h"
#include "exec/exec-all.h"
#include "exec/cpu_ldst.h"
#include "cpregs.h"
diff --git a/target/arm/tcg/pauth_helper.c b/target/arm/tcg/pauth_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/pauth_helper.c
+++ b/target/arm/tcg/pauth_helper.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/osdep.h"
#include "cpu.h"
#include "internals.h"
+#include "cpu-features.h"
#include "exec/exec-all.h"
#include "exec/cpu_ldst.h"
#include "exec/helper-proto.h"
diff --git a/target/arm/tcg/tlb_helper.c b/target/arm/tcg/tlb_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/tlb_helper.c
+++ b/target/arm/tcg/tlb_helper.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/osdep.h"
#include "cpu.h"
#include "internals.h"
+#include "cpu-features.h"
#include "exec/exec-all.h"
#include "exec/helper-proto.h"

diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vfp_helper.c
+++ b/target/arm/vfp_helper.c
@@ -XXX,XX +XXX,XX @@
#include "cpu.h"
#include "exec/helper-proto.h"
#include "internals.h"
+#include "cpu-features.h"
#ifdef CONFIG_TCG
#include "qemu/log.h"
#include "fpu/softfloat.h"
--
2.34.1

diff --git a/target/arm/tcg/hflags.c b/target/arm/tcg/hflags.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/hflags.c
+++ b/target/arm/tcg/hflags.c
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
&& allocation_tag_access_enabled(env, 0, sctlr)) {
DP_TBFLAG_A64(flags, MTE0_ACTIVE, 1);
}
+ /*
+ * For unpriv tag-setting accesses we also need ATA0. Again, in
+ * contexts where unpriv and normal insns are the same we
+ * duplicate the ATA bit to save effort for translate-a64.c.
+ */
+ if (EX_TBFLAG_A64(flags, UNPRIV)) {
+ if (allocation_tag_access_enabled(env, 0, sctlr)) {
+ DP_TBFLAG_A64(flags, ATA0, 1);
+ }
+ } else {
+ DP_TBFLAG_A64(flags, ATA0, EX_TBFLAG_A64(flags, ATA));
+ }
/* Cache TCMA as well as TBI. */
DP_TBFLAG_A64(flags, TCMA, aa64_va_parameter_tcma(tcr, mmu_idx));
}
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, bool isread,
clean_addr = clean_data_tbi(s, tcg_rt);
gen_probe_access(s, clean_addr, MMU_DATA_STORE, MO_8);

- if (s->ata) {
+ if (s->ata[0]) {
/* Extract the tag from the register to match STZGM. */
tag = tcg_temp_new_i64();
tcg_gen_shri_i64(tag, tcg_rt, 56);
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, bool isread,
clean_addr = clean_data_tbi(s, tcg_rt);
gen_helper_dc_zva(cpu_env, clean_addr);

- if (s->ata) {
+ if (s->ata[0]) {
/* Extract the tag from the register to match STZGM. */
tag = tcg_temp_new_i64();
tcg_gen_shri_i64(tag, tcg_rt, 56);
@@ -XXX,XX +XXX,XX @@ static bool trans_STGP(DisasContext *s, arg_ldstpair *a)
tcg_gen_qemu_st_i128(tmp, clean_addr, get_mem_index(s), mop);

/* Perform the tag store, if tag access enabled. */
- if (s->ata) {
+ if (s->ata[0]) {
if (tb_cflags(s->base.tb) & CF_PARALLEL) {
gen_helper_stg_parallel(cpu_env, dirty_addr, dirty_addr);
} else {
@@ -XXX,XX +XXX,XX @@ static bool trans_STZGM(DisasContext *s, arg_ldst_tag *a)
tcg_gen_addi_i64(addr, addr, a->imm);
tcg_rt = cpu_reg(s, a->rt);

- if (s->ata) {
+ if (s->ata[0]) {
gen_helper_stzgm_tags(cpu_env, addr, tcg_rt);
}
/*
@@ -XXX,XX +XXX,XX @@ static bool trans_STGM(DisasContext *s, arg_ldst_tag *a)
tcg_gen_addi_i64(addr, addr, a->imm);
tcg_rt = cpu_reg(s, a->rt);

- if (s->ata) {
+ if (s->ata[0]) {
gen_helper_stgm(cpu_env, addr, tcg_rt);
} else {
MMUAccessType acc = MMU_DATA_STORE;
@@ -XXX,XX +XXX,XX @@ static bool trans_LDGM(DisasContext *s, arg_ldst_tag *a)
tcg_gen_addi_i64(addr, addr, a->imm);
tcg_rt = cpu_reg(s, a->rt);

- if (s->ata) {
+ if (s->ata[0]) {
gen_helper_ldgm(tcg_rt, cpu_env, addr);
} else {
MMUAccessType acc = MMU_DATA_LOAD;
@@ -XXX,XX +XXX,XX @@ static bool trans_LDG(DisasContext *s, arg_ldst_tag *a)

tcg_gen_andi_i64(addr, addr, -TAG_GRANULE);
tcg_rt = cpu_reg(s, a->rt);
- if (s->ata) {
+ if (s->ata[0]) {
gen_helper_ldg(tcg_rt, cpu_env, addr, tcg_rt);
} else {
/*
@@ -XXX,XX +XXX,XX @@ static bool do_STG(DisasContext *s, arg_ldst_tag *a, bool is_zero, bool is_pair)
tcg_gen_addi_i64(addr, addr, a->imm);
}
tcg_rt = cpu_reg_sp(s, a->rt);
- if (!s->ata) {
+ if (!s->ata[0]) {
/*
* For STG and ST2G, we need to check alignment and probe memory.
* TODO: For STZG and STZ2G, we could rely on the stores below,
@@ -XXX,XX +XXX,XX @@ static bool gen_add_sub_imm_with_tags(DisasContext *s, arg_rri_tag *a,
tcg_rn = cpu_reg_sp(s, a->rn);
tcg_rd = cpu_reg_sp(s, a->rd);

- if (s->ata) {
+ if (s->ata[0]) {
gen_helper_addsubg(tcg_rd, cpu_env, tcg_rn,
tcg_constant_i32(imm),
tcg_constant_i32(a->uimm4));
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
if (sf == 0 || !dc_isar_feature(aa64_mte_insn_reg, s)) {
goto do_unallocated;
}
- if (s->ata) {
+ if (s->ata[0]) {
gen_helper_irg(cpu_reg_sp(s, rd), cpu_env,
cpu_reg_sp(s, rn), cpu_reg(s, rm));
} else {
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
dc->bt = EX_TBFLAG_A64(tb_flags, BT);
dc->btype = EX_TBFLAG_A64(tb_flags, BTYPE);
dc->unpriv = EX_TBFLAG_A64(tb_flags, UNPRIV);
- dc->ata = EX_TBFLAG_A64(tb_flags, ATA);
+ dc->ata[0] = EX_TBFLAG_A64(tb_flags, ATA);
+ dc->ata[1] = EX_TBFLAG_A64(tb_flags, ATA0);
dc->mte_active[0] = EX_TBFLAG_A64(tb_flags, MTE_ACTIVE);
dc->mte_active[1] = EX_TBFLAG_A64(tb_flags, MTE0_ACTIVE);
dc->pstate_sm = EX_TBFLAG_A64(tb_flags, PSTATE_SM);
--
2.34.1
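The `bool ata[2]` change above follows an existing DisasContext convention (already used by `mte_active[2]`): keep the "normal" and "unprivileged-access" variants of a flag in a two-element array indexed by an is_unpriv boolean, so translation code can index rather than branch. A minimal sketch of the idea, with a hypothetical `Ctx` type standing in for DisasContext:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustration only, not QEMU's DisasContext: element [0] describes a
 * normal access, element [1] an unprivileged (LDTR/STTR-style) access,
 * so callers can write s->ata[is_unpriv].
 */
typedef struct {
    bool ata[2];
} Ctx;

static bool tag_access_enabled(const Ctx *s, bool is_unpriv)
{
    /* bool converts to 0 or 1, so it is a safe array index */
    return s->ata[is_unpriv];
}
```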
1
The FEAT_MOPS CPY* instructions implement memory copies. These
1
Our list of isar_feature functions is not in any particular order,
2
come in both "always forwards" (memcpy-style) and "overlap OK"
2
but tests on fields of the same ID register tend to be grouped
3
(memmove-style) flavours.
3
together. A few functions that are tests of fields in ID_AA64MMFR1
4
and ID_AA64MMFR2 are not in the same place as the rest; move them
5
into their groups.
4
6
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20230912140434.1333369-12-peter.maydell@linaro.org
10
Message-id: 20231024163510.2972081-3-peter.maydell@linaro.org
8
---
11
---
9
target/arm/tcg/helper-a64.h | 7 +
12
target/arm/cpu-features.h | 60 +++++++++++++++++++--------------------
10
target/arm/tcg/a64.decode | 14 +
13
1 file changed, 30 insertions(+), 30 deletions(-)
11
target/arm/tcg/helper-a64.c | 454 +++++++++++++++++++++++++++++++++
12
target/arm/tcg/translate-a64.c | 60 +++++
13
4 files changed, 535 insertions(+)
14
14
15
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
15
diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/tcg/helper-a64.h
17
--- a/target/arm/cpu-features.h
18
+++ b/target/arm/tcg/helper-a64.h
18
+++ b/target/arm/cpu-features.h
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(sete, void, env, i32, i32)
19
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_tidcp1(const ARMISARegisters *id)
20
DEF_HELPER_3(setgp, void, env, i32, i32)
20
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR1, TIDCP1) != 0;
21
DEF_HELPER_3(setgm, void, env, i32, i32)
22
DEF_HELPER_3(setge, void, env, i32, i32)
23
+
24
+DEF_HELPER_4(cpyp, void, env, i32, i32, i32)
25
+DEF_HELPER_4(cpym, void, env, i32, i32, i32)
26
+DEF_HELPER_4(cpye, void, env, i32, i32, i32)
27
+DEF_HELPER_4(cpyfp, void, env, i32, i32, i32)
28
+DEF_HELPER_4(cpyfm, void, env, i32, i32, i32)
29
+DEF_HELPER_4(cpyfe, void, env, i32, i32, i32)
30
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/tcg/a64.decode
33
+++ b/target/arm/tcg/a64.decode
34
@@ -XXX,XX +XXX,XX @@ SETE 00 011001110 ..... 10 . . 01 ..... ..... @set
35
SETGP 00 011101110 ..... 00 . . 01 ..... ..... @set
36
SETGM 00 011101110 ..... 01 . . 01 ..... ..... @set
37
SETGE 00 011101110 ..... 10 . . 01 ..... ..... @set
38
+
39
+# Memmove/Memcopy: the CPY insns allow overlapping src/dest and
40
+# copy in the correct direction; the CPYF insns always copy forwards.
41
+#
42
+# options has the nontemporal and unpriv bits for src and dest
43
+&cpy rs rn rd options
44
+@cpy .. ... . ..... rs:5 options:4 .. rn:5 rd:5 &cpy
45
+
46
+CPYFP 00 011 0 01000 ..... .... 01 ..... ..... @cpy
47
+CPYFM 00 011 0 01010 ..... .... 01 ..... ..... @cpy
48
+CPYFE 00 011 0 01100 ..... .... 01 ..... ..... @cpy
49
+CPYP 00 011 1 01000 ..... .... 01 ..... ..... @cpy
50
+CPYM 00 011 1 01010 ..... .... 01 ..... ..... @cpy
51
+CPYE 00 011 1 01100 ..... .... 01 ..... ..... @cpy
52
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/tcg/helper-a64.c
55
+++ b/target/arm/tcg/helper-a64.c
56
@@ -XXX,XX +XXX,XX @@ static uint64_t page_limit(uint64_t addr)
57
return TARGET_PAGE_ALIGN(addr + 1) - addr;
58
}
21
}
59
22
60
+/*
23
+static inline bool isar_feature_aa64_hafs(const ARMISARegisters *id)
61
+ * Return the number of bytes we can copy starting from addr and working
62
+ * backwards without crossing a page boundary.
63
+ */
64
+static uint64_t page_limit_rev(uint64_t addr)
65
+{
24
+{
66
+ return (addr & ~TARGET_PAGE_MASK) + 1;
25
+ return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HAFDBS) != 0;
67
+}
26
+}
68
+
27
+
69
/*
28
+static inline bool isar_feature_aa64_hdbs(const ARMISARegisters *id)
70
* Perform part of a memory set on an area of guest memory starting at
71
* toaddr (a dirty address) and extending for setsize bytes.
72
@@ -XXX,XX +XXX,XX @@ void HELPER(setge)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
73
{
74
do_sete(env, syndrome, mtedesc, set_step_tags, true, GETPC());
75
}
76
+
77
+/*
78
+ * Perform part of a memory copy from the guest memory at fromaddr
79
+ * and extending for copysize bytes, to the guest memory at
80
+ * toaddr. Both addreses are dirty.
81
+ *
82
+ * Returns the number of bytes actually set, which might be less than
83
+ * copysize; the caller should loop until the whole copy has been done.
84
+ * The caller should ensure that the guest registers are correct
85
+ * for the possibility that the first byte of the copy encounters
86
+ * an exception or watchpoint. We guarantee not to take any faults
87
+ * for bytes other than the first.
88
+ */
89
+static uint64_t copy_step(CPUARMState *env, uint64_t toaddr, uint64_t fromaddr,
90
+ uint64_t copysize, int wmemidx, int rmemidx,
91
+ uint32_t *wdesc, uint32_t *rdesc, uintptr_t ra)
92
+{
29
+{
93
+ void *rmem;
30
+ return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HAFDBS) >= 2;
94
+ void *wmem;
95
+
96
+ /* Don't cross a page boundary on either source or destination */
97
+ copysize = MIN(copysize, page_limit(toaddr));
98
+ copysize = MIN(copysize, page_limit(fromaddr));
99
+ /*
100
+ * Handle MTE tag checks: either handle the tag mismatch for byte 0,
101
+ * or else copy up to but not including the byte with the mismatch.
102
+ */
103
+ if (*rdesc) {
104
+ uint64_t mtesize = mte_mops_probe(env, fromaddr, copysize, *rdesc);
105
+ if (mtesize == 0) {
106
+ mte_check_fail(env, *rdesc, fromaddr, ra);
107
+ *rdesc = 0;
108
+ } else {
109
+ copysize = MIN(copysize, mtesize);
110
+ }
111
+ }
112
+ if (*wdesc) {
113
+ uint64_t mtesize = mte_mops_probe(env, toaddr, copysize, *wdesc);
114
+ if (mtesize == 0) {
115
+ mte_check_fail(env, *wdesc, toaddr, ra);
116
+ *wdesc = 0;
117
+ } else {
118
+ copysize = MIN(copysize, mtesize);
119
+ }
120
+ }
121
+
122
+ toaddr = useronly_clean_ptr(toaddr);
123
+ fromaddr = useronly_clean_ptr(fromaddr);
124
+ /* Trapless lookup of whether we can get a host memory pointer */
125
+ wmem = tlb_vaddr_to_host(env, toaddr, MMU_DATA_STORE, wmemidx);
126
+ rmem = tlb_vaddr_to_host(env, fromaddr, MMU_DATA_LOAD, rmemidx);
127
+
128
+#ifndef CONFIG_USER_ONLY
129
+ /*
130
+ * If we don't have host memory for both source and dest then just
131
+ * do a single byte copy. This will handle watchpoints, invalid pages,
132
+ * etc correctly. For clean code pages, the next iteration will see
133
+ * the page dirty and will use the fast path.
134
+ */
135
+ if (unlikely(!rmem || !wmem)) {
136
+ uint8_t byte;
137
+ if (rmem) {
138
+ byte = *(uint8_t *)rmem;
139
+ } else {
140
+ byte = cpu_ldub_mmuidx_ra(env, fromaddr, rmemidx, ra);
141
+ }
142
+ if (wmem) {
143
+ *(uint8_t *)wmem = byte;
144
+ } else {
145
+ cpu_stb_mmuidx_ra(env, toaddr, byte, wmemidx, ra);
146
+ }
147
+ return 1;
148
+ }
149
+#endif
150
+ /* Easy case: just memmove the host memory */
151
+ memmove(wmem, rmem, copysize);
152
+ return copysize;
153
+}
31
+}
154
+
32
+
155
+/*
33
+static inline bool isar_feature_aa64_tts2uxn(const ARMISARegisters *id)
156
+ * Do part of a backwards memory copy. Here toaddr and fromaddr point
157
+ * to the *last* byte to be copied.
158
+ */
159
+static uint64_t copy_step_rev(CPUARMState *env, uint64_t toaddr,
160
+ uint64_t fromaddr,
161
+ uint64_t copysize, int wmemidx, int rmemidx,
162
+ uint32_t *wdesc, uint32_t *rdesc, uintptr_t ra)
163
+{
34
+{
164
+ void *rmem;
35
+ return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, XNX) != 0;
165
+ void *wmem;
166
+
167
+ /* Don't cross a page boundary on either source or destination */
168
+ copysize = MIN(copysize, page_limit_rev(toaddr));
169
+ copysize = MIN(copysize, page_limit_rev(fromaddr));
170
+
171
+ /*
172
+ * Handle MTE tag checks: either handle the tag mismatch for byte 0,
173
+ * or else copy up to but not including the byte with the mismatch.
174
+ */
175
+ if (*rdesc) {
176
+ uint64_t mtesize = mte_mops_probe_rev(env, fromaddr, copysize, *rdesc);
177
+ if (mtesize == 0) {
178
+ mte_check_fail(env, *rdesc, fromaddr, ra);
179
+ *rdesc = 0;
180
+ } else {
181
+ copysize = MIN(copysize, mtesize);
182
+ }
183
+ }
184
+ if (*wdesc) {
185
+ uint64_t mtesize = mte_mops_probe_rev(env, toaddr, copysize, *wdesc);
186
+ if (mtesize == 0) {
187
+ mte_check_fail(env, *wdesc, toaddr, ra);
188
+ *wdesc = 0;
189
+ } else {
190
+ copysize = MIN(copysize, mtesize);
191
+ }
192
+ }
193
+
194
+ toaddr = useronly_clean_ptr(toaddr);
195
+ fromaddr = useronly_clean_ptr(fromaddr);
196
+ /* Trapless lookup of whether we can get a host memory pointer */
197
+ wmem = tlb_vaddr_to_host(env, toaddr, MMU_DATA_STORE, wmemidx);
198
+ rmem = tlb_vaddr_to_host(env, fromaddr, MMU_DATA_LOAD, rmemidx);
199
+
200
+#ifndef CONFIG_USER_ONLY
201
+ /*
202
+ * If we don't have host memory for both source and dest then just
203
+ * do a single byte copy. This will handle watchpoints, invalid pages,
204
+ * etc correctly. For clean code pages, the next iteration will see
205
+ * the page dirty and will use the fast path.
206
+ */
207
+ if (unlikely(!rmem || !wmem)) {
208
+ uint8_t byte;
209
+ if (rmem) {
210
+ byte = *(uint8_t *)rmem;
211
+ } else {
212
+ byte = cpu_ldub_mmuidx_ra(env, fromaddr, rmemidx, ra);
213
+ }
214
+ if (wmem) {
215
+ *(uint8_t *)wmem = byte;
216
+ } else {
217
+ cpu_stb_mmuidx_ra(env, toaddr, byte, wmemidx, ra);
218
+ }
219
+ return 1;
220
+ }
221
+#endif
222
+ /*
223
+ * Easy case: just memmove the host memory. Note that wmem and
224
+ * rmem here point to the *last* byte to copy.
225
+ */
226
+ memmove(wmem - (copysize - 1), rmem - (copysize - 1), copysize);
227
+ return copysize;
228
+}
36
+}
229
+
37
+
230
+/*
38
static inline bool isar_feature_aa64_uao(const ARMISARegisters *id)
231
+ * for the Memory Copy operation, our implementation chooses always
39
{
232
+ * to use "option A", where we update Xd and Xs to the final addresses
40
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, UAO) != 0;
233
+ * in the CPYP insn, and then in CPYM and CPYE only need to update Xn.
41
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_evt(const ARMISARegisters *id)
234
+ *
42
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, EVT) >= 2;
235
+ * @env: CPU
43
}
236
+ * @syndrome: syndrome value for mismatch exceptions
44
237
+ * (also contains the register numbers we need to use)
45
+static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id)
238
+ * @wdesc: MTE descriptor for the writes (destination)
239
+ * @rdesc: MTE descriptor for the reads (source)
240
+ * @move: true if this is CPY (memmove), false for CPYF (memcpy forwards)
241
+ */
242
+static void do_cpyp(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
243
+ uint32_t rdesc, uint32_t move, uintptr_t ra)
244
+{
46
+{
245
+ int rd = mops_destreg(syndrome);
47
+ return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0;
246
+ int rs = mops_srcreg(syndrome);
247
+ int rn = mops_sizereg(syndrome);
248
+ uint32_t rmemidx = FIELD_EX32(rdesc, MTEDESC, MIDX);
249
+ uint32_t wmemidx = FIELD_EX32(wdesc, MTEDESC, MIDX);
250
+ bool forwards = true;
251
+ uint64_t toaddr = env->xregs[rd];
252
+ uint64_t fromaddr = env->xregs[rs];
253
+ uint64_t copysize = env->xregs[rn];
254
+ uint64_t stagecopysize, step;
255
+
256
+ check_mops_enabled(env, ra);
257
+
258
+
259
+ if (move) {
260
+ /*
261
+ * Copy backwards if necessary. The direction for a non-overlapping
262
+ * copy is IMPDEF; we choose forwards.
263
+ */
264
+ if (copysize > 0x007FFFFFFFFFFFFFULL) {
265
+ copysize = 0x007FFFFFFFFFFFFFULL;
266
+ }
267
+ uint64_t fs = extract64(fromaddr, 0, 56);
268
+ uint64_t ts = extract64(toaddr, 0, 56);
269
+ uint64_t fe = extract64(fromaddr + copysize, 0, 56);
270
+
271
+ if (fs < ts && fe > ts) {
272
+ forwards = false;
273
+ }
274
+ } else {
275
+ if (copysize > INT64_MAX) {
276
+ copysize = INT64_MAX;
277
+ }
278
+ }
279
+
280
+ if (!mte_checks_needed(fromaddr, rdesc)) {
281
+ rdesc = 0;
282
+ }
283
+ if (!mte_checks_needed(toaddr, wdesc)) {
284
+ wdesc = 0;
285
+ }
286
+
287
+ if (forwards) {
288
+ stagecopysize = MIN(copysize, page_limit(toaddr));
289
+ stagecopysize = MIN(stagecopysize, page_limit(fromaddr));
290
+ while (stagecopysize) {
291
+ env->xregs[rd] = toaddr;
292
+ env->xregs[rs] = fromaddr;
293
+ env->xregs[rn] = copysize;
294
+ step = copy_step(env, toaddr, fromaddr, stagecopysize,
295
+ wmemidx, rmemidx, &wdesc, &rdesc, ra);
296
+ toaddr += step;
297
+ fromaddr += step;
298
+ copysize -= step;
299
+ stagecopysize -= step;
300
+ }
301
+ /* Insn completed, so update registers to the Option A format */
302
+ env->xregs[rd] = toaddr + copysize;
303
+ env->xregs[rs] = fromaddr + copysize;
304
+ env->xregs[rn] = -copysize;
305
+ } else {
306
+ /*
307
+ * In a reverse copy the to and from addrs in Xs and Xd are the start
308
+ * of the range, but it's more convenient for us to work with pointers
309
+ * to the last byte being copied.
310
+ */
311
+ toaddr += copysize - 1;
312
+ fromaddr += copysize - 1;
313
+ stagecopysize = MIN(copysize, page_limit_rev(toaddr));
314
+ stagecopysize = MIN(stagecopysize, page_limit_rev(fromaddr));
315
+ while (stagecopysize) {
316
+ env->xregs[rn] = copysize;
317
+ step = copy_step_rev(env, toaddr, fromaddr, stagecopysize,
318
+ wmemidx, rmemidx, &wdesc, &rdesc, ra);
319
+ copysize -= step;
320
+ stagecopysize -= step;
321
+ toaddr -= step;
322
+ fromaddr -= step;
323
+ }
324
+ /*
325
+ * Insn completed, so update registers to the Option A format.
326
+ * For a reverse copy this is no different to the CPYP input format.
327
+ */
328
+ env->xregs[rn] = copysize;
329
+ }
330
+
331
+ /* Set NZCV = 0000 to indicate we are an Option A implementation */
332
+ env->NF = 0;
333
+ env->ZF = 1; /* our env->ZF encoding is inverted */
334
+ env->CF = 0;
335
+ env->VF = 0;
336
+ return;
337
+}
48
+}
338
+
49
+
339
+void HELPER(cpyp)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
50
+static inline bool isar_feature_aa64_lva(const ARMISARegisters *id)
340
+ uint32_t rdesc)
341
+{
51
+{
342
+ do_cpyp(env, syndrome, wdesc, rdesc, true, GETPC());
52
+ return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, VARANGE) != 0;
343
+}
53
+}
344
+
54
+
345
+void HELPER(cpyfp)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
55
+static inline bool isar_feature_aa64_e0pd(const ARMISARegisters *id)
346
+ uint32_t rdesc)
347
+{
56
+{
348
+ do_cpyp(env, syndrome, wdesc, rdesc, false, GETPC());
57
+ return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, E0PD) != 0;
349
+}
58
+}
350
+
59
+
351
+static void do_cpym(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
60
static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
352
+ uint32_t rdesc, uint32_t move, uintptr_t ra)
61
{
353
+{
62
return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
354
+ /* Main: we choose to copy until less than a page remaining */
63
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fgt(const ARMISARegisters *id)
355
+ CPUState *cs = env_cpu(env);
64
return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, FGT) != 0;
356
+ int rd = mops_destreg(syndrome);
65
}
357
+ int rs = mops_srcreg(syndrome);
66
358
+ int rn = mops_sizereg(syndrome);
67
-static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id)
359
+ uint32_t rmemidx = FIELD_EX32(rdesc, MTEDESC, MIDX);
68
-{
360
+ uint32_t wmemidx = FIELD_EX32(wdesc, MTEDESC, MIDX);
69
- return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0;
361
+ bool forwards = true;
70
-}
362
+ uint64_t toaddr, fromaddr, copysize, step;
71
-
363
+
72
-static inline bool isar_feature_aa64_lva(const ARMISARegisters *id)
364
+ check_mops_enabled(env, ra);
73
-{
365
+
74
- return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, VARANGE) != 0;
366
+ /* We choose to NOP out "no data to copy" before consistency checks */
75
-}
367
+ if (env->xregs[rn] == 0) {
76
-
368
+ return;
77
-static inline bool isar_feature_aa64_e0pd(const ARMISARegisters *id)
369
+ }
78
-{
370
+
79
- return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, E0PD) != 0;
371
+ check_mops_wrong_option(env, syndrome, ra);
80
-}
372
+
81
-
373
+ if (move) {
82
-static inline bool isar_feature_aa64_hafs(const ARMISARegisters *id)
374
+ forwards = (int64_t)env->xregs[rn] < 0;
83
-{
375
+ }
84
- return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HAFDBS) != 0;
376
+
85
-}
377
+ if (forwards) {
86
-
378
+ toaddr = env->xregs[rd] + env->xregs[rn];
87
-static inline bool isar_feature_aa64_hdbs(const ARMISARegisters *id)
379
+ fromaddr = env->xregs[rs] + env->xregs[rn];
88
-{
380
+ copysize = -env->xregs[rn];
89
- return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HAFDBS) >= 2;
381
+ } else {
90
-}
382
+ copysize = env->xregs[rn];
91
-
383
+ /* This toaddr and fromaddr point to the *last* byte to copy */
92
-static inline bool isar_feature_aa64_tts2uxn(const ARMISARegisters *id)
384
+ toaddr = env->xregs[rd] + copysize - 1;
93
-{
385
+ fromaddr = env->xregs[rs] + copysize - 1;
94
- return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, XNX) != 0;
386
+ }
95
-}
387
+
96
-
388
+ if (!mte_checks_needed(fromaddr, rdesc)) {
97
static inline bool isar_feature_aa64_dit(const ARMISARegisters *id)
389
+ rdesc = 0;
98
{
390
+ }
99
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, DIT) != 0;
391
+ if (!mte_checks_needed(toaddr, wdesc)) {
392
+ wdesc = 0;
393
+ }
+
+    /* Our implementation has no particular parameter requirements for CPYM */
+
+    /* Do the actual memmove */
+    if (forwards) {
+        while (copysize >= TARGET_PAGE_SIZE) {
+            step = copy_step(env, toaddr, fromaddr, copysize,
+                             wmemidx, rmemidx, &wdesc, &rdesc, ra);
+            toaddr += step;
+            fromaddr += step;
+            copysize -= step;
+            env->xregs[rn] = -copysize;
+            if (copysize >= TARGET_PAGE_SIZE &&
+                unlikely(cpu_loop_exit_requested(cs))) {
+                cpu_loop_exit_restore(cs, ra);
+            }
+        }
+    } else {
+        while (copysize >= TARGET_PAGE_SIZE) {
+            step = copy_step_rev(env, toaddr, fromaddr, copysize,
+                                 wmemidx, rmemidx, &wdesc, &rdesc, ra);
+            toaddr -= step;
+            fromaddr -= step;
+            copysize -= step;
+            env->xregs[rn] = copysize;
+            if (copysize >= TARGET_PAGE_SIZE &&
+                unlikely(cpu_loop_exit_requested(cs))) {
+                cpu_loop_exit_restore(cs, ra);
+            }
+        }
+    }
+}
+
+void HELPER(cpym)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
+                  uint32_t rdesc)
+{
+    do_cpym(env, syndrome, wdesc, rdesc, true, GETPC());
+}
+
+void HELPER(cpyfm)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
+                   uint32_t rdesc)
+{
+    do_cpym(env, syndrome, wdesc, rdesc, false, GETPC());
+}
+
+static void do_cpye(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
+                    uint32_t rdesc, uint32_t move, uintptr_t ra)
+{
+    /* Epilogue: do the last partial page */
+    int rd = mops_destreg(syndrome);
+    int rs = mops_srcreg(syndrome);
+    int rn = mops_sizereg(syndrome);
+    uint32_t rmemidx = FIELD_EX32(rdesc, MTEDESC, MIDX);
+    uint32_t wmemidx = FIELD_EX32(wdesc, MTEDESC, MIDX);
+    bool forwards = true;
+    uint64_t toaddr, fromaddr, copysize, step;
+
+    check_mops_enabled(env, ra);
+
+    /* We choose to NOP out "no data to copy" before consistency checks */
+    if (env->xregs[rn] == 0) {
+        return;
+    }
+
+    check_mops_wrong_option(env, syndrome, ra);
+
+    if (move) {
+        forwards = (int64_t)env->xregs[rn] < 0;
+    }
+
+    if (forwards) {
+        toaddr = env->xregs[rd] + env->xregs[rn];
+        fromaddr = env->xregs[rs] + env->xregs[rn];
+        copysize = -env->xregs[rn];
+    } else {
+        copysize = env->xregs[rn];
+        /* This toaddr and fromaddr point to the *last* byte to copy */
+        toaddr = env->xregs[rd] + copysize - 1;
+        fromaddr = env->xregs[rs] + copysize - 1;
+    }
+
+    if (!mte_checks_needed(fromaddr, rdesc)) {
+        rdesc = 0;
+    }
+    if (!mte_checks_needed(toaddr, wdesc)) {
+        wdesc = 0;
+    }
+
+    /* Check the size; we don't want to have to do a check-for-interrupts */
+    if (copysize >= TARGET_PAGE_SIZE) {
+        raise_exception_ra(env, EXCP_UDEF, syndrome,
+                           mops_mismatch_exception_target_el(env), ra);
+    }
+
+    /* Do the actual memmove */
+    if (forwards) {
+        while (copysize > 0) {
+            step = copy_step(env, toaddr, fromaddr, copysize,
+                             wmemidx, rmemidx, &wdesc, &rdesc, ra);
+            toaddr += step;
+            fromaddr += step;
+            copysize -= step;
+            env->xregs[rn] = -copysize;
+        }
+    } else {
+        while (copysize > 0) {
+            step = copy_step_rev(env, toaddr, fromaddr, copysize,
+                                 wmemidx, rmemidx, &wdesc, &rdesc, ra);
+            toaddr -= step;
+            fromaddr -= step;
+            copysize -= step;
+            env->xregs[rn] = copysize;
+        }
+    }
+}
+
+void HELPER(cpye)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
+                  uint32_t rdesc)
+{
+    do_cpye(env, syndrome, wdesc, rdesc, true, GETPC());
+}
+
+void HELPER(cpyfe)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc,
+                   uint32_t rdesc)
+{
+    do_cpye(env, syndrome, wdesc, rdesc, false, GETPC());
+}
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(SETGP, aa64_mops, do_SET, a, false, true, gen_helper_setgp)
 TRANS_FEAT(SETGM, aa64_mops, do_SET, a, false, true, gen_helper_setgm)
 TRANS_FEAT(SETGE, aa64_mops, do_SET, a, true, true, gen_helper_setge)
 
+typedef void CpyFn(TCGv_env, TCGv_i32, TCGv_i32, TCGv_i32);
+
+static bool do_CPY(DisasContext *s, arg_cpy *a, bool is_epilogue, CpyFn fn)
+{
+    int rmemidx, wmemidx;
+    uint32_t syndrome, rdesc = 0, wdesc = 0;
+    bool wunpriv = extract32(a->options, 0, 1);
+    bool runpriv = extract32(a->options, 1, 1);
+
+    /*
+     * UNPREDICTABLE cases: we choose to UNDEF, which allows
+     * us to pull this check before the CheckMOPSEnabled() test
+     * (which we do in the helper function)
+     */
+    if (a->rs == a->rn || a->rs == a->rd || a->rn == a->rd ||
+        a->rd == 31 || a->rs == 31 || a->rn == 31) {
+        return false;
+    }
+
+    rmemidx = get_a64_user_mem_index(s, runpriv);
+    wmemidx = get_a64_user_mem_index(s, wunpriv);
+
+    /*
+     * We pass option_a == true, matching our implementation;
+     * we pass wrong_option == false: the helper function may set that bit.
+     */
+    syndrome = syn_mop(false, false, a->options, is_epilogue,
+                       false, true, a->rd, a->rs, a->rn);
+
+    /* If we need to do MTE tag checking, assemble the descriptors */
+    if (s->mte_active[runpriv]) {
+        rdesc = FIELD_DP32(rdesc, MTEDESC, TBI, s->tbid);
+        rdesc = FIELD_DP32(rdesc, MTEDESC, TCMA, s->tcma);
+    }
+    if (s->mte_active[wunpriv]) {
+        wdesc = FIELD_DP32(wdesc, MTEDESC, TBI, s->tbid);
+        wdesc = FIELD_DP32(wdesc, MTEDESC, TCMA, s->tcma);
+        wdesc = FIELD_DP32(wdesc, MTEDESC, WRITE, true);
+    }
+    /* The helper function needs these parts of the descriptor regardless */
+    rdesc = FIELD_DP32(rdesc, MTEDESC, MIDX, rmemidx);
+    wdesc = FIELD_DP32(wdesc, MTEDESC, MIDX, wmemidx);
+
+    /*
+     * The helper needs the register numbers, but since they're in
+     * the syndrome anyway, we let it extract them from there rather
+     * than passing in an extra three integer arguments.
+     */
+    fn(cpu_env, tcg_constant_i32(syndrome), tcg_constant_i32(wdesc),
+       tcg_constant_i32(rdesc));
+    return true;
+}
+
+TRANS_FEAT(CPYP, aa64_mops, do_CPY, a, false, gen_helper_cpyp)
+TRANS_FEAT(CPYM, aa64_mops, do_CPY, a, false, gen_helper_cpym)
+TRANS_FEAT(CPYE, aa64_mops, do_CPY, a, true, gen_helper_cpye)
+TRANS_FEAT(CPYFP, aa64_mops, do_CPY, a, false, gen_helper_cpyfp)
+TRANS_FEAT(CPYFM, aa64_mops, do_CPY, a, false, gen_helper_cpyfm)
+TRANS_FEAT(CPYFE, aa64_mops, do_CPY, a, true, gen_helper_cpyfe)
+
 typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64);
 
 static bool gen_rri(DisasContext *s, arg_rri_sf *a,
--
2.34.1
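As an aside for readers following the FEAT_MOPS patches above: the "option A" register convention the CPY prologue/main/epilogue helpers share is easy to get wrong. The following is a hypothetical standalone sketch in plain C, not QEMU code (the `CopyState` struct and `cpy_addresses` function are invented names): after the prologue, Xn holds -(bytes remaining) for a forwards copy, while a backwards copy keeps Xn positive and the computed addresses point at the *last* byte to copy, mirroring the address arithmetic in do_cpym/do_cpye.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical illustration of the FEAT_MOPS "option A" register state
 * for the memmove insns; not QEMU code.  xd/xs/xn stand for the Xd
 * (dest), Xs (source) and Xn (size) general-purpose register values.
 */
typedef struct {
    uint64_t toaddr, fromaddr, copysize;
    int forwards;
} CopyState;

static CopyState cpy_addresses(uint64_t xd, uint64_t xs, uint64_t xn)
{
    CopyState st;
    /* A negative Xn (as a signed value) signals a forwards copy */
    st.forwards = (int64_t)xn < 0;
    if (st.forwards) {
        /* Xd/Xs are one-past-the-end; adding negative Xn backs up */
        st.toaddr = xd + xn;
        st.fromaddr = xs + xn;
        st.copysize = -xn;
    } else {
        st.copysize = xn;
        /* Backwards copy: point at the *last* byte to copy */
        st.toaddr = xd + st.copysize - 1;
        st.fromaddr = xs + st.copysize - 1;
    }
    return st;
}
```

A forwards copy of 0x100 bytes that has made no progress yet would have Xd/Xs one past the end of the buffers and Xn == -0x100, so the sketch recovers the buffer start addresses.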
Implement the SET* instructions which collectively implement a
"memset" operation. These come in a set of three, eg SETP
(prologue), SETM (main), SETE (epilogue), and each of those has
different flavours to indicate whether memory accesses should be
unpriv or non-temporal.

This commit does not include the "memset with tag setting"
SETG* instructions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-8-peter.maydell@linaro.org
---
 target/arm/tcg/helper-a64.h    |   4 +
 target/arm/tcg/a64.decode      |  16 ++
 target/arm/tcg/helper-a64.c    | 344 +++++++++++++++++++++++++++++++++
 target/arm/tcg/translate-a64.c |  49 +++++
 4 files changed, 413 insertions(+)

Move the ID_AA64MMFR0 feature test functions up so they are
before the ones for ID_AA64MMFR1 and ID_AA64MMFR2.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231024163510.2972081-4-peter.maydell@linaro.org
---
 target/arm/cpu-features.h | 120 +++++++++++++++++++-------------------
 1 file changed, 60 insertions(+), 60 deletions(-)
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
12
diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
21
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/tcg/helper-a64.h
14
--- a/target/arm/cpu-features.h
23
+++ b/target/arm/tcg/helper-a64.h
15
+++ b/target/arm/cpu-features.h
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(stzgm_tags, TCG_CALL_NO_WG, void, env, i64, i64)
16
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_rme(const ARMISARegisters *id)
25
17
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RME) != 0;
26
DEF_HELPER_FLAGS_4(unaligned_access, TCG_CALL_NO_WG,
27
noreturn, env, i64, i32, i32)
28
+
29
+DEF_HELPER_3(setp, void, env, i32, i32)
30
+DEF_HELPER_3(setm, void, env, i32, i32)
31
+DEF_HELPER_3(sete, void, env, i32, i32)
32
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/tcg/a64.decode
35
+++ b/target/arm/tcg/a64.decode
36
@@ -XXX,XX +XXX,XX @@ LDGM 11011001 11 1 ......... 00 ..... ..... @ldst_tag_mult p=0 w=0
37
STZ2G 11011001 11 1 ......... 01 ..... ..... @ldst_tag p=1 w=1
38
STZ2G 11011001 11 1 ......... 10 ..... ..... @ldst_tag p=0 w=0
39
STZ2G 11011001 11 1 ......... 11 ..... ..... @ldst_tag p=0 w=1
40
+
41
+# Memory operations (memset, memcpy, memmove)
42
+# Each of these comes in a set of three, eg SETP (prologue), SETM (main),
43
+# SETE (epilogue), and each of those has different flavours to
44
+# indicate whether memory accesses should be unpriv or non-temporal.
45
+# We don't distinguish temporal and non-temporal accesses, but we
46
+# do need to report it in syndrome register values.
47
+
48
+# Memset
49
+&set rs rn rd unpriv nontemp
50
+# op2 bit 1 is nontemporal bit
51
+@set .. ......... rs:5 .. nontemp:1 unpriv:1 .. rn:5 rd:5 &set
52
+
53
+SETP 00 011001110 ..... 00 . . 01 ..... ..... @set
54
+SETM 00 011001110 ..... 01 . . 01 ..... ..... @set
55
+SETE 00 011001110 ..... 10 . . 01 ..... ..... @set
56
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/target/arm/tcg/helper-a64.c
59
+++ b/target/arm/tcg/helper-a64.c
60
@@ -XXX,XX +XXX,XX @@ void HELPER(unaligned_access)(CPUARMState *env, uint64_t addr,
61
arm_cpu_do_unaligned_access(env_cpu(env), addr, access_type,
62
mmu_idx, GETPC());
63
}
18
}
64
+
19
65
+/* Memory operations (memset, memmove, memcpy) */
20
+static inline bool isar_feature_aa64_tgran4_lpa2(const ARMISARegisters *id)
66
+
67
+/*
68
+ * Return true if the CPY* and SET* insns can execute; compare
69
+ * pseudocode CheckMOPSEnabled(), though we refactor it a little.
70
+ */
71
+static bool mops_enabled(CPUARMState *env)
72
+{
21
+{
73
+ int el = arm_current_el(env);
22
+ return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 1;
74
+
75
+ if (el < 2 &&
76
+ (arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE) &&
77
+ !(arm_hcrx_el2_eff(env) & HCRX_MSCEN)) {
78
+ return false;
79
+ }
80
+
81
+ if (el == 0) {
82
+ if (!el_is_in_host(env, 0)) {
83
+ return env->cp15.sctlr_el[1] & SCTLR_MSCEN;
84
+ } else {
85
+ return env->cp15.sctlr_el[2] & SCTLR_MSCEN;
86
+ }
87
+ }
88
+ return true;
89
+}
23
+}
90
+
24
+
91
+static void check_mops_enabled(CPUARMState *env, uintptr_t ra)
25
+static inline bool isar_feature_aa64_tgran4_2_lpa2(const ARMISARegisters *id)
92
+{
26
+{
93
+ if (!mops_enabled(env)) {
27
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4_2);
94
+ raise_exception_ra(env, EXCP_UDEF, syn_uncategorized(),
28
+ return t >= 3 || (t == 0 && isar_feature_aa64_tgran4_lpa2(id));
95
+ exception_target_el(env), ra);
96
+ }
97
+}
29
+}
98
+
30
+
99
+/*
31
+static inline bool isar_feature_aa64_tgran16_lpa2(const ARMISARegisters *id)
100
+ * Return the target exception level for an exception due
101
+ * to mismatched arguments in a FEAT_MOPS copy or set.
102
+ * Compare pseudocode MismatchedCpySetTargetEL()
103
+ */
104
+static int mops_mismatch_exception_target_el(CPUARMState *env)
105
+{
32
+{
106
+ int el = arm_current_el(env);
33
+ return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16) >= 2;
107
+
108
+ if (el > 1) {
109
+ return el;
110
+ }
111
+ if (el == 0 && (arm_hcr_el2_eff(env) & HCR_TGE)) {
112
+ return 2;
113
+ }
114
+ if (el == 1 && (arm_hcrx_el2_eff(env) & HCRX_MCE2)) {
115
+ return 2;
116
+ }
117
+ return 1;
118
+}
34
+}
119
+
35
+
120
+/*
36
+static inline bool isar_feature_aa64_tgran16_2_lpa2(const ARMISARegisters *id)
121
+ * Check whether an M or E instruction was executed with a CF value
122
+ * indicating the wrong option for this implementation.
123
+ * Assumes we are always Option A.
124
+ */
125
+static void check_mops_wrong_option(CPUARMState *env, uint32_t syndrome,
126
+ uintptr_t ra)
127
+{
37
+{
128
+ if (env->CF != 0) {
38
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16_2);
129
+ syndrome |= 1 << 17; /* Set the wrong-option bit */
39
+ return t >= 3 || (t == 0 && isar_feature_aa64_tgran16_lpa2(id));
130
+ raise_exception_ra(env, EXCP_UDEF, syndrome,
131
+ mops_mismatch_exception_target_el(env), ra);
132
+ }
133
+}
40
+}
134
+
41
+
135
+/*
42
+static inline bool isar_feature_aa64_tgran4(const ARMISARegisters *id)
136
+ * Return the maximum number of bytes we can transfer starting at addr
137
+ * without crossing a page boundary.
138
+ */
139
+static uint64_t page_limit(uint64_t addr)
140
+{
43
+{
141
+ return TARGET_PAGE_ALIGN(addr + 1) - addr;
44
+ return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 0;
142
+}
45
+}
143
+
46
+
144
+/*
47
+static inline bool isar_feature_aa64_tgran16(const ARMISARegisters *id)
145
+ * Perform part of a memory set on an area of guest memory starting at
146
+ * toaddr (a dirty address) and extending for setsize bytes.
147
+ *
148
+ * Returns the number of bytes actually set, which might be less than
149
+ * setsize; the caller should loop until the whole set has been done.
150
+ * The caller should ensure that the guest registers are correct
151
+ * for the possibility that the first byte of the set encounters
152
+ * an exception or watchpoint. We guarantee not to take any faults
153
+ * for bytes other than the first.
154
+ */
155
+static uint64_t set_step(CPUARMState *env, uint64_t toaddr,
156
+ uint64_t setsize, uint32_t data, int memidx,
157
+ uint32_t *mtedesc, uintptr_t ra)
158
+{
48
+{
159
+ void *mem;
49
+ return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16) >= 1;
160
+
161
+ setsize = MIN(setsize, page_limit(toaddr));
162
+ if (*mtedesc) {
163
+ uint64_t mtesize = mte_mops_probe(env, toaddr, setsize, *mtedesc);
164
+ if (mtesize == 0) {
165
+ /* Trap, or not. All CPU state is up to date */
166
+ mte_check_fail(env, *mtedesc, toaddr, ra);
167
+ /* Continue, with no further MTE checks required */
168
+ *mtedesc = 0;
169
+ } else {
170
+ /* Advance to the end, or to the tag mismatch */
171
+ setsize = MIN(setsize, mtesize);
172
+ }
173
+ }
174
+
175
+ toaddr = useronly_clean_ptr(toaddr);
176
+ /*
177
+ * Trapless lookup: returns NULL for invalid page, I/O,
178
+ * watchpoints, clean pages, etc.
179
+ */
180
+ mem = tlb_vaddr_to_host(env, toaddr, MMU_DATA_STORE, memidx);
181
+
182
+#ifndef CONFIG_USER_ONLY
183
+ if (unlikely(!mem)) {
184
+ /*
185
+ * Slow-path: just do one byte write. This will handle the
186
+ * watchpoint, invalid page, etc handling correctly.
187
+ * For clean code pages, the next iteration will see
188
+ * the page dirty and will use the fast path.
189
+ */
190
+ cpu_stb_mmuidx_ra(env, toaddr, data, memidx, ra);
191
+ return 1;
192
+ }
193
+#endif
194
+ /* Easy case: just memset the host memory */
195
+ memset(mem, data, setsize);
196
+ return setsize;
197
+}
50
+}
198
+
51
+
199
+typedef uint64_t StepFn(CPUARMState *env, uint64_t toaddr,
52
+static inline bool isar_feature_aa64_tgran64(const ARMISARegisters *id)
200
+ uint64_t setsize, uint32_t data,
201
+ int memidx, uint32_t *mtedesc, uintptr_t ra);
202
+
203
+/* Extract register numbers from a MOPS exception syndrome value */
204
+static int mops_destreg(uint32_t syndrome)
205
+{
53
+{
206
+ return extract32(syndrome, 10, 5);
54
+ return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64) >= 0;
207
+}
55
+}
208
+
56
+
209
+static int mops_srcreg(uint32_t syndrome)
57
+static inline bool isar_feature_aa64_tgran4_2(const ARMISARegisters *id)
210
+{
58
+{
211
+ return extract32(syndrome, 5, 5);
59
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4_2);
60
+ return t >= 2 || (t == 0 && isar_feature_aa64_tgran4(id));
212
+}
61
+}
213
+
62
+
214
+static int mops_sizereg(uint32_t syndrome)
63
+static inline bool isar_feature_aa64_tgran16_2(const ARMISARegisters *id)
215
+{
64
+{
216
+ return extract32(syndrome, 0, 5);
65
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16_2);
66
+ return t >= 2 || (t == 0 && isar_feature_aa64_tgran16(id));
217
+}
67
+}
218
+
68
+
219
+/*
69
+static inline bool isar_feature_aa64_tgran64_2(const ARMISARegisters *id)
220
+ * Return true if TCMA and TBI bits mean we need to do MTE checks.
221
+ * We only need to do this once per MOPS insn, not for every page.
222
+ */
223
+static bool mte_checks_needed(uint64_t ptr, uint32_t desc)
224
+{
70
+{
225
+ int bit55 = extract64(ptr, 55, 1);
71
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64_2);
226
+
72
+ return t >= 2 || (t == 0 && isar_feature_aa64_tgran64(id));
227
+ /*
228
+ * Note that tbi_check() returns true for "access checked" but
229
+ * tcma_check() returns true for "access unchecked".
230
+ */
231
+ if (!tbi_check(desc, bit55)) {
232
+ return false;
233
+ }
234
+ return !tcma_check(desc, bit55, allocation_tag_from_addr(ptr));
235
+}
73
+}
236
+
74
+
237
+/*
75
+static inline bool isar_feature_aa64_fgt(const ARMISARegisters *id)
238
+ * For the Memory Set operation, our implementation chooses
239
+ * always to use "option A", where we update Xd to the final
240
+ * address in the SETP insn, and set Xn to be -(bytes remaining).
241
+ * On SETM and SETE insns we only need update Xn.
242
+ *
243
+ * @env: CPU
244
+ * @syndrome: syndrome value for mismatch exceptions
245
+ * (also contains the register numbers we need to use)
246
+ * @mtedesc: MTE descriptor word
247
+ * @stepfn: function which does a single part of the set operation
248
+ * @is_setg: true if this is the tag-setting SETG variant
249
+ */
250
+static void do_setp(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
251
+ StepFn *stepfn, bool is_setg, uintptr_t ra)
252
+{
76
+{
253
+ /* Prologue: we choose to do up to the next page boundary */
77
+ return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, FGT) != 0;
254
+ int rd = mops_destreg(syndrome);
255
+ int rs = mops_srcreg(syndrome);
256
+ int rn = mops_sizereg(syndrome);
257
+ uint8_t data = env->xregs[rs];
258
+ uint32_t memidx = FIELD_EX32(mtedesc, MTEDESC, MIDX);
259
+ uint64_t toaddr = env->xregs[rd];
260
+ uint64_t setsize = env->xregs[rn];
261
+ uint64_t stagesetsize, step;
262
+
263
+ check_mops_enabled(env, ra);
264
+
265
+ if (setsize > INT64_MAX) {
266
+ setsize = INT64_MAX;
267
+ }
268
+
269
+ if (!mte_checks_needed(toaddr, mtedesc)) {
270
+ mtedesc = 0;
271
+ }
272
+
273
+ stagesetsize = MIN(setsize, page_limit(toaddr));
274
+ while (stagesetsize) {
275
+ env->xregs[rd] = toaddr;
276
+ env->xregs[rn] = setsize;
277
+ step = stepfn(env, toaddr, stagesetsize, data, memidx, &mtedesc, ra);
278
+ toaddr += step;
279
+ setsize -= step;
280
+ stagesetsize -= step;
281
+ }
282
+ /* Insn completed, so update registers to the Option A format */
283
+ env->xregs[rd] = toaddr + setsize;
284
+ env->xregs[rn] = -setsize;
285
+
286
+ /* Set NZCV = 0000 to indicate we are an Option A implementation */
287
+ env->NF = 0;
288
+ env->ZF = 1; /* our env->ZF encoding is inverted */
289
+ env->CF = 0;
290
+ env->VF = 0;
291
+ return;
292
+}
78
+}
293
+
79
+
294
+void HELPER(setp)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
80
static inline bool isar_feature_aa64_vh(const ARMISARegisters *id)
295
+{
81
{
296
+ do_setp(env, syndrome, mtedesc, set_step, false, GETPC());
82
return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, VH) != 0;
297
+}
83
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_hbc(const ARMISARegisters *id)
298
+
84
return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, BC) != 0;
299
+static void do_setm(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
85
}
300
+ StepFn *stepfn, bool is_setg, uintptr_t ra)
86
301
+{
87
-static inline bool isar_feature_aa64_tgran4_lpa2(const ARMISARegisters *id)
302
+ /* Main: we choose to do all the full-page chunks */
88
-{
303
+ CPUState *cs = env_cpu(env);
89
- return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 1;
304
+ int rd = mops_destreg(syndrome);
90
-}
305
+ int rs = mops_srcreg(syndrome);
91
-
306
+ int rn = mops_sizereg(syndrome);
92
-static inline bool isar_feature_aa64_tgran4_2_lpa2(const ARMISARegisters *id)
307
+ uint8_t data = env->xregs[rs];
93
-{
308
+ uint64_t toaddr = env->xregs[rd] + env->xregs[rn];
94
- unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4_2);
309
+ uint64_t setsize = -env->xregs[rn];
95
- return t >= 3 || (t == 0 && isar_feature_aa64_tgran4_lpa2(id));
310
+ uint32_t memidx = FIELD_EX32(mtedesc, MTEDESC, MIDX);
96
-}
311
+ uint64_t step, stagesetsize;
97
-
312
+
98
-static inline bool isar_feature_aa64_tgran16_lpa2(const ARMISARegisters *id)
313
+ check_mops_enabled(env, ra);
99
-{
314
+
100
- return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16) >= 2;
315
+ /*
101
-}
316
+ * We're allowed to NOP out "no data to copy" before the consistency
102
-
317
+ * checks; we choose to do so.
103
-static inline bool isar_feature_aa64_tgran16_2_lpa2(const ARMISARegisters *id)
318
+ */
104
-{
319
+ if (env->xregs[rn] == 0) {
105
- unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16_2);
320
+ return;
106
- return t >= 3 || (t == 0 && isar_feature_aa64_tgran16_lpa2(id));
321
+ }
107
-}
322
+
108
-
323
+ check_mops_wrong_option(env, syndrome, ra);
109
-static inline bool isar_feature_aa64_tgran4(const ARMISARegisters *id)
324
+
110
-{
325
+ /*
111
- return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 0;
326
+ * Our implementation will work fine even if we have an unaligned
112
-}
327
+ * destination address, and because we update Xn every time around
113
-
328
+ * the loop below and the return value from stepfn() may be less
114
-static inline bool isar_feature_aa64_tgran16(const ARMISARegisters *id)
329
+ * than requested, we might find toaddr is unaligned. So we don't
115
-{
330
+ * have an IMPDEF check for alignment here.
116
- return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16) >= 1;
331
+ */
117
-}
332
+
118
-
333
+ if (!mte_checks_needed(toaddr, mtedesc)) {
119
-static inline bool isar_feature_aa64_tgran64(const ARMISARegisters *id)
334
+ mtedesc = 0;
120
-{
335
+ }
121
- return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64) >= 0;
336
+
122
-}
337
+ /* Do the actual memset: we leave the last partial page to SETE */
123
-
338
+ stagesetsize = setsize & TARGET_PAGE_MASK;
124
-static inline bool isar_feature_aa64_tgran4_2(const ARMISARegisters *id)
339
+ while (stagesetsize > 0) {
125
-{
340
+ step = stepfn(env, toaddr, setsize, data, memidx, &mtedesc, ra);
126
- unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4_2);
341
+ toaddr += step;
127
- return t >= 2 || (t == 0 && isar_feature_aa64_tgran4(id));
342
+ setsize -= step;
128
-}
343
+ stagesetsize -= step;
129
-
344
+ env->xregs[rn] = -setsize;
130
-static inline bool isar_feature_aa64_tgran16_2(const ARMISARegisters *id)
345
+ if (stagesetsize > 0 && unlikely(cpu_loop_exit_requested(cs))) {
131
-{
346
+ cpu_loop_exit_restore(cs, ra);
132
- unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16_2);
347
+ }
133
- return t >= 2 || (t == 0 && isar_feature_aa64_tgran16(id));
348
+ }
134
-}
349
+}
135
-
350
+
136
-static inline bool isar_feature_aa64_tgran64_2(const ARMISARegisters *id)
351
+void HELPER(setm)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
137
-{
352
+{
138
- unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64_2);
353
+ do_setm(env, syndrome, mtedesc, set_step, false, GETPC());
139
- return t >= 2 || (t == 0 && isar_feature_aa64_tgran64(id));
354
+}
140
-}
355
+
141
-
356
+static void do_sete(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
142
-static inline bool isar_feature_aa64_fgt(const ARMISARegisters *id)
357
+ StepFn *stepfn, bool is_setg, uintptr_t ra)
143
-{
358
+{
144
- return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, FGT) != 0;
359
+ /* Epilogue: do the last partial page */
145
-}
360
+ int rd = mops_destreg(syndrome);
146
-
361
+ int rs = mops_srcreg(syndrome);
147
static inline bool isar_feature_aa64_dit(const ARMISARegisters *id)
362
+ int rn = mops_sizereg(syndrome);
148
{
363
+ uint8_t data = env->xregs[rs];
149
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, DIT) != 0;
+    uint64_t toaddr = env->xregs[rd] + env->xregs[rn];
+    uint64_t setsize = -env->xregs[rn];
+    uint32_t memidx = FIELD_EX32(mtedesc, MTEDESC, MIDX);
+    uint64_t step;
+
+    check_mops_enabled(env, ra);
+
+    /*
+     * We're allowed to NOP out "no data to copy" before the consistency
+     * checks; we choose to do so.
+     */
+    if (setsize == 0) {
+        return;
+    }
+
+    check_mops_wrong_option(env, syndrome, ra);
+
+    /*
+     * Our implementation has no address alignment requirements, but
+     * we do want to enforce the "less than a page" size requirement,
+     * so we don't need to have the "check for interrupts" here.
+     */
+    if (setsize >= TARGET_PAGE_SIZE) {
+        raise_exception_ra(env, EXCP_UDEF, syndrome,
+                           mops_mismatch_exception_target_el(env), ra);
+    }
+
+    if (!mte_checks_needed(toaddr, mtedesc)) {
+        mtedesc = 0;
+    }
+
+    /* Do the actual memset */
+    while (setsize > 0) {
+        step = stepfn(env, toaddr, setsize, data, memidx, &mtedesc, ra);
+        toaddr += step;
+        setsize -= step;
+        env->xregs[rn] = -setsize;
+    }
+}
+
+void HELPER(sete)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
+{
+    do_sete(env, syndrome, mtedesc, set_step, false, GETPC());
+}
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(STZG, aa64_mte_insn_reg, do_STG, a, true, false)
 TRANS_FEAT(ST2G, aa64_mte_insn_reg, do_STG, a, false, true)
 TRANS_FEAT(STZ2G, aa64_mte_insn_reg, do_STG, a, true, true)
 
+typedef void SetFn(TCGv_env, TCGv_i32, TCGv_i32);
+
+static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, SetFn fn)
+{
+    int memidx;
+    uint32_t syndrome, desc = 0;
+
+    /*
+     * UNPREDICTABLE cases: we choose to UNDEF, which allows
+     * us to pull this check before the CheckMOPSEnabled() test
+     * (which we do in the helper function)
+     */
+    if (a->rs == a->rn || a->rs == a->rd || a->rn == a->rd ||
+        a->rd == 31 || a->rn == 31) {
+        return false;
+    }
+
+    memidx = get_a64_user_mem_index(s, a->unpriv);
+
+    /*
+     * We pass option_a == true, matching our implementation;
+     * we pass wrong_option == false: the helper function may set that bit.
+     */
+    syndrome = syn_mop(true, false, (a->nontemp << 1) | a->unpriv,
+                       is_epilogue, false, true, a->rd, a->rs, a->rn);
+
+    if (s->mte_active[a->unpriv]) {
+        /* We may need to do MTE tag checking, so assemble the descriptor */
+        desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
+        desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
+        desc = FIELD_DP32(desc, MTEDESC, WRITE, true);
+        /* SIZEM1 and ALIGN we leave 0 (byte write) */
+    }
+    /* The helper function always needs the memidx even with MTE disabled */
+    desc = FIELD_DP32(desc, MTEDESC, MIDX, memidx);
+
+    /*
+     * The helper needs the register numbers, but since they're in
+     * the syndrome anyway, we let it extract them from there rather
+     * than passing in an extra three integer arguments.
+     */
+    fn(cpu_env, tcg_constant_i32(syndrome), tcg_constant_i32(desc));
+    return true;
+}
+
+TRANS_FEAT(SETP, aa64_mops, do_SET, a, false, gen_helper_setp)
+TRANS_FEAT(SETM, aa64_mops, do_SET, a, false, gen_helper_setm)
+TRANS_FEAT(SETE, aa64_mops, do_SET, a, true, gen_helper_sete)
+
 typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64);
 
 static bool gen_rri(DisasContext *s, arg_rri_sf *a,
--
2.34.1
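The SET* helpers above rely on page-bounded stepping, so that only the first byte of each step can fault and guest register state is always consistent at that point. As a hedged illustration, here is a hypothetical standalone sketch in plain C, not QEMU code (`PAGE_SIZE`, `page_limit` and `set_all` are invented stand-ins for QEMU's TARGET_PAGE_SIZE, its page_limit() helper, and the SETP/SETM/SETE loop):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for QEMU's TARGET_PAGE_SIZE; assumed 4K for the sketch */
#define PAGE_SIZE 4096u

/*
 * Maximum number of bytes we can set starting at addr without
 * crossing a page boundary (compare the patch's page_limit()).
 */
static uint64_t page_limit(uint64_t addr)
{
    return ((addr / PAGE_SIZE) + 1) * PAGE_SIZE - addr;
}

/*
 * Chunked memset over a flat buffer: each iteration sets at most up
 * to the next page boundary, the way the SET* helpers loop over
 * set_step().  base is the start of the "guest memory" array.
 */
static void set_all(uint8_t *base, uint64_t addr, uint64_t setsize,
                    uint8_t data)
{
    while (setsize > 0) {
        uint64_t limit = page_limit(addr);
        uint64_t step = setsize < limit ? setsize : limit;
        memset(base + addr, data, step);  /* one page-bounded chunk */
        addr += step;
        setsize -= step;
    }
}
```

A set that straddles a page boundary is split into two chunks, so a fault in the second page would be seen only after the registers have been advanced past the first chunk.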
FEAT_MOPS defines a handful of new enable bits:
 * HCRX_EL2.MSCEn, SCTLR_EL1.MSCEn, SCTLR_EL2.MSCEn:
   define whether the new insns should UNDEF or not
 * HCRX_EL2.MCE2: defines whether memops exceptions from
   EL1 should be taken to EL1 or EL2

Since we don't sanitise what bits can be written for the SCTLR
registers, we only need to handle the new bits in HCRX_EL2, and
define SCTLR_MSCEN for the new SCTLR bit value.

The precedence of "HCRX bits act as 0 if SCR_EL3.HXEn is 0" versus
"bit acts as 1 if EL2 disabled" is not clear from the register
definition text, but it is clear in the CheckMOPSEnabled()
pseudocode, so we follow that. We'll have to check whether other
bits we need to implement in future follow the same logic or not.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230912140434.1333369-3-peter.maydell@linaro.org
---
 target/arm/cpu.h    |  6 ++++++
 target/arm/helper.c | 28 +++++++++++++++++++++-------
 2 files changed, 27 insertions(+), 7 deletions(-)

Move the feature test functions that test ID_AA64ISAR* fields
together.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231024163510.2972081-5-peter.maydell@linaro.org
---
 target/arm/cpu-features.h | 70 +++++++++++++++++++--------------------
 1 file changed, 35 insertions(+), 35 deletions(-)
11
25
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
12
diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
26
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/cpu.h
14
--- a/target/arm/cpu-features.h
28
+++ b/target/arm/cpu.h
15
+++ b/target/arm/cpu-features.h
29
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
16
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_rndr(const ARMISARegisters *id)
30
#define SCTLR_EnIB (1U << 30) /* v8.3, AArch64 only */
17
return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, RNDR) != 0;
31
#define SCTLR_EnIA (1U << 31) /* v8.3, AArch64 only */
32
#define SCTLR_DSSBS_32 (1U << 31) /* v8.5, AArch32 only */
33
+#define SCTLR_MSCEN (1ULL << 33) /* FEAT_MOPS */
34
#define SCTLR_BT0 (1ULL << 35) /* v8.5-BTI */
35
#define SCTLR_BT1 (1ULL << 36) /* v8.5-BTI */
36
#define SCTLR_ITFSB (1ULL << 37) /* v8.5-MemTag */
37
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_doublelock(const ARMISARegisters *id)
38
return FIELD_SEX64(id->id_aa64dfr0, ID_AA64DFR0, DOUBLELOCK) >= 0;
39
}
18
}
40
19
20
+static inline bool isar_feature_aa64_tlbirange(const ARMISARegisters *id)
21
+{
22
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) == 2;
23
+}
24
+
25
+static inline bool isar_feature_aa64_tlbios(const ARMISARegisters *id)
26
+{
27
+ return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) != 0;
28
+}
29
+
30
static inline bool isar_feature_aa64_jscvt(const ARMISARegisters *id)
31
{
32
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, JSCVT) != 0;
33
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_pauth_qarma3(const ARMISARegisters *id)
34
return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, APA3) != 0;
35
}
36
37
-static inline bool isar_feature_aa64_tlbirange(const ARMISARegisters *id)
38
-{
39
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) == 2;
40
-}
41
-
42
-static inline bool isar_feature_aa64_tlbios(const ARMISARegisters *id)
43
-{
44
- return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, TLB) != 0;
45
-}
46
-
47
static inline bool isar_feature_aa64_sb(const ARMISARegisters *id)
48
{
49
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, SB) != 0;
50
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_bf16(const ARMISARegisters *id)
51
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, BF16) != 0;
52
}
53
54
+static inline bool isar_feature_aa64_rcpc_8_3(const ARMISARegisters *id)
55
+{
56
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, LRCPC) != 0;
57
+}
58
+
59
+static inline bool isar_feature_aa64_rcpc_8_4(const ARMISARegisters *id)
60
+{
61
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, LRCPC) >= 2;
62
+}
63
+
64
+static inline bool isar_feature_aa64_i8mm(const ARMISARegisters *id)
65
+{
66
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, I8MM) != 0;
67
+}
68
+
69
+static inline bool isar_feature_aa64_hbc(const ARMISARegisters *id)
70
+{
71
+ return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, BC) != 0;
72
+}
73
+
41
+static inline bool isar_feature_aa64_mops(const ARMISARegisters *id)
74
+static inline bool isar_feature_aa64_mops(const ARMISARegisters *id)
42
+{
75
+{
43
+ return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, MOPS);
76
+ return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, MOPS);
44
+}
77
+}
45
+
78
+
79
static inline bool isar_feature_aa64_fp_simd(const ARMISARegisters *id)
80
{
81
/* We always set the AdvSIMD and FP fields identically. */
82
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_pmuv3p5(const ARMISARegisters *id)
83
FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
84
}
85
86
-static inline bool isar_feature_aa64_rcpc_8_3(const ARMISARegisters *id)
87
-{
88
- return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, LRCPC) != 0;
89
-}
90
-
91
-static inline bool isar_feature_aa64_rcpc_8_4(const ARMISARegisters *id)
92
-{
93
- return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, LRCPC) >= 2;
94
-}
95
-
96
-static inline bool isar_feature_aa64_i8mm(const ARMISARegisters *id)
97
-{
98
- return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, I8MM) != 0;
99
-}
100
-
101
-static inline bool isar_feature_aa64_hbc(const ARMISARegisters *id)
102
-{
103
- return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, BC) != 0;
104
-}
105
-
106
static inline bool isar_feature_aa64_dit(const ARMISARegisters *id)
107
{
108
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, DIT) != 0;
109
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_doublelock(const ARMISARegisters *id)
110
return FIELD_SEX64(id->id_aa64dfr0, ID_AA64DFR0, DOUBLELOCK) >= 0;
111
}
112
113
-static inline bool isar_feature_aa64_mops(const ARMISARegisters *id)
114
-{
115
- return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, MOPS);
116
-}
117
-
46
/*
118
/*
47
* Feature tests for "does this exist in either 32-bit or 64-bit?"
119
* Feature tests for "does this exist in either 32-bit or 64-bit?"
48
*/
120
*/
49
diff --git a/target/arm/helper.c b/target/arm/helper.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/helper.c
52
+++ b/target/arm/helper.c
53
@@ -XXX,XX +XXX,XX @@ static void hcrx_write(CPUARMState *env, const ARMCPRegInfo *ri,
54
{
55
uint64_t valid_mask = 0;
56
57
- /* No features adding bits to HCRX are implemented. */
58
+ /* FEAT_MOPS adds MSCEn and MCE2 */
59
+ if (cpu_isar_feature(aa64_mops, env_archcpu(env))) {
60
+ valid_mask |= HCRX_MSCEN | HCRX_MCE2;
61
+ }
62
63
/* Clear RES0 bits. */
64
env->cp15.hcrx_el2 = value & valid_mask;
65
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcrx_el2_eff(CPUARMState *env)
66
{
67
/*
68
* The bits in this register behave as 0 for all purposes other than
69
- * direct reads of the register if:
70
- * - EL2 is not enabled in the current security state,
71
- * - SCR_EL3.HXEn is 0.
72
+ * direct reads of the register if SCR_EL3.HXEn is 0.
73
+ * If EL2 is not enabled in the current security state, then the
74
+ * bit may behave as if 0, or as if 1, depending on the bit.
75
+ * For the moment, we treat the EL2-disabled case as taking
76
+ * priority over the HXEn-disabled case. This is true for the only
77
+ * bit for a feature which we implement where the answer is different
78
+ * for the two cases (MSCEn for FEAT_MOPS).
79
+ * This may need to be revisited for future bits.
80
*/
81
- if (!arm_is_el2_enabled(env)
82
- || (arm_feature(env, ARM_FEATURE_EL3)
83
- && !(env->cp15.scr_el3 & SCR_HXEN))) {
84
+ if (!arm_is_el2_enabled(env)) {
85
+ uint64_t hcrx = 0;
86
+ if (cpu_isar_feature(aa64_mops, env_archcpu(env))) {
87
+ /* MSCEn behaves as 1 if EL2 is not enabled */
88
+ hcrx |= HCRX_MSCEN;
89
+ }
90
+ return hcrx;
91
+ }
92
+ if (arm_feature(env, ARM_FEATURE_EL3) && !(env->cp15.scr_el3 & SCR_HXEN)) {
93
return 0;
94
}
95
return env->cp15.hcrx_el2;
96
--
121
--
97
2.34.1
122
2.34.1
123
124
diff view generated by jsdifflib
1
FEAT_HBC (Hinted conditional branches) provides a new instruction
1
Move all the ID_AA64PFR* feature test functions together.
2
BC.cond, which behaves exactly like the existing B.cond except
3
that it provides a hint to the branch predictor about the
4
likely behaviour of the branch.
5
6
Since QEMU does not implement branch prediction, we can treat
7
this identically to B.cond.
8
2
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20231024163510.2972081-6-peter.maydell@linaro.org
12
---
7
---
13
docs/system/arm/emulation.rst | 1 +
8
target/arm/cpu-features.h | 86 +++++++++++++++++++--------------------
14
target/arm/cpu.h | 5 +++++
9
1 file changed, 43 insertions(+), 43 deletions(-)
15
target/arm/tcg/a64.decode | 3 ++-
16
linux-user/elfload.c | 1 +
17
target/arm/tcg/cpu64.c | 4 ++++
18
target/arm/tcg/translate-a64.c | 4 ++++
19
6 files changed, 17 insertions(+), 1 deletion(-)
20
10
21
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
11
diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
22
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
23
--- a/docs/system/arm/emulation.rst
13
--- a/target/arm/cpu-features.h
24
+++ b/docs/system/arm/emulation.rst
14
+++ b/target/arm/cpu-features.h
25
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
15
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_rme(const ARMISARegisters *id)
26
- FEAT_FlagM2 (Enhancements to flag manipulation instructions)
16
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RME) != 0;
27
- FEAT_GTG (Guest translation granule size)
28
- FEAT_HAFDBS (Hardware management of the access flag and dirty bit state)
29
+- FEAT_HBC (Hinted conditional branches)
30
- FEAT_HCX (Support for the HCRX_EL2 register)
31
- FEAT_HPDS (Hierarchical permission disables)
32
- FEAT_HPDS2 (Translation table page-based hardware attributes)
33
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/cpu.h
36
+++ b/target/arm/cpu.h
37
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_i8mm(const ARMISARegisters *id)
38
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, I8MM) != 0;
39
}
17
}
40
18
41
+static inline bool isar_feature_aa64_hbc(const ARMISARegisters *id)
19
+static inline bool isar_feature_aa64_dit(const ARMISARegisters *id)
42
+{
20
+{
43
+ return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, BC) != 0;
21
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, DIT) != 0;
22
+}
23
+
24
+static inline bool isar_feature_aa64_scxtnum(const ARMISARegisters *id)
25
+{
26
+ int key = FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, CSV2);
27
+ if (key >= 2) {
28
+ return true; /* FEAT_CSV2_2 */
29
+ }
30
+ if (key == 1) {
31
+ key = FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, CSV2_FRAC);
32
+ return key >= 2; /* FEAT_CSV2_1p2 */
33
+ }
34
+ return false;
35
+}
36
+
37
+static inline bool isar_feature_aa64_ssbs(const ARMISARegisters *id)
38
+{
39
+ return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SSBS) != 0;
40
+}
41
+
42
+static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
43
+{
44
+ return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
45
+}
46
+
47
+static inline bool isar_feature_aa64_mte_insn_reg(const ARMISARegisters *id)
48
+{
49
+ return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, MTE) != 0;
50
+}
51
+
52
+static inline bool isar_feature_aa64_mte(const ARMISARegisters *id)
53
+{
54
+ return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, MTE) >= 2;
55
+}
56
+
57
+static inline bool isar_feature_aa64_sme(const ARMISARegisters *id)
58
+{
59
+ return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SME) != 0;
44
+}
60
+}
45
+
61
+
46
static inline bool isar_feature_aa64_tgran4_lpa2(const ARMISARegisters *id)
62
static inline bool isar_feature_aa64_tgran4_lpa2(const ARMISARegisters *id)
47
{
63
{
48
return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 1;
64
return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 1;
49
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
65
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_e0pd(const ARMISARegisters *id)
50
index XXXXXXX..XXXXXXX 100644
66
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, E0PD) != 0;
51
--- a/target/arm/tcg/a64.decode
52
+++ b/target/arm/tcg/a64.decode
53
@@ -XXX,XX +XXX,XX @@ CBZ sf:1 011010 nz:1 ................... rt:5 &cbz imm=%imm19
54
55
TBZ . 011011 nz:1 ..... .............. rt:5 &tbz imm=%imm14 bitpos=%imm31_19
56
57
-B_cond 0101010 0 ................... 0 cond:4 imm=%imm19
58
+# B.cond and BC.cond
59
+B_cond 0101010 0 ................... c:1 cond:4 imm=%imm19
60
61
BR 1101011 0000 11111 000000 rn:5 00000 &r
62
BLR 1101011 0001 11111 000000 rn:5 00000 &r
63
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/linux-user/elfload.c
66
+++ b/linux-user/elfload.c
67
@@ -XXX,XX +XXX,XX @@ uint32_t get_elf_hwcap2(void)
68
GET_FEATURE_ID(aa64_sme_f64f64, ARM_HWCAP2_A64_SME_F64F64);
69
GET_FEATURE_ID(aa64_sme_i16i64, ARM_HWCAP2_A64_SME_I16I64);
70
GET_FEATURE_ID(aa64_sme_fa64, ARM_HWCAP2_A64_SME_FA64);
71
+ GET_FEATURE_ID(aa64_hbc, ARM_HWCAP2_A64_HBC);
72
73
return hwcaps;
74
}
67
}
75
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
68
76
index XXXXXXX..XXXXXXX 100644
69
-static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
77
--- a/target/arm/tcg/cpu64.c
70
-{
78
+++ b/target/arm/tcg/cpu64.c
71
- return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
79
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
72
-}
80
t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1); /* FEAT_I8MM */
73
-
81
cpu->isar.id_aa64isar1 = t;
74
-static inline bool isar_feature_aa64_mte_insn_reg(const ARMISARegisters *id)
82
75
-{
83
+ t = cpu->isar.id_aa64isar2;
76
- return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, MTE) != 0;
84
+ t = FIELD_DP64(t, ID_AA64ISAR2, BC, 1); /* FEAT_HBC */
77
-}
85
+ cpu->isar.id_aa64isar2 = t;
78
-
86
+
79
-static inline bool isar_feature_aa64_mte(const ARMISARegisters *id)
87
t = cpu->isar.id_aa64pfr0;
80
-{
88
t = FIELD_DP64(t, ID_AA64PFR0, FP, 1); /* FEAT_FP16 */
81
- return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, MTE) >= 2;
89
t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1); /* FEAT_FP16 */
82
-}
90
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
83
-
91
index XXXXXXX..XXXXXXX 100644
84
-static inline bool isar_feature_aa64_sme(const ARMISARegisters *id)
92
--- a/target/arm/tcg/translate-a64.c
85
-{
93
+++ b/target/arm/tcg/translate-a64.c
86
- return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SME) != 0;
94
@@ -XXX,XX +XXX,XX @@ static bool trans_TBZ(DisasContext *s, arg_tbz *a)
87
-}
95
88
-
96
static bool trans_B_cond(DisasContext *s, arg_B_cond *a)
89
static inline bool isar_feature_aa64_pmuv3p1(const ARMISARegisters *id)
97
{
90
{
98
+ /* BC.cond is only present with FEAT_HBC */
91
return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) >= 4 &&
99
+ if (a->c && !dc_isar_feature(aa64_hbc, s)) {
92
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_pmuv3p5(const ARMISARegisters *id)
100
+ return false;
93
FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
101
+ }
94
}
102
reset_btype(s);
95
103
if (a->cond < 0x0e) {
96
-static inline bool isar_feature_aa64_dit(const ARMISARegisters *id)
104
/* genuinely conditional branches */
97
-{
98
- return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, DIT) != 0;
99
-}
100
-
101
-static inline bool isar_feature_aa64_scxtnum(const ARMISARegisters *id)
102
-{
103
- int key = FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, CSV2);
104
- if (key >= 2) {
105
- return true; /* FEAT_CSV2_2 */
106
- }
107
- if (key == 1) {
108
- key = FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, CSV2_FRAC);
109
- return key >= 2; /* FEAT_CSV2_1p2 */
110
- }
111
- return false;
112
-}
113
-
114
-static inline bool isar_feature_aa64_ssbs(const ARMISARegisters *id)
115
-{
116
- return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SSBS) != 0;
117
-}
118
-
119
static inline bool isar_feature_aa64_debugv8p2(const ARMISARegisters *id)
120
{
121
return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, DEBUGVER) >= 8;
105
--
122
--
106
2.34.1
123
2.34.1
107
124
108
125
1
Some of the names we use for CPU features in linux-user's dummy
1
Move all the ID_AA64DFR* feature test functions together.
2
/proc/cpuinfo don't match the strings in the real kernel in
3
arch/arm64/kernel/cpuinfo.c. Specifically, the SME related
4
features have an underscore in the HWCAP_FOO define name,
5
but (like the SVE ones) they do not have an underscore in the
6
string in cpuinfo. Correct the errors.
7
2
8
Fixes: a55b9e7226708 ("linux-user: Emulate /proc/cpuinfo on aarch64 and arm")
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20231024163510.2972081-7-peter.maydell@linaro.org
12
---
7
---
13
linux-user/elfload.c | 14 +++++++-------
8
target/arm/cpu-features.h | 10 +++++-----
14
1 file changed, 7 insertions(+), 7 deletions(-)
9
1 file changed, 5 insertions(+), 5 deletions(-)
15
10
16
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
11
diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
17
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
18
--- a/linux-user/elfload.c
13
--- a/target/arm/cpu-features.h
19
+++ b/linux-user/elfload.c
14
+++ b/target/arm/cpu-features.h
20
@@ -XXX,XX +XXX,XX @@ const char *elf_hwcap2_str(uint32_t bit)
15
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_debugv8p2(const ARMISARegisters *id)
21
[__builtin_ctz(ARM_HWCAP2_A64_RPRES )] = "rpres",
16
return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, DEBUGVER) >= 8;
22
[__builtin_ctz(ARM_HWCAP2_A64_MTE3 )] = "mte3",
17
}
23
[__builtin_ctz(ARM_HWCAP2_A64_SME )] = "sme",
18
24
- [__builtin_ctz(ARM_HWCAP2_A64_SME_I16I64 )] = "sme_i16i64",
19
+static inline bool isar_feature_aa64_doublelock(const ARMISARegisters *id)
25
- [__builtin_ctz(ARM_HWCAP2_A64_SME_F64F64 )] = "sme_f64f64",
20
+{
26
- [__builtin_ctz(ARM_HWCAP2_A64_SME_I8I32 )] = "sme_i8i32",
21
+ return FIELD_SEX64(id->id_aa64dfr0, ID_AA64DFR0, DOUBLELOCK) >= 0;
27
- [__builtin_ctz(ARM_HWCAP2_A64_SME_F16F32 )] = "sme_f16f32",
22
+}
28
- [__builtin_ctz(ARM_HWCAP2_A64_SME_B16F32 )] = "sme_b16f32",
23
+
29
- [__builtin_ctz(ARM_HWCAP2_A64_SME_F32F32 )] = "sme_f32f32",
24
static inline bool isar_feature_aa64_sve2(const ARMISARegisters *id)
30
- [__builtin_ctz(ARM_HWCAP2_A64_SME_FA64 )] = "sme_fa64",
25
{
31
+ [__builtin_ctz(ARM_HWCAP2_A64_SME_I16I64 )] = "smei16i64",
26
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SVEVER) != 0;
32
+ [__builtin_ctz(ARM_HWCAP2_A64_SME_F64F64 )] = "smef64f64",
27
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sme_fa64(const ARMISARegisters *id)
33
+ [__builtin_ctz(ARM_HWCAP2_A64_SME_I8I32 )] = "smei8i32",
28
return FIELD_EX64(id->id_aa64smfr0, ID_AA64SMFR0, FA64);
34
+ [__builtin_ctz(ARM_HWCAP2_A64_SME_F16F32 )] = "smef16f32",
29
}
35
+ [__builtin_ctz(ARM_HWCAP2_A64_SME_B16F32 )] = "smeb16f32",
30
36
+ [__builtin_ctz(ARM_HWCAP2_A64_SME_F32F32 )] = "smef32f32",
31
-static inline bool isar_feature_aa64_doublelock(const ARMISARegisters *id)
37
+ [__builtin_ctz(ARM_HWCAP2_A64_SME_FA64 )] = "smefa64",
32
-{
38
};
33
- return FIELD_SEX64(id->id_aa64dfr0, ID_AA64DFR0, DOUBLELOCK) >= 0;
39
34
-}
40
return bit < ARRAY_SIZE(hwcap_str) ? hwcap_str[bit] : NULL;
35
-
36
/*
37
* Feature tests for "does this exist in either 32-bit or 64-bit?"
38
*/
41
--
39
--
42
2.34.1
40
2.34.1
43
41
44
42
1
In every place that we call the get_a64_user_mem_index() function
1
In commit 442c9d682c94fc2 when we converted the ERET, ERETAA, ERETAB
2
we do it like this:
2
instructions to decodetree, the conversion accidentally lost the
3
memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
3
correct setting of the syndrome register when taking a trap because
4
Refactor so the caller passes in the bool that says whether they
4
of the FEAT_FGT HFGITR_EL1.ERET bit. Instead of reporting a correct
5
want the 'unpriv' or 'normal' mem_index rather than having to
5
full syndrome value with the EC and IL bits, we only reported the low
6
do the ?: themselves.
6
two bits of the syndrome, because the call to syn_erettrap() got
7
dropped.
7
8
9
Fix the syndrome values for these traps by reinstating the
10
syn_erettrap() calls.
11
12
Fixes: 442c9d682c94fc2 ("target/arm: Convert ERET, ERETAA, ERETAB to decodetree")
13
Cc: qemu-stable@nongnu.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20230912140434.1333369-4-peter.maydell@linaro.org
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20231024172438.2990945-1-peter.maydell@linaro.org
10
---
17
---
11
target/arm/tcg/translate-a64.c | 20 ++++++++++++++------
18
target/arm/tcg/translate-a64.c | 4 ++--
12
1 file changed, 14 insertions(+), 6 deletions(-)
19
1 file changed, 2 insertions(+), 2 deletions(-)
13
20
14
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
21
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
15
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/translate-a64.c
23
--- a/target/arm/tcg/translate-a64.c
17
+++ b/target/arm/tcg/translate-a64.c
24
+++ b/target/arm/tcg/translate-a64.c
18
@@ -XXX,XX +XXX,XX @@ void a64_translate_init(void)
25
@@ -XXX,XX +XXX,XX @@ static bool trans_ERET(DisasContext *s, arg_ERET *a)
19
}
26
return false;
20
21
/*
22
- * Return the core mmu_idx to use for A64 "unprivileged load/store" insns
23
+ * Return the core mmu_idx to use for A64 load/store insns which
24
+ * have an "unprivileged load/store" variant. Those insns access
25
+ * EL0 if executed from an EL which has control over EL0 (usually
26
+ * EL1) but behave like normal loads and stores if executed from
27
+ * elsewhere (eg EL3).
28
+ *
29
+ * @unpriv : true for the unprivileged encoding; false for the
30
+ * normal encoding (in which case we will return the same
31
+ * thing as get_mem_index()).
32
*/
33
-static int get_a64_user_mem_index(DisasContext *s)
34
+static int get_a64_user_mem_index(DisasContext *s, bool unpriv)
35
{
36
/*
37
* If AccType_UNPRIV is not used, the insn uses AccType_NORMAL,
38
@@ -XXX,XX +XXX,XX @@ static int get_a64_user_mem_index(DisasContext *s)
39
*/
40
ARMMMUIdx useridx = s->mmu_idx;
41
42
- if (s->unpriv) {
43
+ if (unpriv && s->unpriv) {
44
/*
45
* We have pre-computed the condition for AccType_UNPRIV.
46
* Therefore we should never get here with a mmu_idx for
47
@@ -XXX,XX +XXX,XX @@ static void op_addr_ldst_imm_pre(DisasContext *s, arg_ldst_imm *a,
48
if (!a->p) {
49
tcg_gen_addi_i64(*dirty_addr, *dirty_addr, offset);
50
}
27
}
51
- memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
28
if (s->fgt_eret) {
52
+ memidx = get_a64_user_mem_index(s, a->unpriv);
29
- gen_exception_insn_el(s, 0, EXCP_UDEF, 0, 2);
53
*clean_addr = gen_mte_check1_mmuidx(s, *dirty_addr, is_store,
30
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syn_erettrap(0), 2);
54
a->w || a->rn != 31,
31
return true;
55
mop, a->unpriv, memidx);
32
}
56
@@ -XXX,XX +XXX,XX @@ static bool trans_STR_i(DisasContext *s, arg_ldst_imm *a)
33
dst = tcg_temp_new_i64();
57
{
34
@@ -XXX,XX +XXX,XX @@ static bool trans_ERETA(DisasContext *s, arg_reta *a)
58
bool iss_sf, iss_valid = !a->w;
35
}
59
TCGv_i64 clean_addr, dirty_addr, tcg_rt;
36
/* The FGT trap takes precedence over an auth trap. */
60
- int memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
37
if (s->fgt_eret) {
61
+ int memidx = get_a64_user_mem_index(s, a->unpriv);
38
- gen_exception_insn_el(s, 0, EXCP_UDEF, a->m ? 3 : 2, 2);
62
MemOp mop = finalize_memop(s, a->sz + a->sign * MO_SIGN);
39
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syn_erettrap(a->m ? 3 : 2), 2);
63
40
return true;
64
op_addr_ldst_imm_pre(s, a, &clean_addr, &dirty_addr, a->imm, true, mop);
41
}
65
@@ -XXX,XX +XXX,XX @@ static bool trans_LDR_i(DisasContext *s, arg_ldst_imm *a)
42
dst = tcg_temp_new_i64();
66
{
67
bool iss_sf, iss_valid = !a->w;
68
TCGv_i64 clean_addr, dirty_addr, tcg_rt;
69
- int memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
70
+ int memidx = get_a64_user_mem_index(s, a->unpriv);
71
MemOp mop = finalize_memop(s, a->sz + a->sign * MO_SIGN);
72
73
op_addr_ldst_imm_pre(s, a, &clean_addr, &dirty_addr, a->imm, false, mop);
74
--
43
--
75
2.34.1
44
2.34.1
New patch
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
2
3
"hw/arm/boot.h" is only required in the source file.
4
5
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Luc Michel <luc.michel@amd.com>
8
Message-id: 20231025065316.56817-2-philmd@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/hw/arm/allwinner-a10.h | 1 -
12
hw/arm/cubieboard.c | 1 +
13
2 files changed, 1 insertion(+), 1 deletion(-)
14
15
diff --git a/include/hw/arm/allwinner-a10.h b/include/hw/arm/allwinner-a10.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/arm/allwinner-a10.h
18
+++ b/include/hw/arm/allwinner-a10.h
19
@@ -XXX,XX +XXX,XX @@
20
#ifndef HW_ARM_ALLWINNER_A10_H
21
#define HW_ARM_ALLWINNER_A10_H
22
23
-#include "hw/arm/boot.h"
24
#include "hw/timer/allwinner-a10-pit.h"
25
#include "hw/intc/allwinner-a10-pic.h"
26
#include "hw/net/allwinner_emac.h"
27
diff --git a/hw/arm/cubieboard.c b/hw/arm/cubieboard.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/hw/arm/cubieboard.c
30
+++ b/hw/arm/cubieboard.c
31
@@ -XXX,XX +XXX,XX @@
32
#include "hw/boards.h"
33
#include "hw/qdev-properties.h"
34
#include "hw/arm/allwinner-a10.h"
35
+#include "hw/arm/boot.h"
36
#include "hw/i2c/i2c.h"
37
38
static struct arm_boot_info cubieboard_binfo = {
39
--
40
2.34.1
41
42
New patch
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
2
3
"hw/arm/boot.h" is only required in the source file.
4
5
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Luc Michel <luc.michel@amd.com>
8
Message-id: 20231025065316.56817-3-philmd@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/hw/arm/allwinner-h3.h | 1 -
12
hw/arm/orangepi.c | 1 +
13
2 files changed, 1 insertion(+), 1 deletion(-)
14
15
diff --git a/include/hw/arm/allwinner-h3.h b/include/hw/arm/allwinner-h3.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/arm/allwinner-h3.h
18
+++ b/include/hw/arm/allwinner-h3.h
19
@@ -XXX,XX +XXX,XX @@
20
#define HW_ARM_ALLWINNER_H3_H
21
22
#include "qom/object.h"
23
-#include "hw/arm/boot.h"
24
#include "hw/timer/allwinner-a10-pit.h"
25
#include "hw/intc/arm_gic.h"
26
#include "hw/misc/allwinner-h3-ccu.h"
27
diff --git a/hw/arm/orangepi.c b/hw/arm/orangepi.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/hw/arm/orangepi.c
30
+++ b/hw/arm/orangepi.c
31
@@ -XXX,XX +XXX,XX @@
32
#include "hw/boards.h"
33
#include "hw/qdev-properties.h"
34
#include "hw/arm/allwinner-h3.h"
35
+#include "hw/arm/boot.h"
36
37
static struct arm_boot_info orangepi_binfo;
38
39
--
40
2.34.1
41
42
New patch
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
2
3
"hw/arm/boot.h" is only required in the source file.
4
5
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Luc Michel <luc.michel@amd.com>
8
Message-id: 20231025065316.56817-4-philmd@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/hw/arm/allwinner-r40.h | 1 -
12
hw/arm/bananapi_m2u.c | 1 +
13
2 files changed, 1 insertion(+), 1 deletion(-)
14
15
diff --git a/include/hw/arm/allwinner-r40.h b/include/hw/arm/allwinner-r40.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/arm/allwinner-r40.h
18
+++ b/include/hw/arm/allwinner-r40.h
19
@@ -XXX,XX +XXX,XX @@
20
#define HW_ARM_ALLWINNER_R40_H
21
22
#include "qom/object.h"
23
-#include "hw/arm/boot.h"
24
#include "hw/timer/allwinner-a10-pit.h"
25
#include "hw/intc/arm_gic.h"
26
#include "hw/sd/allwinner-sdhost.h"
27
diff --git a/hw/arm/bananapi_m2u.c b/hw/arm/bananapi_m2u.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/hw/arm/bananapi_m2u.c
30
+++ b/hw/arm/bananapi_m2u.c
31
@@ -XXX,XX +XXX,XX @@
32
#include "hw/i2c/i2c.h"
33
#include "hw/qdev-properties.h"
34
#include "hw/arm/allwinner-r40.h"
35
+#include "hw/arm/boot.h"
36
37
static struct arm_boot_info bpim2u_binfo;
38
39
--
40
2.34.1
41
42
New patch
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
2
3
"hw/arm/boot.h" is only required in the source file.
4
5
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Luc Michel <luc.michel@amd.com>
8
Message-id: 20231025065316.56817-5-philmd@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/hw/arm/fsl-imx25.h | 1 -
12
hw/arm/imx25_pdk.c | 1 +
13
2 files changed, 1 insertion(+), 1 deletion(-)
14
15
diff --git a/include/hw/arm/fsl-imx25.h b/include/hw/arm/fsl-imx25.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/arm/fsl-imx25.h
18
+++ b/include/hw/arm/fsl-imx25.h
19
@@ -XXX,XX +XXX,XX @@
20
#ifndef FSL_IMX25_H
21
#define FSL_IMX25_H
22
23
-#include "hw/arm/boot.h"
24
#include "hw/intc/imx_avic.h"
25
#include "hw/misc/imx25_ccm.h"
26
#include "hw/char/imx_serial.h"
27
diff --git a/hw/arm/imx25_pdk.c b/hw/arm/imx25_pdk.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/hw/arm/imx25_pdk.c
30
+++ b/hw/arm/imx25_pdk.c
31
@@ -XXX,XX +XXX,XX @@
32
#include "qapi/error.h"
33
#include "hw/qdev-properties.h"
34
#include "hw/arm/fsl-imx25.h"
35
+#include "hw/arm/boot.h"
36
#include "hw/boards.h"
37
#include "qemu/error-report.h"
38
#include "sysemu/qtest.h"
39
--
40
2.34.1
41
42
New patch
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
2
3
"hw/arm/boot.h" is only required on the source file.
4
5
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Luc Michel <luc.michel@amd.com>
8
Message-id: 20231025065316.56817-6-philmd@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/hw/arm/fsl-imx31.h | 1 -
12
hw/arm/kzm.c | 1 +
13
2 files changed, 1 insertion(+), 1 deletion(-)
14
15
diff --git a/include/hw/arm/fsl-imx31.h b/include/hw/arm/fsl-imx31.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/arm/fsl-imx31.h
18
+++ b/include/hw/arm/fsl-imx31.h
19
@@ -XXX,XX +XXX,XX @@
20
#ifndef FSL_IMX31_H
21
#define FSL_IMX31_H
22
23
-#include "hw/arm/boot.h"
24
#include "hw/intc/imx_avic.h"
25
#include "hw/misc/imx31_ccm.h"
26
#include "hw/char/imx_serial.h"
27
diff --git a/hw/arm/kzm.c b/hw/arm/kzm.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/hw/arm/kzm.c
30
+++ b/hw/arm/kzm.c
31
@@ -XXX,XX +XXX,XX @@
32
#include "qemu/osdep.h"
33
#include "qapi/error.h"
34
#include "hw/arm/fsl-imx31.h"
35
+#include "hw/arm/boot.h"
36
#include "hw/boards.h"
37
#include "qemu/error-report.h"
38
#include "exec/address-spaces.h"
39
--
40
2.34.1
41
42
New patch
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
2
3
"hw/arm/boot.h" is only required on the source file.
4
5
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Luc Michel <luc.michel@amd.com>
8
Message-id: 20231025065316.56817-7-philmd@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/hw/arm/fsl-imx6.h | 1 -
12
hw/arm/sabrelite.c | 1 +
13
2 files changed, 1 insertion(+), 1 deletion(-)
14
15
diff --git a/include/hw/arm/fsl-imx6.h b/include/hw/arm/fsl-imx6.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/arm/fsl-imx6.h
18
+++ b/include/hw/arm/fsl-imx6.h
19
@@ -XXX,XX +XXX,XX @@
20
#ifndef FSL_IMX6_H
21
#define FSL_IMX6_H
22
23
-#include "hw/arm/boot.h"
24
#include "hw/cpu/a9mpcore.h"
25
#include "hw/misc/imx6_ccm.h"
26
#include "hw/misc/imx6_src.h"
27
diff --git a/hw/arm/sabrelite.c b/hw/arm/sabrelite.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/hw/arm/sabrelite.c
30
+++ b/hw/arm/sabrelite.c
31
@@ -XXX,XX +XXX,XX @@
32
#include "qemu/osdep.h"
33
#include "qapi/error.h"
34
#include "hw/arm/fsl-imx6.h"
35
+#include "hw/arm/boot.h"
36
#include "hw/boards.h"
37
#include "hw/qdev-properties.h"
38
#include "qemu/error-report.h"
39
--
40
2.34.1
41
42
New patch
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
1
2
3
"hw/arm/boot.h" is only required in the source file.
4
5
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Luc Michel <luc.michel@amd.com>
8
Message-id: 20231025065316.56817-8-philmd@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/hw/arm/fsl-imx6ul.h | 1 -
12
hw/arm/mcimx6ul-evk.c | 1 +
13
2 files changed, 1 insertion(+), 1 deletion(-)
14
15
diff --git a/include/hw/arm/fsl-imx6ul.h b/include/hw/arm/fsl-imx6ul.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/arm/fsl-imx6ul.h
18
+++ b/include/hw/arm/fsl-imx6ul.h
19
@@ -XXX,XX +XXX,XX @@
20
#ifndef FSL_IMX6UL_H
21
#define FSL_IMX6UL_H
22
23
-#include "hw/arm/boot.h"
24
#include "hw/cpu/a15mpcore.h"
25
#include "hw/misc/imx6ul_ccm.h"
26
#include "hw/misc/imx6_src.h"
27
diff --git a/hw/arm/mcimx6ul-evk.c b/hw/arm/mcimx6ul-evk.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/hw/arm/mcimx6ul-evk.c
30
+++ b/hw/arm/mcimx6ul-evk.c
31
@@ -XXX,XX +XXX,XX @@
32
#include "qemu/osdep.h"
33
#include "qapi/error.h"
34
#include "hw/arm/fsl-imx6ul.h"
35
+#include "hw/arm/boot.h"
36
#include "hw/boards.h"
37
#include "hw/qdev-properties.h"
38
#include "qemu/error-report.h"
39
--
40
2.34.1
41
42
From: Philippe Mathieu-Daudé <philmd@linaro.org>

"hw/arm/boot.h" is only required on the source file.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Luc Michel <luc.michel@amd.com>
Message-id: 20231025065316.56817-9-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/fsl-imx7.h | 1 -
hw/arm/mcimx7d-sabre.c | 1 +
2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/fsl-imx7.h
+++ b/include/hw/arm/fsl-imx7.h
@@ -XXX,XX +XXX,XX @@
#ifndef FSL_IMX7_H
#define FSL_IMX7_H

-#include "hw/arm/boot.h"
#include "hw/cpu/a15mpcore.h"
#include "hw/intc/imx_gpcv2.h"
#include "hw/misc/imx7_ccm.h"
diff --git a/hw/arm/mcimx7d-sabre.c b/hw/arm/mcimx7d-sabre.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/mcimx7d-sabre.c
+++ b/hw/arm/mcimx7d-sabre.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "hw/arm/fsl-imx7.h"
+#include "hw/arm/boot.h"
#include "hw/boards.h"
#include "hw/qdev-properties.h"
#include "qemu/error-report.h"
--
2.34.1
From: Philippe Mathieu-Daudé <philmd@linaro.org>

"hw/arm/boot.h" is only required on the source file.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Luc Michel <luc.michel@amd.com>
Message-id: 20231025065316.56817-10-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/xlnx-versal.h | 1 -
hw/arm/xlnx-versal-virt.c | 1 +
2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-versal.h
+++ b/include/hw/arm/xlnx-versal.h
@@ -XXX,XX +XXX,XX @@
#define XLNX_VERSAL_H

#include "hw/sysbus.h"
-#include "hw/arm/boot.h"
#include "hw/cpu/cluster.h"
#include "hw/or-irq.h"
#include "hw/sd/sdhci.h"
diff --git a/hw/arm/xlnx-versal-virt.c b/hw/arm/xlnx-versal-virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal-virt.c
+++ b/hw/arm/xlnx-versal-virt.c
@@ -XXX,XX +XXX,XX @@
#include "cpu.h"
#include "hw/qdev-properties.h"
#include "hw/arm/xlnx-versal.h"
+#include "hw/arm/boot.h"
#include "qom/object.h"

#define TYPE_XLNX_VERSAL_VIRT_MACHINE MACHINE_TYPE_NAME("xlnx-versal-virt")
--
2.34.1
From: Philippe Mathieu-Daudé <philmd@linaro.org>

"hw/arm/boot.h" is only required on the source file.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Luc Michel <luc.michel@amd.com>
Message-id: 20231025065316.56817-11-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/xlnx-zynqmp.h | 1 -
hw/arm/xlnx-zcu102.c | 1 +
2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-zynqmp.h
+++ b/include/hw/arm/xlnx-zynqmp.h
@@ -XXX,XX +XXX,XX @@
#ifndef XLNX_ZYNQMP_H
#define XLNX_ZYNQMP_H

-#include "hw/arm/boot.h"
#include "hw/intc/arm_gic.h"
#include "hw/net/cadence_gem.h"
#include "hw/char/cadence_uart.h"
diff --git a/hw/arm/xlnx-zcu102.c b/hw/arm/xlnx-zcu102.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-zcu102.c
+++ b/hw/arm/xlnx-zcu102.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "hw/arm/xlnx-zynqmp.h"
+#include "hw/arm/boot.h"
#include "hw/boards.h"
#include "qemu/error-report.h"
#include "qemu/log.h"
--
2.34.1
From: Philippe Mathieu-Daudé <philmd@linaro.org>

sysbus_mmio_map() and sysbus_connect_irq() should not be
called on an unrealized device.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231020130331.50048-2-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/sd/pxa2xx_mmci.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/sd/pxa2xx_mmci.c b/hw/sd/pxa2xx_mmci.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sd/pxa2xx_mmci.c
+++ b/hw/sd/pxa2xx_mmci.c
@@ -XXX,XX +XXX,XX @@ PXA2xxMMCIState *pxa2xx_mmci_init(MemoryRegion *sysmem,

dev = qdev_new(TYPE_PXA2XX_MMCI);
sbd = SYS_BUS_DEVICE(dev);
+ sysbus_realize_and_unref(sbd, &error_fatal);
sysbus_mmio_map(sbd, 0, base);
sysbus_connect_irq(sbd, 0, irq);
qdev_connect_gpio_out_named(dev, "rx-dma", 0, rx_dma);
qdev_connect_gpio_out_named(dev, "tx-dma", 0, tx_dma);
- sysbus_realize_and_unref(sbd, &error_fatal);

return PXA2XX_MMCI(dev);
}
--
2.34.1
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231020130331.50048-3-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/sd/pxa2xx_mmci.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/hw/sd/pxa2xx_mmci.c b/hw/sd/pxa2xx_mmci.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sd/pxa2xx_mmci.c
+++ b/hw/sd/pxa2xx_mmci.c
@@ -XXX,XX +XXX,XX @@ PXA2xxMMCIState *pxa2xx_mmci_init(MemoryRegion *sysmem,
qemu_irq irq, qemu_irq rx_dma, qemu_irq tx_dma)
{
DeviceState *dev;
- SysBusDevice *sbd;

- dev = qdev_new(TYPE_PXA2XX_MMCI);
- sbd = SYS_BUS_DEVICE(dev);
- sysbus_realize_and_unref(sbd, &error_fatal);
- sysbus_mmio_map(sbd, 0, base);
- sysbus_connect_irq(sbd, 0, irq);
+ dev = sysbus_create_simple(TYPE_PXA2XX_MMCI, base, irq);
qdev_connect_gpio_out_named(dev, "rx-dma", 0, rx_dma);
qdev_connect_gpio_out_named(dev, "tx-dma", 0, tx_dma);

--
2.34.1
From: Philippe Mathieu-Daudé <philmd@linaro.org>

sysbus_mmio_map() should not be called on an unrealized device.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231020130331.50048-4-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/pcmcia/pxa2xx.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/hw/pcmcia/pxa2xx.c b/hw/pcmcia/pxa2xx.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/pcmcia/pxa2xx.c
+++ b/hw/pcmcia/pxa2xx.c
@@ -XXX,XX +XXX,XX @@ PXA2xxPCMCIAState *pxa2xx_pcmcia_init(MemoryRegion *sysmem,
hwaddr base)
{
DeviceState *dev;
- PXA2xxPCMCIAState *s;

dev = qdev_new(TYPE_PXA2XX_PCMCIA);
- sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base);
- s = PXA2XX_PCMCIA(dev);
-
sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
+ sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base);

- return s;
+ return PXA2XX_PCMCIA(dev);
}
--
2.34.1
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231020130331.50048-5-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/pcmcia/pxa2xx.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/hw/pcmcia/pxa2xx.c b/hw/pcmcia/pxa2xx.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/pcmcia/pxa2xx.c
+++ b/hw/pcmcia/pxa2xx.c
@@ -XXX,XX +XXX,XX @@ PXA2xxPCMCIAState *pxa2xx_pcmcia_init(MemoryRegion *sysmem,
{
DeviceState *dev;

- dev = qdev_new(TYPE_PXA2XX_PCMCIA);
- sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
- sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base);
+ dev = sysbus_create_simple(TYPE_PXA2XX_PCMCIA, base, NULL);

return PXA2XX_PCMCIA(dev);
}
--
2.34.1
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231020130331.50048-6-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/pxa.h | 2 --
hw/arm/pxa2xx.c | 12 ++++++++----
hw/pcmcia/pxa2xx.c | 10 ----------
3 files changed, 8 insertions(+), 16 deletions(-)

diff --git a/include/hw/arm/pxa.h b/include/hw/arm/pxa.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/pxa.h
+++ b/include/hw/arm/pxa.h
@@ -XXX,XX +XXX,XX @@ void pxa2xx_mmci_handlers(PXA2xxMMCIState *s, qemu_irq readonly,
#define TYPE_PXA2XX_PCMCIA "pxa2xx-pcmcia"
OBJECT_DECLARE_SIMPLE_TYPE(PXA2xxPCMCIAState, PXA2XX_PCMCIA)

-PXA2xxPCMCIAState *pxa2xx_pcmcia_init(MemoryRegion *sysmem,
- hwaddr base);
int pxa2xx_pcmcia_attach(void *opaque, PCMCIACardState *card);
int pxa2xx_pcmcia_detach(void *opaque);
void pxa2xx_pcmcia_set_irq_cb(void *opaque, qemu_irq irq, qemu_irq cd_irq);
diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/pxa2xx.c
+++ b/hw/arm/pxa2xx.c
@@ -XXX,XX +XXX,XX @@ PXA2xxState *pxa270_init(unsigned int sdram_size, const char *cpu_type)
sysbus_create_simple("sysbus-ohci", 0x4c000000,
qdev_get_gpio_in(s->pic, PXA2XX_PIC_USBH1));

- s->pcmcia[0] = pxa2xx_pcmcia_init(address_space, 0x20000000);
- s->pcmcia[1] = pxa2xx_pcmcia_init(address_space, 0x30000000);
+ s->pcmcia[0] = PXA2XX_PCMCIA(sysbus_create_simple(TYPE_PXA2XX_PCMCIA,
+ 0x20000000, NULL));
+ s->pcmcia[1] = PXA2XX_PCMCIA(sysbus_create_simple(TYPE_PXA2XX_PCMCIA,
+ 0x30000000, NULL));

sysbus_create_simple(TYPE_PXA2XX_RTC, 0x40900000,
qdev_get_gpio_in(s->pic, PXA2XX_PIC_RTCALARM));
@@ -XXX,XX +XXX,XX @@ PXA2xxState *pxa255_init(unsigned int sdram_size)
s->ssp[i] = (SSIBus *)qdev_get_child_bus(dev, "ssi");
}

- s->pcmcia[0] = pxa2xx_pcmcia_init(address_space, 0x20000000);
- s->pcmcia[1] = pxa2xx_pcmcia_init(address_space, 0x30000000);
+ s->pcmcia[0] = PXA2XX_PCMCIA(sysbus_create_simple(TYPE_PXA2XX_PCMCIA,
+ 0x20000000, NULL));
+ s->pcmcia[1] = PXA2XX_PCMCIA(sysbus_create_simple(TYPE_PXA2XX_PCMCIA,
+ 0x30000000, NULL));

sysbus_create_simple(TYPE_PXA2XX_RTC, 0x40900000,
qdev_get_gpio_in(s->pic, PXA2XX_PIC_RTCALARM));
diff --git a/hw/pcmcia/pxa2xx.c b/hw/pcmcia/pxa2xx.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/pcmcia/pxa2xx.c
+++ b/hw/pcmcia/pxa2xx.c
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_pcmcia_set_irq(void *opaque, int line, int level)
qemu_set_irq(s->irq, level);
}

-PXA2xxPCMCIAState *pxa2xx_pcmcia_init(MemoryRegion *sysmem,
- hwaddr base)
-{
- DeviceState *dev;
-
- dev = sysbus_create_simple(TYPE_PXA2XX_PCMCIA, base, NULL);
-
- return PXA2XX_PCMCIA(dev);
-}
-
static void pxa2xx_pcmcia_initfn(Object *obj)
{
SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
--
2.34.1
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Factor reset code out of the DeviceRealize() handler.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20231020130331.50048-7-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/pxa2xx_pic.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/hw/arm/pxa2xx_pic.c b/hw/arm/pxa2xx_pic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/pxa2xx_pic.c
+++ b/hw/arm/pxa2xx_pic.c
@@ -XXX,XX +XXX,XX @@ static int pxa2xx_pic_post_load(void *opaque, int version_id)
return 0;
}

-DeviceState *pxa2xx_pic_init(hwaddr base, ARMCPU *cpu)
+static void pxa2xx_pic_reset_hold(Object *obj)
{
- DeviceState *dev = qdev_new(TYPE_PXA2XX_PIC);
- PXA2xxPICState *s = PXA2XX_PIC(dev);
-
- s->cpu = cpu;
+ PXA2xxPICState *s = PXA2XX_PIC(obj);

s->int_pending[0] = 0;
s->int_pending[1] = 0;
@@ -XXX,XX +XXX,XX @@ DeviceState *pxa2xx_pic_init(hwaddr base, ARMCPU *cpu)
s->int_enabled[1] = 0;
s->is_fiq[0] = 0;
s->is_fiq[1] = 0;
+}
+
+DeviceState *pxa2xx_pic_init(hwaddr base, ARMCPU *cpu)
+{
+ DeviceState *dev = qdev_new(TYPE_PXA2XX_PIC);
+ PXA2xxPICState *s = PXA2XX_PIC(dev);
+
+ s->cpu = cpu;

sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);

@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_pxa2xx_pic_regs = {
static void pxa2xx_pic_class_init(ObjectClass *klass, void *data)
{
DeviceClass *dc = DEVICE_CLASS(klass);
+ ResettableClass *rc = RESETTABLE_CLASS(klass);

dc->desc = "PXA2xx PIC";
dc->vmsd = &vmstate_pxa2xx_pic_regs;
+ rc->phases.hold = pxa2xx_pic_reset_hold;
}

static const TypeInfo pxa2xx_pic_info = {
--
2.34.1
From: Philippe Mathieu-Daudé <philmd@linaro.org>

QOM objects shouldn't access each other's internal fields
except via the QOM API.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20231020130331.50048-8-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/pxa2xx_pic.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/hw/arm/pxa2xx_pic.c b/hw/arm/pxa2xx_pic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/pxa2xx_pic.c
+++ b/hw/arm/pxa2xx_pic.c
@@ -XXX,XX +XXX,XX @@
#include "cpu.h"
#include "hw/arm/pxa.h"
#include "hw/sysbus.h"
+#include "hw/qdev-properties.h"
#include "migration/vmstate.h"
#include "qom/object.h"
#include "target/arm/cpregs.h"
@@ -XXX,XX +XXX,XX @@ DeviceState *pxa2xx_pic_init(hwaddr base, ARMCPU *cpu)
DeviceState *dev = qdev_new(TYPE_PXA2XX_PIC);
PXA2xxPICState *s = PXA2XX_PIC(dev);

- s->cpu = cpu;
+ object_property_set_link(OBJECT(dev), "arm-cpu",
+ OBJECT(cpu), &error_abort);

sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);

@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_pxa2xx_pic_regs = {
},
};

+static Property pxa2xx_pic_properties[] = {
+ DEFINE_PROP_LINK("arm-cpu", PXA2xxPICState, cpu,
+ TYPE_ARM_CPU, ARMCPU *),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
static void pxa2xx_pic_class_init(ObjectClass *klass, void *data)
{
DeviceClass *dc = DEVICE_CLASS(klass);
50
ResettableClass *rc = RESETTABLE_CLASS(klass);
43
@@ -XXX,XX +XXX,XX @@ const char *elf_hwcap_str(uint32_t bit)
51
44
[__builtin_ctz(ARM_HWCAP_ARM_VFPD32 )] = "vfpd32",
52
+ device_class_set_props(dc, pxa2xx_pic_properties);
45
[__builtin_ctz(ARM_HWCAP_ARM_LPAE )] = "lpae",
53
dc->desc = "PXA2xx PIC";
46
[__builtin_ctz(ARM_HWCAP_ARM_EVTSTRM )] = "evtstrm",
54
dc->vmsd = &vmstate_pxa2xx_pic_regs;
47
+ [__builtin_ctz(ARM_HWCAP_ARM_FPHP )] = "fphp",
55
rc->phases.hold = pxa2xx_pic_reset_hold;
48
+ [__builtin_ctz(ARM_HWCAP_ARM_ASIMDHP )] = "asimdhp",
49
+ [__builtin_ctz(ARM_HWCAP_ARM_ASIMDDP )] = "asimddp",
50
+ [__builtin_ctz(ARM_HWCAP_ARM_ASIMDFHM )] = "asimdfhm",
51
+ [__builtin_ctz(ARM_HWCAP_ARM_ASIMDBF16)] = "asimdbf16",
52
+ [__builtin_ctz(ARM_HWCAP_ARM_I8MM )] = "i8mm",
53
};
54
55
return bit < ARRAY_SIZE(hwcap_str) ? hwcap_str[bit] : NULL;
56
@@ -XXX,XX +XXX,XX @@ const char *elf_hwcap2_str(uint32_t bit)
57
[__builtin_ctz(ARM_HWCAP2_ARM_SHA1 )] = "sha1",
58
[__builtin_ctz(ARM_HWCAP2_ARM_SHA2 )] = "sha2",
59
[__builtin_ctz(ARM_HWCAP2_ARM_CRC32)] = "crc32",
60
+ [__builtin_ctz(ARM_HWCAP2_ARM_SB )] = "sb",
61
+ [__builtin_ctz(ARM_HWCAP2_ARM_SSBS )] = "ssbs",
62
};
63
64
return bit < ARRAY_SIZE(hwcap_str) ? hwcap_str[bit] : NULL;
65
@@ -XXX,XX +XXX,XX @@ enum {
66
ARM_HWCAP2_A64_SME_B16F32 = 1 << 28,
67
ARM_HWCAP2_A64_SME_F32F32 = 1 << 29,
68
ARM_HWCAP2_A64_SME_FA64 = 1 << 30,
69
+ ARM_HWCAP2_A64_WFXT = 1ULL << 31,
70
+ ARM_HWCAP2_A64_EBF16 = 1ULL << 32,
71
+ ARM_HWCAP2_A64_SVE_EBF16 = 1ULL << 33,
72
+ ARM_HWCAP2_A64_CSSC = 1ULL << 34,
73
+ ARM_HWCAP2_A64_RPRFM = 1ULL << 35,
74
+ ARM_HWCAP2_A64_SVE2P1 = 1ULL << 36,
75
+ ARM_HWCAP2_A64_SME2 = 1ULL << 37,
76
+ ARM_HWCAP2_A64_SME2P1 = 1ULL << 38,
77
+ ARM_HWCAP2_A64_SME_I16I32 = 1ULL << 39,
78
+ ARM_HWCAP2_A64_SME_BI32I32 = 1ULL << 40,
79
+ ARM_HWCAP2_A64_SME_B16B16 = 1ULL << 41,
80
+ ARM_HWCAP2_A64_SME_F16F16 = 1ULL << 42,
81
+ ARM_HWCAP2_A64_MOPS = 1ULL << 43,
82
+ ARM_HWCAP2_A64_HBC = 1ULL << 44,
83
};
84
85
#define ELF_HWCAP get_elf_hwcap()
86
@@ -XXX,XX +XXX,XX @@ const char *elf_hwcap2_str(uint32_t bit)
87
[__builtin_ctz(ARM_HWCAP2_A64_SME_B16F32 )] = "smeb16f32",
88
[__builtin_ctz(ARM_HWCAP2_A64_SME_F32F32 )] = "smef32f32",
89
[__builtin_ctz(ARM_HWCAP2_A64_SME_FA64 )] = "smefa64",
90
+ [__builtin_ctz(ARM_HWCAP2_A64_WFXT )] = "wfxt",
91
+ [__builtin_ctzll(ARM_HWCAP2_A64_EBF16 )] = "ebf16",
92
+ [__builtin_ctzll(ARM_HWCAP2_A64_SVE_EBF16 )] = "sveebf16",
93
+ [__builtin_ctzll(ARM_HWCAP2_A64_CSSC )] = "cssc",
94
+ [__builtin_ctzll(ARM_HWCAP2_A64_RPRFM )] = "rprfm",
95
+ [__builtin_ctzll(ARM_HWCAP2_A64_SVE2P1 )] = "sve2p1",
96
+ [__builtin_ctzll(ARM_HWCAP2_A64_SME2 )] = "sme2",
97
+ [__builtin_ctzll(ARM_HWCAP2_A64_SME2P1 )] = "sme2p1",
98
+ [__builtin_ctzll(ARM_HWCAP2_A64_SME_I16I32 )] = "smei16i32",
99
+ [__builtin_ctzll(ARM_HWCAP2_A64_SME_BI32I32)] = "smebi32i32",
100
+ [__builtin_ctzll(ARM_HWCAP2_A64_SME_B16B16 )] = "smeb16b16",
101
+ [__builtin_ctzll(ARM_HWCAP2_A64_SME_F16F16 )] = "smef16f16",
102
+ [__builtin_ctzll(ARM_HWCAP2_A64_MOPS )] = "mops",
103
+ [__builtin_ctzll(ARM_HWCAP2_A64_HBC )] = "hbc",
104
};
105
106
return bit < ARRAY_SIZE(hwcap_str) ? hwcap_str[bit] : NULL;
107
--
56
--
108
2.34.1
57
2.34.1
109
58
110
59
diff view generated by jsdifflib
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20231020130331.50048-9-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/pxa2xx_pic.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/hw/arm/pxa2xx_pic.c b/hw/arm/pxa2xx_pic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/pxa2xx_pic.c
+++ b/hw/arm/pxa2xx_pic.c
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_pic_reset_hold(Object *obj)
 DeviceState *pxa2xx_pic_init(hwaddr base, ARMCPU *cpu)
 {
     DeviceState *dev = qdev_new(TYPE_PXA2XX_PIC);
-    PXA2xxPICState *s = PXA2XX_PIC(dev);
 
     object_property_set_link(OBJECT(dev), "arm-cpu",
                              OBJECT(cpu), &error_abort);
-
     sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
+    sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base);
+
+    return dev;
+}
+
+static void pxa2xx_pic_realize(DeviceState *dev, Error **errp)
+{
+    PXA2xxPICState *s = PXA2XX_PIC(dev);
 
     qdev_init_gpio_in(dev, pxa2xx_pic_set_irq, PXA2XX_PIC_SRCS);
 
@@ -XXX,XX +XXX,XX @@ DeviceState *pxa2xx_pic_init(hwaddr base, ARMCPU *cpu)
     memory_region_init_io(&s->iomem, OBJECT(s), &pxa2xx_pic_ops, s,
                           "pxa2xx-pic", 0x00100000);
     sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->iomem);
-    sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base);
 
     /* Enable IC coprocessor access. */
-    define_arm_cp_regs_with_opaque(cpu, pxa_pic_cp_reginfo, s);
-
-    return dev;
+    define_arm_cp_regs_with_opaque(s->cpu, pxa_pic_cp_reginfo, s);
 }
 
 static const VMStateDescription vmstate_pxa2xx_pic_regs = {
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_pic_class_init(ObjectClass *klass, void *data)
     ResettableClass *rc = RESETTABLE_CLASS(klass);
 
     device_class_set_props(dc, pxa2xx_pic_properties);
+    dc->realize = pxa2xx_pic_realize;
     dc->desc = "PXA2xx PIC";
     dc->vmsd = &vmstate_pxa2xx_pic_regs;
     rc->phases.hold = pxa2xx_pic_reset_hold;
--
2.34.1

From: Philippe Mathieu-Daudé <philmd@linaro.org>

Prefer using a well known local first CPU rather than a global one.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231025065909.57344-1-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/bananapi_m2u.c   | 2 +-
 hw/arm/exynos4_boards.c | 7 ++++---
 hw/arm/orangepi.c       | 2 +-
 hw/arm/realview.c       | 2 +-
 hw/arm/xilinx_zynq.c    | 2 +-
 5 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/hw/arm/bananapi_m2u.c b/hw/arm/bananapi_m2u.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/bananapi_m2u.c
+++ b/hw/arm/bananapi_m2u.c
@@ -XXX,XX +XXX,XX @@ static void bpim2u_init(MachineState *machine)
     bpim2u_binfo.loader_start = r40->memmap[AW_R40_DEV_SDRAM];
     bpim2u_binfo.ram_size = machine->ram_size;
     bpim2u_binfo.psci_conduit = QEMU_PSCI_CONDUIT_SMC;
-    arm_load_kernel(ARM_CPU(first_cpu), machine, &bpim2u_binfo);
+    arm_load_kernel(&r40->cpus[0], machine, &bpim2u_binfo);
 }
 
 static void bpim2u_machine_init(MachineClass *mc)
diff --git a/hw/arm/exynos4_boards.c b/hw/arm/exynos4_boards.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/exynos4_boards.c
+++ b/hw/arm/exynos4_boards.c
@@ -XXX,XX +XXX,XX @@ exynos4_boards_init_common(MachineState *machine,
 
 static void nuri_init(MachineState *machine)
 {
-    exynos4_boards_init_common(machine, EXYNOS4_BOARD_NURI);
+    Exynos4BoardState *s = exynos4_boards_init_common(machine,
+                                                      EXYNOS4_BOARD_NURI);
 
-    arm_load_kernel(ARM_CPU(first_cpu), machine, &exynos4_board_binfo);
+    arm_load_kernel(s->soc.cpu[0], machine, &exynos4_board_binfo);
 }
 
 static void smdkc210_init(MachineState *machine)
@@ -XXX,XX +XXX,XX @@ static void smdkc210_init(MachineState *machine)
 
     lan9215_init(SMDK_LAN9118_BASE_ADDR,
             qemu_irq_invert(s->soc.irq_table[exynos4210_get_irq(37, 1)]));
-    arm_load_kernel(ARM_CPU(first_cpu), machine, &exynos4_board_binfo);
+    arm_load_kernel(s->soc.cpu[0], machine, &exynos4_board_binfo);
 }
 
 static void nuri_class_init(ObjectClass *oc, void *data)
diff --git a/hw/arm/orangepi.c b/hw/arm/orangepi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/orangepi.c
+++ b/hw/arm/orangepi.c
@@ -XXX,XX +XXX,XX @@ static void orangepi_init(MachineState *machine)
     orangepi_binfo.loader_start = h3->memmap[AW_H3_DEV_SDRAM];
     orangepi_binfo.ram_size = machine->ram_size;
     orangepi_binfo.psci_conduit = QEMU_PSCI_CONDUIT_SMC;
-    arm_load_kernel(ARM_CPU(first_cpu), machine, &orangepi_binfo);
+    arm_load_kernel(&h3->cpus[0], machine, &orangepi_binfo);
 }
 
 static void orangepi_machine_init(MachineClass *mc)
diff --git a/hw/arm/realview.c b/hw/arm/realview.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/realview.c
+++ b/hw/arm/realview.c
@@ -XXX,XX +XXX,XX @@ static void realview_init(MachineState *machine,
     realview_binfo.ram_size = ram_size;
     realview_binfo.board_id = realview_board_id[board_type];
     realview_binfo.loader_start = (board_type == BOARD_PB_A8 ? 0x70000000 : 0);
-    arm_load_kernel(ARM_CPU(first_cpu), machine, &realview_binfo);
+    arm_load_kernel(cpu, machine, &realview_binfo);
 }
 
 static void realview_eb_init(MachineState *machine)
diff --git a/hw/arm/xilinx_zynq.c b/hw/arm/xilinx_zynq.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xilinx_zynq.c
+++ b/hw/arm/xilinx_zynq.c
@@ -XXX,XX +XXX,XX @@ static void zynq_init(MachineState *machine)
     zynq_binfo.board_setup_addr = BOARD_SETUP_ADDR;
     zynq_binfo.write_board_setup = zynq_write_board_setup;
 
-    arm_load_kernel(ARM_CPU(first_cpu), machine, &zynq_binfo);
+    arm_load_kernel(cpu, machine, &zynq_binfo);
 }
 
 static void zynq_machine_class_init(ObjectClass *oc, void *data)
--
2.34.1

From: Glenn Miles <milesg@linux.vnet.ibm.com>

Testing of the LED state showed that when the LED polarity was
set to GPIO_POLARITY_ACTIVE_LOW and a low logic value was set on
the input GPIO of the LED, the LED was being turned off when it was
expected to be turned on.

Fixes: ddb67f6402 ("hw/misc/led: Allow connecting from GPIO output")
Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andrew Jeffery <andrew@codeconstruct.com.au>
Message-id: 20231024191945.4135036-1-milesg@linux.vnet.ibm.com
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/led.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/misc/led.c b/hw/misc/led.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/led.c
+++ b/hw/misc/led.c
@@ -XXX,XX +XXX,XX @@ static void led_set_state_gpio_handler(void *opaque, int line, int new_state)
     LEDState *s = LED(opaque);
 
     assert(line == 0);
-    led_set_state(s, !!new_state != s->gpio_active_high);
+    led_set_state(s, !!new_state == s->gpio_active_high);
 }
 
 static void led_reset(DeviceState *dev)
--
2.34.1

From: Luc Michel <luc.michel@amd.com>

Replace register defines with the REG32 macro from registerfields.h in
the Cadence GEM device.

Signed-off-by: Luc Michel <luc.michel@amd.com>
Reviewed-by: sai.pavan.boddu@amd.com
Message-id: 20231017194422.4124691-2-luc.michel@amd.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/net/cadence_gem.c | 527 +++++++++++++++++++++----------------------
 1 file changed, 261 insertions(+), 266 deletions(-)

diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/irq.h"
 #include "hw/net/cadence_gem.h"
 #include "hw/qdev-properties.h"
+#include "hw/registerfields.h"
 #include "migration/vmstate.h"
 #include "qapi/error.h"
 #include "qemu/log.h"
26
@@ -XXX,XX +XXX,XX @@
31
+ * @desc: MTEDESC descriptor word
27
} \
32
+ */
28
} while (0)
33
+void mte_mops_set_tags(CPUARMState *env, uint64_t dirty_ptr, uint64_t size,
29
34
+ uint32_t desc);
30
-#define GEM_NWCTRL (0x00000000 / 4) /* Network Control reg */
35
+
31
-#define GEM_NWCFG (0x00000004 / 4) /* Network Config reg */
36
static inline int allocation_tag_from_addr(uint64_t ptr)
32
-#define GEM_NWSTATUS (0x00000008 / 4) /* Network Status reg */
33
-#define GEM_USERIO (0x0000000C / 4) /* User IO reg */
34
-#define GEM_DMACFG (0x00000010 / 4) /* DMA Control reg */
35
-#define GEM_TXSTATUS (0x00000014 / 4) /* TX Status reg */
36
-#define GEM_RXQBASE (0x00000018 / 4) /* RX Q Base address reg */
37
-#define GEM_TXQBASE (0x0000001C / 4) /* TX Q Base address reg */
38
-#define GEM_RXSTATUS (0x00000020 / 4) /* RX Status reg */
39
-#define GEM_ISR (0x00000024 / 4) /* Interrupt Status reg */
40
-#define GEM_IER (0x00000028 / 4) /* Interrupt Enable reg */
41
-#define GEM_IDR (0x0000002C / 4) /* Interrupt Disable reg */
42
-#define GEM_IMR (0x00000030 / 4) /* Interrupt Mask reg */
43
-#define GEM_PHYMNTNC (0x00000034 / 4) /* Phy Maintenance reg */
44
-#define GEM_RXPAUSE (0x00000038 / 4) /* RX Pause Time reg */
45
-#define GEM_TXPAUSE (0x0000003C / 4) /* TX Pause Time reg */
46
-#define GEM_TXPARTIALSF (0x00000040 / 4) /* TX Partial Store and Forward */
47
-#define GEM_RXPARTIALSF (0x00000044 / 4) /* RX Partial Store and Forward */
48
-#define GEM_JUMBO_MAX_LEN (0x00000048 / 4) /* Max Jumbo Frame Size */
49
-#define GEM_HASHLO (0x00000080 / 4) /* Hash Low address reg */
50
-#define GEM_HASHHI (0x00000084 / 4) /* Hash High address reg */
51
-#define GEM_SPADDR1LO (0x00000088 / 4) /* Specific addr 1 low reg */
52
-#define GEM_SPADDR1HI (0x0000008C / 4) /* Specific addr 1 high reg */
53
-#define GEM_SPADDR2LO (0x00000090 / 4) /* Specific addr 2 low reg */
54
-#define GEM_SPADDR2HI (0x00000094 / 4) /* Specific addr 2 high reg */
55
-#define GEM_SPADDR3LO (0x00000098 / 4) /* Specific addr 3 low reg */
56
-#define GEM_SPADDR3HI (0x0000009C / 4) /* Specific addr 3 high reg */
57
-#define GEM_SPADDR4LO (0x000000A0 / 4) /* Specific addr 4 low reg */
58
-#define GEM_SPADDR4HI (0x000000A4 / 4) /* Specific addr 4 high reg */
59
-#define GEM_TIDMATCH1 (0x000000A8 / 4) /* Type ID1 Match reg */
60
-#define GEM_TIDMATCH2 (0x000000AC / 4) /* Type ID2 Match reg */
61
-#define GEM_TIDMATCH3 (0x000000B0 / 4) /* Type ID3 Match reg */
62
-#define GEM_TIDMATCH4 (0x000000B4 / 4) /* Type ID4 Match reg */
63
-#define GEM_WOLAN (0x000000B8 / 4) /* Wake on LAN reg */
64
-#define GEM_IPGSTRETCH (0x000000BC / 4) /* IPG Stretch reg */
65
-#define GEM_SVLAN (0x000000C0 / 4) /* Stacked VLAN reg */
66
-#define GEM_MODID (0x000000FC / 4) /* Module ID reg */
67
-#define GEM_OCTTXLO (0x00000100 / 4) /* Octets transmitted Low reg */
68
-#define GEM_OCTTXHI (0x00000104 / 4) /* Octets transmitted High reg */
69
-#define GEM_TXCNT (0x00000108 / 4) /* Error-free Frames transmitted */
70
-#define GEM_TXBCNT (0x0000010C / 4) /* Error-free Broadcast Frames */
71
-#define GEM_TXMCNT (0x00000110 / 4) /* Error-free Multicast Frame */
72
-#define GEM_TXPAUSECNT (0x00000114 / 4) /* Pause Frames Transmitted */
73
-#define GEM_TX64CNT (0x00000118 / 4) /* Error-free 64 TX */
74
-#define GEM_TX65CNT (0x0000011C / 4) /* Error-free 65-127 TX */
75
-#define GEM_TX128CNT (0x00000120 / 4) /* Error-free 128-255 TX */
76
-#define GEM_TX256CNT (0x00000124 / 4) /* Error-free 256-511 */
77
-#define GEM_TX512CNT (0x00000128 / 4) /* Error-free 512-1023 TX */
78
-#define GEM_TX1024CNT (0x0000012C / 4) /* Error-free 1024-1518 TX */
79
-#define GEM_TX1519CNT (0x00000130 / 4) /* Error-free larger than 1519 TX */
80
-#define GEM_TXURUNCNT (0x00000134 / 4) /* TX under run error counter */
81
-#define GEM_SINGLECOLLCNT (0x00000138 / 4) /* Single Collision Frames */
82
-#define GEM_MULTCOLLCNT (0x0000013C / 4) /* Multiple Collision Frames */
83
-#define GEM_EXCESSCOLLCNT (0x00000140 / 4) /* Excessive Collision Frames */
84
-#define GEM_LATECOLLCNT (0x00000144 / 4) /* Late Collision Frames */
85
-#define GEM_DEFERTXCNT (0x00000148 / 4) /* Deferred Transmission Frames */
86
-#define GEM_CSENSECNT (0x0000014C / 4) /* Carrier Sense Error Counter */
87
-#define GEM_OCTRXLO (0x00000150 / 4) /* Octets Received register Low */
88
-#define GEM_OCTRXHI (0x00000154 / 4) /* Octets Received register High */
89
-#define GEM_RXCNT (0x00000158 / 4) /* Error-free Frames Received */
90
-#define GEM_RXBROADCNT (0x0000015C / 4) /* Error-free Broadcast Frames RX */
91
-#define GEM_RXMULTICNT (0x00000160 / 4) /* Error-free Multicast Frames RX */
92
-#define GEM_RXPAUSECNT (0x00000164 / 4) /* Pause Frames Received Counter */
93
-#define GEM_RX64CNT (0x00000168 / 4) /* Error-free 64 byte Frames RX */
94
-#define GEM_RX65CNT (0x0000016C / 4) /* Error-free 65-127B Frames RX */
95
-#define GEM_RX128CNT (0x00000170 / 4) /* Error-free 128-255B Frames RX */
96
-#define GEM_RX256CNT (0x00000174 / 4) /* Error-free 256-512B Frames RX */
97
-#define GEM_RX512CNT (0x00000178 / 4) /* Error-free 512-1023B Frames RX */
98
-#define GEM_RX1024CNT (0x0000017C / 4) /* Error-free 1024-1518B Frames RX */
99
-#define GEM_RX1519CNT (0x00000180 / 4) /* Error-free 1519-max Frames RX */
100
-#define GEM_RXUNDERCNT (0x00000184 / 4) /* Undersize Frames Received */
101
-#define GEM_RXOVERCNT (0x00000188 / 4) /* Oversize Frames Received */
102
-#define GEM_RXJABCNT (0x0000018C / 4) /* Jabbers Received Counter */
103
-#define GEM_RXFCSCNT (0x00000190 / 4) /* Frame Check seq. Error Counter */
104
-#define GEM_RXLENERRCNT (0x00000194 / 4) /* Length Field Error Counter */
105
-#define GEM_RXSYMERRCNT (0x00000198 / 4) /* Symbol Error Counter */
106
-#define GEM_RXALIGNERRCNT (0x0000019C / 4) /* Alignment Error Counter */
107
-#define GEM_RXRSCERRCNT (0x000001A0 / 4) /* Receive Resource Error Counter */
108
-#define GEM_RXORUNCNT (0x000001A4 / 4) /* Receive Overrun Counter */
109
-#define GEM_RXIPCSERRCNT (0x000001A8 / 4) /* IP header Checksum Err Counter */
110
-#define GEM_RXTCPCCNT (0x000001AC / 4) /* TCP Checksum Error Counter */
111
-#define GEM_RXUDPCCNT (0x000001B0 / 4) /* UDP Checksum Error Counter */
112
+REG32(NWCTRL, 0x0) /* Network Control reg */
113
+REG32(NWCFG, 0x4) /* Network Config reg */
114
+REG32(NWSTATUS, 0x8) /* Network Status reg */
115
+REG32(USERIO, 0xc) /* User IO reg */
116
+REG32(DMACFG, 0x10) /* DMA Control reg */
117
+REG32(TXSTATUS, 0x14) /* TX Status reg */
118
+REG32(RXQBASE, 0x18) /* RX Q Base address reg */
119
+REG32(TXQBASE, 0x1c) /* TX Q Base address reg */
120
+REG32(RXSTATUS, 0x20) /* RX Status reg */
121
+REG32(ISR, 0x24) /* Interrupt Status reg */
122
+REG32(IER, 0x28) /* Interrupt Enable reg */
123
+REG32(IDR, 0x2c) /* Interrupt Disable reg */
124
+REG32(IMR, 0x30) /* Interrupt Mask reg */
125
+REG32(PHYMNTNC, 0x34) /* Phy Maintenance reg */
126
+REG32(RXPAUSE, 0x38) /* RX Pause Time reg */
127
+REG32(TXPAUSE, 0x3c) /* TX Pause Time reg */
128
+REG32(TXPARTIALSF, 0x40) /* TX Partial Store and Forward */
129
+REG32(RXPARTIALSF, 0x44) /* RX Partial Store and Forward */
130
+REG32(JUMBO_MAX_LEN, 0x48) /* Max Jumbo Frame Size */
131
+REG32(HASHLO, 0x80) /* Hash Low address reg */
132
+REG32(HASHHI, 0x84) /* Hash High address reg */
133
+REG32(SPADDR1LO, 0x88) /* Specific addr 1 low reg */
134
+REG32(SPADDR1HI, 0x8c) /* Specific addr 1 high reg */
135
+REG32(SPADDR2LO, 0x90) /* Specific addr 2 low reg */
136
+REG32(SPADDR2HI, 0x94) /* Specific addr 2 high reg */
137
+REG32(SPADDR3LO, 0x98) /* Specific addr 3 low reg */
138
+REG32(SPADDR3HI, 0x9c) /* Specific addr 3 high reg */
139
+REG32(SPADDR4LO, 0xa0) /* Specific addr 4 low reg */
140
+REG32(SPADDR4HI, 0xa4) /* Specific addr 4 high reg */
141
+REG32(TIDMATCH1, 0xa8) /* Type ID1 Match reg */
142
+REG32(TIDMATCH2, 0xac) /* Type ID2 Match reg */
143
+REG32(TIDMATCH3, 0xb0) /* Type ID3 Match reg */
144
+REG32(TIDMATCH4, 0xb4) /* Type ID4 Match reg */
145
+REG32(WOLAN, 0xb8) /* Wake on LAN reg */
146
+REG32(IPGSTRETCH, 0xbc) /* IPG Stretch reg */
147
+REG32(SVLAN, 0xc0) /* Stacked VLAN reg */
148
+REG32(MODID, 0xfc) /* Module ID reg */
149
+REG32(OCTTXLO, 0x100) /* Octects transmitted Low reg */
150
+REG32(OCTTXHI, 0x104) /* Octects transmitted High reg */
151
+REG32(TXCNT, 0x108) /* Error-free Frames transmitted */
152
+REG32(TXBCNT, 0x10c) /* Error-free Broadcast Frames */
153
+REG32(TXMCNT, 0x110) /* Error-free Multicast Frame */
154
+REG32(TXPAUSECNT, 0x114) /* Pause Frames Transmitted */
155
+REG32(TX64CNT, 0x118) /* Error-free 64 TX */
156
+REG32(TX65CNT, 0x11c) /* Error-free 65-127 TX */
157
+REG32(TX128CNT, 0x120) /* Error-free 128-255 TX */
158
+REG32(TX256CNT, 0x124) /* Error-free 256-511 */
159
+REG32(TX512CNT, 0x128) /* Error-free 512-1023 TX */
160
+REG32(TX1024CNT, 0x12c) /* Error-free 1024-1518 TX */
161
+REG32(TX1519CNT, 0x130) /* Error-free larger than 1519 TX */
162
+REG32(TXURUNCNT, 0x134) /* TX under run error counter */
163
+REG32(SINGLECOLLCNT, 0x138) /* Single Collision Frames */
164
+REG32(MULTCOLLCNT, 0x13c) /* Multiple Collision Frames */
165
+REG32(EXCESSCOLLCNT, 0x140) /* Excessive Collision Frames */
166
+REG32(LATECOLLCNT, 0x144) /* Late Collision Frames */
167
+REG32(DEFERTXCNT, 0x148) /* Deferred Transmission Frames */
168
+REG32(CSENSECNT, 0x14c) /* Carrier Sense Error Counter */
169
+REG32(OCTRXLO, 0x150) /* Octects Received register Low */
170
+REG32(OCTRXHI, 0x154) /* Octects Received register High */
171
+REG32(RXCNT, 0x158) /* Error-free Frames Received */
172
+REG32(RXBROADCNT, 0x15c) /* Error-free Broadcast Frames RX */
173
+REG32(RXMULTICNT, 0x160) /* Error-free Multicast Frames RX */
174
+REG32(RXPAUSECNT, 0x164) /* Pause Frames Received Counter */
175
+REG32(RX64CNT, 0x168) /* Error-free 64 byte Frames RX */
176
+REG32(RX65CNT, 0x16c) /* Error-free 65-127B Frames RX */
177
+REG32(RX128CNT, 0x170) /* Error-free 128-255B Frames RX */
178
+REG32(RX256CNT, 0x174) /* Error-free 256-512B Frames RX */
179
+REG32(RX512CNT, 0x178) /* Error-free 512-1023B Frames RX */
180
+REG32(RX1024CNT, 0x17c) /* Error-free 1024-1518B Frames RX */
181
+REG32(RX1519CNT, 0x180) /* Error-free 1519-max Frames RX */
182
+REG32(RXUNDERCNT, 0x184) /* Undersize Frames Received */
183
+REG32(RXOVERCNT, 0x188) /* Oversize Frames Received */
184
+REG32(RXJABCNT, 0x18c) /* Jabbers Received Counter */
185
+REG32(RXFCSCNT, 0x190) /* Frame Check seq. Error Counter */
186
+REG32(RXLENERRCNT, 0x194) /* Length Field Error Counter */
187
+REG32(RXSYMERRCNT, 0x198) /* Symbol Error Counter */
188
+REG32(RXALIGNERRCNT, 0x19c) /* Alignment Error Counter */
189
+REG32(RXRSCERRCNT, 0x1a0) /* Receive Resource Error Counter */
190
+REG32(RXORUNCNT, 0x1a4) /* Receive Overrun Counter */
191
+REG32(RXIPCSERRCNT, 0x1a8) /* IP header Checksum Err Counter */
192
+REG32(RXTCPCCNT, 0x1ac) /* TCP Checksum Error Counter */
193
+REG32(RXUDPCCNT, 0x1b0) /* UDP Checksum Error Counter */
194
195
-#define GEM_1588S (0x000001D0 / 4) /* 1588 Timer Seconds */
196
-#define GEM_1588NS (0x000001D4 / 4) /* 1588 Timer Nanoseconds */
197
-#define GEM_1588ADJ (0x000001D8 / 4) /* 1588 Timer Adjust */
198
-#define GEM_1588INC (0x000001DC / 4) /* 1588 Timer Increment */
199
-#define GEM_PTPETXS (0x000001E0 / 4) /* PTP Event Frame Transmitted (s) */
200
-#define GEM_PTPETXNS (0x000001E4 / 4) /*
201
- * PTP Event Frame Transmitted (ns)
202
- */
203
-#define GEM_PTPERXS (0x000001E8 / 4) /* PTP Event Frame Received (s) */
204
-#define GEM_PTPERXNS (0x000001EC / 4) /* PTP Event Frame Received (ns) */
205
-#define GEM_PTPPTXS (0x000001E0 / 4) /* PTP Peer Frame Transmitted (s) */
206
-#define GEM_PTPPTXNS (0x000001E4 / 4) /* PTP Peer Frame Transmitted (ns) */
207
-#define GEM_PTPPRXS (0x000001E8 / 4) /* PTP Peer Frame Received (s) */
208
-#define GEM_PTPPRXNS (0x000001EC / 4) /* PTP Peer Frame Received (ns) */
209
+REG32(1588S, 0x1d0) /* 1588 Timer Seconds */
210
+REG32(1588NS, 0x1d4) /* 1588 Timer Nanoseconds */
211
+REG32(1588ADJ, 0x1d8) /* 1588 Timer Adjust */
212
+REG32(1588INC, 0x1dc) /* 1588 Timer Increment */
213
+REG32(PTPETXS, 0x1e0) /* PTP Event Frame Transmitted (s) */
214
+REG32(PTPETXNS, 0x1e4) /* PTP Event Frame Transmitted (ns) */
215
+REG32(PTPERXS, 0x1e8) /* PTP Event Frame Received (s) */
216
+REG32(PTPERXNS, 0x1ec) /* PTP Event Frame Received (ns) */
217
+REG32(PTPPTXS, 0x1e0) /* PTP Peer Frame Transmitted (s) */
218
+REG32(PTPPTXNS, 0x1e4) /* PTP Peer Frame Transmitted (ns) */
219
+REG32(PTPPRXS, 0x1e8) /* PTP Peer Frame Received (s) */
220
+REG32(PTPPRXNS, 0x1ec) /* PTP Peer Frame Received (ns) */
221
222
/* Design Configuration Registers */
223
-#define GEM_DESCONF (0x00000280 / 4)
224
-#define GEM_DESCONF2 (0x00000284 / 4)
225
-#define GEM_DESCONF3 (0x00000288 / 4)
226
-#define GEM_DESCONF4 (0x0000028C / 4)
227
-#define GEM_DESCONF5 (0x00000290 / 4)
228
-#define GEM_DESCONF6 (0x00000294 / 4)
229
+REG32(DESCONF, 0x280)
230
+REG32(DESCONF2, 0x284)
231
+REG32(DESCONF3, 0x288)
232
+REG32(DESCONF4, 0x28c)
233
+REG32(DESCONF5, 0x290)
234
+REG32(DESCONF6, 0x294)
235
#define GEM_DESCONF6_64B_MASK (1U << 23)
236
-#define GEM_DESCONF7 (0x00000298 / 4)
237
+REG32(DESCONF7, 0x298)
238
239
-#define GEM_INT_Q1_STATUS (0x00000400 / 4)
240
-#define GEM_INT_Q1_MASK (0x00000640 / 4)
241
+REG32(INT_Q1_STATUS, 0x400)
242
+REG32(INT_Q1_MASK, 0x640)
243
244
-#define GEM_TRANSMIT_Q1_PTR (0x00000440 / 4)
245
-#define GEM_TRANSMIT_Q7_PTR (GEM_TRANSMIT_Q1_PTR + 6)
246
+REG32(TRANSMIT_Q1_PTR, 0x440)
247
+REG32(TRANSMIT_Q7_PTR, 0x458)
248
249
-#define GEM_RECEIVE_Q1_PTR (0x00000480 / 4)
250
-#define GEM_RECEIVE_Q7_PTR (GEM_RECEIVE_Q1_PTR + 6)
251
+REG32(RECEIVE_Q1_PTR, 0x480)
252
+REG32(RECEIVE_Q7_PTR, 0x498)
253
254
-#define GEM_TBQPH (0x000004C8 / 4)
255
-#define GEM_RBQPH (0x000004D4 / 4)
256
+REG32(TBQPH, 0x4c8)
257
+REG32(RBQPH, 0x4d4)
258
259
-#define GEM_INT_Q1_ENABLE (0x00000600 / 4)
260
-#define GEM_INT_Q7_ENABLE (GEM_INT_Q1_ENABLE + 6)
261
+REG32(INT_Q1_ENABLE, 0x600)
262
+REG32(INT_Q7_ENABLE, 0x618)
263
264
-#define GEM_INT_Q1_DISABLE (0x00000620 / 4)
265
-#define GEM_INT_Q7_DISABLE (GEM_INT_Q1_DISABLE + 6)
266
+REG32(INT_Q1_DISABLE, 0x620)
267
+REG32(INT_Q7_DISABLE, 0x638)
268
269
-#define GEM_INT_Q1_MASK (0x00000640 / 4)
270
-#define GEM_INT_Q7_MASK (GEM_INT_Q1_MASK + 6)
271
-
272
-#define GEM_SCREENING_TYPE1_REGISTER_0 (0x00000500 / 4)
273
+REG32(SCREENING_TYPE1_REG0, 0x500)
274
275
#define GEM_ST1R_UDP_PORT_MATCH_ENABLE (1 << 29)
276
#define GEM_ST1R_DSTC_ENABLE (1 << 28)
277
@@ -XXX,XX +XXX,XX @@
278
#define GEM_ST1R_QUEUE_SHIFT (0)
279
#define GEM_ST1R_QUEUE_WIDTH (3 - GEM_ST1R_QUEUE_SHIFT + 1)
280
281
-#define GEM_SCREENING_TYPE2_REGISTER_0 (0x00000540 / 4)
282
+REG32(SCREENING_TYPE2_REG0, 0x540)
283
284
#define GEM_ST2R_COMPARE_A_ENABLE (1 << 18)
285
#define GEM_ST2R_COMPARE_A_SHIFT (13)
286
@@ -XXX,XX +XXX,XX @@
287
#define GEM_ST2R_QUEUE_SHIFT (0)
288
#define GEM_ST2R_QUEUE_WIDTH (3 - GEM_ST2R_QUEUE_SHIFT + 1)
289
290
-#define GEM_SCREENING_TYPE2_ETHERTYPE_REG_0 (0x000006e0 / 4)
291
-#define GEM_TYPE2_COMPARE_0_WORD_0 (0x00000700 / 4)
292
+REG32(SCREENING_TYPE2_ETHERTYPE_REG0, 0x6e0)
293
+REG32(TYPE2_COMPARE_0_WORD_0, 0x700)
294
295
#define GEM_T2CW1_COMPARE_OFFSET_SHIFT (7)
296
#define GEM_T2CW1_COMPARE_OFFSET_WIDTH (8 - GEM_T2CW1_COMPARE_OFFSET_SHIFT + 1)
297
@@ -XXX,XX +XXX,XX @@ static inline uint64_t tx_desc_get_buffer(CadenceGEMState *s, uint32_t *desc)
37
{
298
{
38
return extract64(ptr, 56, 4);
299
uint64_t ret = desc[0];
39
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
300
40
index XXXXXXX..XXXXXXX 100644
301
- if (s->regs[GEM_DMACFG] & GEM_DMACFG_ADDR_64B) {
41
--- a/target/arm/tcg/helper-a64.h
302
+ if (s->regs[R_DMACFG] & GEM_DMACFG_ADDR_64B) {
42
+++ b/target/arm/tcg/helper-a64.h
303
ret |= (uint64_t)desc[2] << 32;
43
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(unaligned_access, TCG_CALL_NO_WG,
304
}
44
DEF_HELPER_3(setp, void, env, i32, i32)
305
return ret;
45
DEF_HELPER_3(setm, void, env, i32, i32)
306
@@ -XXX,XX +XXX,XX @@ static inline uint64_t rx_desc_get_buffer(CadenceGEMState *s, uint32_t *desc)
46
DEF_HELPER_3(sete, void, env, i32, i32)
307
{
47
+DEF_HELPER_3(setgp, void, env, i32, i32)
308
uint64_t ret = desc[0] & ~0x3UL;
48
+DEF_HELPER_3(setgm, void, env, i32, i32)
309
49
+DEF_HELPER_3(setge, void, env, i32, i32)
310
- if (s->regs[GEM_DMACFG] & GEM_DMACFG_ADDR_64B) {
50
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
311
+ if (s->regs[R_DMACFG] & GEM_DMACFG_ADDR_64B) {
51
index XXXXXXX..XXXXXXX 100644
312
ret |= (uint64_t)desc[2] << 32;
52
--- a/target/arm/tcg/a64.decode
313
}
53
+++ b/target/arm/tcg/a64.decode
314
return ret;
54
@@ -XXX,XX +XXX,XX @@ STZ2G 11011001 11 1 ......... 11 ..... ..... @ldst_tag p=0 w=1
315
@@ -XXX,XX +XXX,XX @@ static inline int gem_get_desc_len(CadenceGEMState *s, bool rx_n_tx)
55
SETP 00 011001110 ..... 00 . . 01 ..... ..... @set
316
{
56
SETM 00 011001110 ..... 01 . . 01 ..... ..... @set
317
int ret = 2;
57
SETE 00 011001110 ..... 10 . . 01 ..... ..... @set
318
58
+
319
- if (s->regs[GEM_DMACFG] & GEM_DMACFG_ADDR_64B) {
59
+# Like SET, but also setting MTE tags
320
+ if (s->regs[R_DMACFG] & GEM_DMACFG_ADDR_64B) {
60
+SETGP 00 011101110 ..... 00 . . 01 ..... ..... @set
321
ret += 2;
61
+SETGM 00 011101110 ..... 01 . . 01 ..... ..... @set
322
}
62
+SETGE 00 011101110 ..... 10 . . 01 ..... ..... @set
323
- if (s->regs[GEM_DMACFG] & (rx_n_tx ? GEM_DMACFG_RX_BD_EXT
63
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
324
+ if (s->regs[R_DMACFG] & (rx_n_tx ? GEM_DMACFG_RX_BD_EXT
64
index XXXXXXX..XXXXXXX 100644
325
: GEM_DMACFG_TX_BD_EXT)) {
65
--- a/target/arm/tcg/helper-a64.c
326
ret += 2;
66
+++ b/target/arm/tcg/helper-a64.c
327
}
67
@@ -XXX,XX +XXX,XX @@ static uint64_t set_step(CPUARMState *env, uint64_t toaddr,
328
@@ -XXX,XX +XXX,XX @@ static const uint8_t broadcast_addr[] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
68
return setsize;
329
static uint32_t gem_get_max_buf_len(CadenceGEMState *s, bool tx)
330
{
331
uint32_t size;
332
- if (s->regs[GEM_NWCFG] & GEM_NWCFG_JUMBO_FRAME) {
333
- size = s->regs[GEM_JUMBO_MAX_LEN];
334
+ if (s->regs[R_NWCFG] & GEM_NWCFG_JUMBO_FRAME) {
335
+ size = s->regs[R_JUMBO_MAX_LEN];
336
if (size > s->jumbo_max_len) {
337
size = s->jumbo_max_len;
338
qemu_log_mask(LOG_GUEST_ERROR, "GEM_JUMBO_MAX_LEN reg cannot be"
339
@@ -XXX,XX +XXX,XX @@ static uint32_t gem_get_max_buf_len(CadenceGEMState *s, bool tx)
340
} else if (tx) {
341
size = 1518;
342
} else {
343
- size = s->regs[GEM_NWCFG] & GEM_NWCFG_RCV_1538 ? 1538 : 1518;
344
+ size = s->regs[R_NWCFG] & GEM_NWCFG_RCV_1538 ? 1538 : 1518;
345
}
346
return size;
69
}
347
}
70
348
@@ -XXX,XX +XXX,XX @@ static uint32_t gem_get_max_buf_len(CadenceGEMState *s, bool tx)
71
+/*
349
static void gem_set_isr(CadenceGEMState *s, int q, uint32_t flag)
72
+ * Similar, but setting tags. The architecture requires us to do this
350
{
73
+ * in 16-byte chunks. SETP accesses are not tag checked; they set
351
if (q == 0) {
74
+ * the tags.
352
- s->regs[GEM_ISR] |= flag & ~(s->regs[GEM_IMR]);
75
+ */
353
+ s->regs[R_ISR] |= flag & ~(s->regs[R_IMR]);
76
+static uint64_t set_step_tags(CPUARMState *env, uint64_t toaddr,
354
} else {
77
+ uint64_t setsize, uint32_t data, int memidx,
355
- s->regs[GEM_INT_Q1_STATUS + q - 1] |= flag &
78
+ uint32_t *mtedesc, uintptr_t ra)
356
- ~(s->regs[GEM_INT_Q1_MASK + q - 1]);
79
+{
357
+ s->regs[R_INT_Q1_STATUS + q - 1] |= flag &
80
+ void *mem;
358
+ ~(s->regs[R_INT_Q1_MASK + q - 1]);
81
+ uint64_t cleanaddr;
359
}
82
+
83
+ setsize = MIN(setsize, page_limit(toaddr));
84
+
85
+ cleanaddr = useronly_clean_ptr(toaddr);
86
+ /*
87
+ * Trapless lookup: returns NULL for invalid page, I/O,
88
+ * watchpoints, clean pages, etc.
89
+ */
90
+ mem = tlb_vaddr_to_host(env, cleanaddr, MMU_DATA_STORE, memidx);
91
+
92
+#ifndef CONFIG_USER_ONLY
93
+ if (unlikely(!mem)) {
94
+ /*
95
+ * Slow-path: just do one write. This will handle the
96
+ * watchpoint, invalid page, etc handling correctly.
97
+ * The architecture requires that we do 16 bytes at a time,
98
+ * and we know both ptr and size are 16 byte aligned.
99
+ * For clean code pages, the next iteration will see
100
+ * the page dirty and will use the fast path.
101
+ */
102
+ uint64_t repldata = data * 0x0101010101010101ULL;
103
+ MemOpIdx oi16 = make_memop_idx(MO_TE | MO_128, memidx);
104
+ cpu_st16_mmu(env, toaddr, int128_make128(repldata, repldata), oi16, ra);
105
+ mte_mops_set_tags(env, toaddr, 16, *mtedesc);
106
+ return 16;
107
+ }
108
+#endif
109
+ /* Easy case: just memset the host memory */
110
+ memset(mem, data, setsize);
111
+ mte_mops_set_tags(env, toaddr, setsize, *mtedesc);
112
+ return setsize;
113
+}
114
+
115
typedef uint64_t StepFn(CPUARMState *env, uint64_t toaddr,
116
uint64_t setsize, uint32_t data,
117
int memidx, uint32_t *mtedesc, uintptr_t ra);
118
@@ -XXX,XX +XXX,XX @@ static bool mte_checks_needed(uint64_t ptr, uint32_t desc)
119
return !tcma_check(desc, bit55, allocation_tag_from_addr(ptr));
120
}
360
}
121
361
122
+/* Take an exception if the SETG addr/size are not granule aligned */
362
@@ -XXX,XX +XXX,XX @@ static void gem_init_register_masks(CadenceGEMState *s)
123
+static void check_setg_alignment(CPUARMState *env, uint64_t ptr, uint64_t size,
363
unsigned int i;
124
+ uint32_t memidx, uintptr_t ra)
364
/* Mask of register bits which are read only */
125
+{
365
memset(&s->regs_ro[0], 0, sizeof(s->regs_ro));
126
+ if ((size != 0 && !QEMU_IS_ALIGNED(ptr, TAG_GRANULE)) ||
366
- s->regs_ro[GEM_NWCTRL] = 0xFFF80000;
127
+ !QEMU_IS_ALIGNED(size, TAG_GRANULE)) {
367
- s->regs_ro[GEM_NWSTATUS] = 0xFFFFFFFF;
128
+ arm_cpu_do_unaligned_access(env_cpu(env), ptr, MMU_DATA_STORE,
368
- s->regs_ro[GEM_DMACFG] = 0x8E00F000;
129
+ memidx, ra);
369
- s->regs_ro[GEM_TXSTATUS] = 0xFFFFFE08;
130
+
370
- s->regs_ro[GEM_RXQBASE] = 0x00000003;
131
+ }
371
- s->regs_ro[GEM_TXQBASE] = 0x00000003;
132
+}
372
- s->regs_ro[GEM_RXSTATUS] = 0xFFFFFFF0;
133
+
373
- s->regs_ro[GEM_ISR] = 0xFFFFFFFF;
134
/*
374
- s->regs_ro[GEM_IMR] = 0xFFFFFFFF;
135
* For the Memory Set operation, our implementation chooses
375
- s->regs_ro[GEM_MODID] = 0xFFFFFFFF;
136
* always to use "option A", where we update Xd to the final
376
+ s->regs_ro[R_NWCTRL] = 0xFFF80000;
137
@@ -XXX,XX +XXX,XX @@ static void do_setp(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
377
+ s->regs_ro[R_NWSTATUS] = 0xFFFFFFFF;
138
378
+ s->regs_ro[R_DMACFG] = 0x8E00F000;
139
if (setsize > INT64_MAX) {
379
+ s->regs_ro[R_TXSTATUS] = 0xFFFFFE08;
140
setsize = INT64_MAX;
380
+ s->regs_ro[R_RXQBASE] = 0x00000003;
141
+ if (is_setg) {
381
+ s->regs_ro[R_TXQBASE] = 0x00000003;
142
+ setsize &= ~0xf;
382
+ s->regs_ro[R_RXSTATUS] = 0xFFFFFFF0;
143
+ }
383
+ s->regs_ro[R_ISR] = 0xFFFFFFFF;
144
}
384
+ s->regs_ro[R_IMR] = 0xFFFFFFFF;
145
385
+ s->regs_ro[R_MODID] = 0xFFFFFFFF;
146
- if (!mte_checks_needed(toaddr, mtedesc)) {
386
for (i = 0; i < s->num_priority_queues; i++) {
147
+ if (unlikely(is_setg)) {
387
- s->regs_ro[GEM_INT_Q1_STATUS + i] = 0xFFFFFFFF;
148
+ check_setg_alignment(env, toaddr, setsize, memidx, ra);
388
- s->regs_ro[GEM_INT_Q1_ENABLE + i] = 0xFFFFF319;
149
+ } else if (!mte_checks_needed(toaddr, mtedesc)) {
389
- s->regs_ro[GEM_INT_Q1_DISABLE + i] = 0xFFFFF319;
150
mtedesc = 0;
390
- s->regs_ro[GEM_INT_Q1_MASK + i] = 0xFFFFFFFF;
151
}
391
+ s->regs_ro[R_INT_Q1_STATUS + i] = 0xFFFFFFFF;
152
392
+ s->regs_ro[R_INT_Q1_ENABLE + i] = 0xFFFFF319;
153
@@ -XXX,XX +XXX,XX @@ void HELPER(setp)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
393
+ s->regs_ro[R_INT_Q1_DISABLE + i] = 0xFFFFF319;
154
do_setp(env, syndrome, mtedesc, set_step, false, GETPC());
394
+ s->regs_ro[R_INT_Q1_MASK + i] = 0xFFFFFFFF;
395
}
396
397
/* Mask of register bits which are clear on read */
398
memset(&s->regs_rtc[0], 0, sizeof(s->regs_rtc));
399
- s->regs_rtc[GEM_ISR] = 0xFFFFFFFF;
400
+ s->regs_rtc[R_ISR] = 0xFFFFFFFF;
401
for (i = 0; i < s->num_priority_queues; i++) {
402
- s->regs_rtc[GEM_INT_Q1_STATUS + i] = 0x00000CE6;
403
+ s->regs_rtc[R_INT_Q1_STATUS + i] = 0x00000CE6;
404
}
405
406
/* Mask of register bits which are write 1 to clear */
407
memset(&s->regs_w1c[0], 0, sizeof(s->regs_w1c));
408
- s->regs_w1c[GEM_TXSTATUS] = 0x000001F7;
409
- s->regs_w1c[GEM_RXSTATUS] = 0x0000000F;
410
+ s->regs_w1c[R_TXSTATUS] = 0x000001F7;
411
+ s->regs_w1c[R_RXSTATUS] = 0x0000000F;
412
413
/* Mask of register bits which are write only */
414
memset(&s->regs_wo[0], 0, sizeof(s->regs_wo));
415
- s->regs_wo[GEM_NWCTRL] = 0x00073E60;
416
- s->regs_wo[GEM_IER] = 0x07FFFFFF;
417
- s->regs_wo[GEM_IDR] = 0x07FFFFFF;
418
+ s->regs_wo[R_NWCTRL] = 0x00073E60;
419
+ s->regs_wo[R_IER] = 0x07FFFFFF;
420
+ s->regs_wo[R_IDR] = 0x07FFFFFF;
421
for (i = 0; i < s->num_priority_queues; i++) {
422
- s->regs_wo[GEM_INT_Q1_ENABLE + i] = 0x00000CE6;
423
- s->regs_wo[GEM_INT_Q1_DISABLE + i] = 0x00000CE6;
424
+ s->regs_wo[R_INT_Q1_ENABLE + i] = 0x00000CE6;
425
+ s->regs_wo[R_INT_Q1_DISABLE + i] = 0x00000CE6;
426
}
155
}
427
}
156
428
157
+void HELPER(setgp)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
429
@@ -XXX,XX +XXX,XX @@ static bool gem_can_receive(NetClientState *nc)
158
+{
430
s = qemu_get_nic_opaque(nc);
159
+ do_setp(env, syndrome, mtedesc, set_step_tags, true, GETPC());
431
160
+}
432
/* Do nothing if receive is not enabled. */
161
+
433
- if (!(s->regs[GEM_NWCTRL] & GEM_NWCTRL_RXENA)) {
162
static void do_setm(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
434
+ if (!(s->regs[R_NWCTRL] & GEM_NWCTRL_RXENA)) {
163
StepFn *stepfn, bool is_setg, uintptr_t ra)
435
if (s->can_rx_state != 1) {
436
s->can_rx_state = 1;
437
DB_PRINT("can't receive - no enable\n");
438
@@ -XXX,XX +XXX,XX @@ static void gem_update_int_status(CadenceGEMState *s)
164
{
439
{
165
@@ -XXX,XX +XXX,XX @@ static void do_setm(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
440
int i;
166
* have an IMPDEF check for alignment here.
441
442
- qemu_set_irq(s->irq[0], !!s->regs[GEM_ISR]);
443
+ qemu_set_irq(s->irq[0], !!s->regs[R_ISR]);
444
445
for (i = 1; i < s->num_priority_queues; ++i) {
446
- qemu_set_irq(s->irq[i], !!s->regs[GEM_INT_Q1_STATUS + i - 1]);
447
+ qemu_set_irq(s->irq[i], !!s->regs[R_INT_Q1_STATUS + i - 1]);
448
}
449
}
450
451
@@ -XXX,XX +XXX,XX @@ static void gem_receive_updatestats(CadenceGEMState *s, const uint8_t *packet,
452
uint64_t octets;
453
454
/* Total octets (bytes) received */
455
- octets = ((uint64_t)(s->regs[GEM_OCTRXLO]) << 32) |
456
- s->regs[GEM_OCTRXHI];
457
+ octets = ((uint64_t)(s->regs[R_OCTRXLO]) << 32) |
458
+ s->regs[R_OCTRXHI];
459
octets += bytes;
460
- s->regs[GEM_OCTRXLO] = octets >> 32;
461
- s->regs[GEM_OCTRXHI] = octets;
462
+ s->regs[R_OCTRXLO] = octets >> 32;
463
+ s->regs[R_OCTRXHI] = octets;
464
465
/* Error-free Frames received */
466
- s->regs[GEM_RXCNT]++;
467
+ s->regs[R_RXCNT]++;
468
469
/* Error-free Broadcast Frames counter */
470
if (!memcmp(packet, broadcast_addr, 6)) {
471
- s->regs[GEM_RXBROADCNT]++;
472
+ s->regs[R_RXBROADCNT]++;
473
}
474
475
/* Error-free Multicast Frames counter */
476
if (packet[0] == 0x01) {
477
- s->regs[GEM_RXMULTICNT]++;
478
+ s->regs[R_RXMULTICNT]++;
479
}
480
481
if (bytes <= 64) {
482
- s->regs[GEM_RX64CNT]++;
483
+ s->regs[R_RX64CNT]++;
484
} else if (bytes <= 127) {
485
- s->regs[GEM_RX65CNT]++;
486
+ s->regs[R_RX65CNT]++;
487
} else if (bytes <= 255) {
488
- s->regs[GEM_RX128CNT]++;
489
+ s->regs[R_RX128CNT]++;
490
} else if (bytes <= 511) {
491
- s->regs[GEM_RX256CNT]++;
492
+ s->regs[R_RX256CNT]++;
493
} else if (bytes <= 1023) {
494
- s->regs[GEM_RX512CNT]++;
495
+ s->regs[R_RX512CNT]++;
496
} else if (bytes <= 1518) {
497
- s->regs[GEM_RX1024CNT]++;
498
+ s->regs[R_RX1024CNT]++;
499
} else {
500
- s->regs[GEM_RX1519CNT]++;
501
+ s->regs[R_RX1519CNT]++;
502
}
503
}
504
505
@@ -XXX,XX +XXX,XX @@ static int gem_mac_address_filter(CadenceGEMState *s, const uint8_t *packet)
506
int i, is_mc;
507
508
/* Promiscuous mode? */
509
- if (s->regs[GEM_NWCFG] & GEM_NWCFG_PROMISC) {
+ if (s->regs[R_NWCFG] & GEM_NWCFG_PROMISC) {
 return GEM_RX_PROMISCUOUS_ACCEPT;
 }

 if (!memcmp(packet, broadcast_addr, 6)) {
 /* Reject broadcast packets? */
- if (s->regs[GEM_NWCFG] & GEM_NWCFG_BCAST_REJ) {
+ if (s->regs[R_NWCFG] & GEM_NWCFG_BCAST_REJ) {
 return GEM_RX_REJECT;
 }
 return GEM_RX_BROADCAST_ACCEPT;
@@ -XXX,XX +XXX,XX @@ static int gem_mac_address_filter(CadenceGEMState *s, const uint8_t *packet)

 /* Accept packets -w- hash match? */
 is_mc = is_multicast_ether_addr(packet);
- if ((is_mc && (s->regs[GEM_NWCFG] & GEM_NWCFG_MCAST_HASH)) ||
- (!is_mc && (s->regs[GEM_NWCFG] & GEM_NWCFG_UCAST_HASH))) {
+ if ((is_mc && (s->regs[R_NWCFG] & GEM_NWCFG_MCAST_HASH)) ||
+ (!is_mc && (s->regs[R_NWCFG] & GEM_NWCFG_UCAST_HASH))) {
 uint64_t buckets;
 unsigned hash_index;

 hash_index = calc_mac_hash(packet);
- buckets = ((uint64_t)s->regs[GEM_HASHHI] << 32) | s->regs[GEM_HASHLO];
+ buckets = ((uint64_t)s->regs[R_HASHHI] << 32) | s->regs[R_HASHLO];
 if ((buckets >> hash_index) & 1) {
 return is_mc ? GEM_RX_MULTICAST_HASH_ACCEPT
 : GEM_RX_UNICAST_HASH_ACCEPT;
@@ -XXX,XX +XXX,XX @@ static int gem_mac_address_filter(CadenceGEMState *s, const uint8_t *packet)
 }

 /* Check all 4 specific addresses */
- gem_spaddr = (uint8_t *)&(s->regs[GEM_SPADDR1LO]);
+ gem_spaddr = (uint8_t *)&(s->regs[R_SPADDR1LO]);
 for (i = 3; i >= 0; i--) {
 if (s->sar_active[i] && !memcmp(packet, gem_spaddr + 8 * i, 6)) {
 return GEM_RX_SAR_ACCEPT + i;
@@ -XXX,XX +XXX,XX @@ static int get_queue_from_screen(CadenceGEMState *s, uint8_t *rxbuf_ptr,
 int i, j;

 for (i = 0; i < s->num_type1_screeners; i++) {
- reg = s->regs[GEM_SCREENING_TYPE1_REGISTER_0 + i];
+ reg = s->regs[R_SCREENING_TYPE1_REG0 + i];
 matched = false;
 mismatched = false;

@@ -XXX,XX +XXX,XX @@ static int get_queue_from_screen(CadenceGEMState *s, uint8_t *rxbuf_ptr,
 }

 for (i = 0; i < s->num_type2_screeners; i++) {
- reg = s->regs[GEM_SCREENING_TYPE2_REGISTER_0 + i];
+ reg = s->regs[R_SCREENING_TYPE2_REG0 + i];
 matched = false;
 mismatched = false;

@@ -XXX,XX +XXX,XX @@ static int get_queue_from_screen(CadenceGEMState *s, uint8_t *rxbuf_ptr,
 qemu_log_mask(LOG_GUEST_ERROR, "Out of range ethertype "
 "register index: %d\n", et_idx);
 }
- if (type == s->regs[GEM_SCREENING_TYPE2_ETHERTYPE_REG_0 +
+ if (type == s->regs[R_SCREENING_TYPE2_ETHERTYPE_REG0 +
 et_idx]) {
 matched = true;
 } else {
@@ -XXX,XX +XXX,XX @@ static int get_queue_from_screen(CadenceGEMState *s, uint8_t *rxbuf_ptr,
 "register index: %d\n", cr_idx);
 }

- cr0 = s->regs[GEM_TYPE2_COMPARE_0_WORD_0 + cr_idx * 2];
- cr1 = s->regs[GEM_TYPE2_COMPARE_0_WORD_0 + cr_idx * 2 + 1];
+ cr0 = s->regs[R_TYPE2_COMPARE_0_WORD_0 + cr_idx * 2];
+ cr1 = s->regs[R_TYPE2_COMPARE_0_WORD_0 + cr_idx * 2 + 1];
 offset = extract32(cr1, GEM_T2CW1_OFFSET_VALUE_SHIFT,
 GEM_T2CW1_OFFSET_VALUE_WIDTH);

@@ -XXX,XX +XXX,XX @@ static uint32_t gem_get_queue_base_addr(CadenceGEMState *s, bool tx, int q)

 switch (q) {
 case 0:
- base_addr = s->regs[tx ? GEM_TXQBASE : GEM_RXQBASE];
+ base_addr = s->regs[tx ? R_TXQBASE : R_RXQBASE];
 break;
 case 1 ... (MAX_PRIORITY_QUEUES - 1):
- base_addr = s->regs[(tx ? GEM_TRANSMIT_Q1_PTR :
- GEM_RECEIVE_Q1_PTR) + q - 1];
+ base_addr = s->regs[(tx ? R_TRANSMIT_Q1_PTR :
+ R_RECEIVE_Q1_PTR) + q - 1];
 break;
 default:
 g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static hwaddr gem_get_desc_addr(CadenceGEMState *s, bool tx, int q)
 {
 hwaddr desc_addr = 0;

- if (s->regs[GEM_DMACFG] & GEM_DMACFG_ADDR_64B) {
- desc_addr = s->regs[tx ? GEM_TBQPH : GEM_RBQPH];
+ if (s->regs[R_DMACFG] & GEM_DMACFG_ADDR_64B) {
+ desc_addr = s->regs[tx ? R_TBQPH : R_RBQPH];
 }
 desc_addr <<= 32;
 desc_addr |= tx ? s->tx_desc_addr[q] : s->rx_desc_addr[q];
@@ -XXX,XX +XXX,XX @@ static void gem_get_rx_desc(CadenceGEMState *s, int q)
 /* Descriptor owned by software ? */
 if (rx_desc_get_ownership(s->rx_desc[q]) == 1) {
 DB_PRINT("descriptor 0x%" HWADDR_PRIx " owned by sw.\n", desc_addr);
- s->regs[GEM_RXSTATUS] |= GEM_RXSTATUS_NOBUF;
+ s->regs[R_RXSTATUS] |= GEM_RXSTATUS_NOBUF;
 gem_set_isr(s, q, GEM_INT_RXUSED);
 /* Handle interrupt consequences */
 gem_update_int_status(s);
@@ -XXX,XX +XXX,XX @@ static ssize_t gem_receive(NetClientState *nc, const uint8_t *buf, size_t size)
 }

 /* Discard packets with receive length error enabled ? */
- if (s->regs[GEM_NWCFG] & GEM_NWCFG_LERR_DISC) {
+ if (s->regs[R_NWCFG] & GEM_NWCFG_LERR_DISC) {
 unsigned type_len;

 /* Fish the ethertype / length field out of the RX packet */
@@ -XXX,XX +XXX,XX @@ static ssize_t gem_receive(NetClientState *nc, const uint8_t *buf, size_t size)
 /*
 * Determine configured receive buffer offset (probably 0)
 */
- rxbuf_offset = (s->regs[GEM_NWCFG] & GEM_NWCFG_BUFF_OFST_M) >>
+ rxbuf_offset = (s->regs[R_NWCFG] & GEM_NWCFG_BUFF_OFST_M) >>
 GEM_NWCFG_BUFF_OFST_S;

 /* The configure size of each receive buffer. Determines how many
 * buffers needed to hold this packet.
 */
- rxbufsize = ((s->regs[GEM_DMACFG] & GEM_DMACFG_RBUFSZ_M) >>
+ rxbufsize = ((s->regs[R_DMACFG] & GEM_DMACFG_RBUFSZ_M) >>
 GEM_DMACFG_RBUFSZ_S) * GEM_DMACFG_RBUFSZ_MUL;
 bytes_to_copy = size;

@@ -XXX,XX +XXX,XX @@ static ssize_t gem_receive(NetClientState *nc, const uint8_t *buf, size_t size)
 }

 /* Strip of FCS field ? (usually yes) */
- if (s->regs[GEM_NWCFG] & GEM_NWCFG_STRIP_FCS) {
+ if (s->regs[R_NWCFG] & GEM_NWCFG_STRIP_FCS) {
 rxbuf_ptr = (void *)buf;
 } else {
 unsigned crc_val;
@@ -XXX,XX +XXX,XX @@ static ssize_t gem_receive(NetClientState *nc, const uint8_t *buf, size_t size)
 /* Count it */
 gem_receive_updatestats(s, buf, size);

- s->regs[GEM_RXSTATUS] |= GEM_RXSTATUS_FRMRCVD;
+ s->regs[R_RXSTATUS] |= GEM_RXSTATUS_FRMRCVD;
 gem_set_isr(s, q, GEM_INT_RXCMPL);

 /* Handle interrupt consequences */
@@ -XXX,XX +XXX,XX @@ static void gem_transmit_updatestats(CadenceGEMState *s, const uint8_t *packet,
 uint64_t octets;

 /* Total octets (bytes) transmitted */
- octets = ((uint64_t)(s->regs[GEM_OCTTXLO]) << 32) |
- s->regs[GEM_OCTTXHI];
+ octets = ((uint64_t)(s->regs[R_OCTTXLO]) << 32) |
+ s->regs[R_OCTTXHI];
 octets += bytes;
- s->regs[GEM_OCTTXLO] = octets >> 32;
- s->regs[GEM_OCTTXHI] = octets;
+ s->regs[R_OCTTXLO] = octets >> 32;
+ s->regs[R_OCTTXHI] = octets;

 /* Error-free Frames transmitted */
- s->regs[GEM_TXCNT]++;
+ s->regs[R_TXCNT]++;

 /* Error-free Broadcast Frames counter */
 if (!memcmp(packet, broadcast_addr, 6)) {
- s->regs[GEM_TXBCNT]++;
+ s->regs[R_TXBCNT]++;
 }

 /* Error-free Multicast Frames counter */
 if (packet[0] == 0x01) {
- s->regs[GEM_TXMCNT]++;
+ s->regs[R_TXMCNT]++;
 }

 if (bytes <= 64) {
- s->regs[GEM_TX64CNT]++;
+ s->regs[R_TX64CNT]++;
 } else if (bytes <= 127) {
- s->regs[GEM_TX65CNT]++;
+ s->regs[R_TX65CNT]++;
 } else if (bytes <= 255) {
- s->regs[GEM_TX128CNT]++;
+ s->regs[R_TX128CNT]++;
 } else if (bytes <= 511) {
- s->regs[GEM_TX256CNT]++;
+ s->regs[R_TX256CNT]++;
 } else if (bytes <= 1023) {
- s->regs[GEM_TX512CNT]++;
+ s->regs[R_TX512CNT]++;
 } else if (bytes <= 1518) {
- s->regs[GEM_TX1024CNT]++;
+ s->regs[R_TX1024CNT]++;
 } else {
- s->regs[GEM_TX1519CNT]++;
+ s->regs[R_TX1519CNT]++;
 }
 }

@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)
 int q = 0;

 /* Do nothing if transmit is not enabled. */
- if (!(s->regs[GEM_NWCTRL] & GEM_NWCTRL_TXENA)) {
+ if (!(s->regs[R_NWCTRL] & GEM_NWCTRL_TXENA)) {
 return;
 }
@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)
 while (tx_desc_get_used(desc) == 0) {

 /* Do nothing if transmit is not enabled. */
- if (!(s->regs[GEM_NWCTRL] & GEM_NWCTRL_TXENA)) {
+ if (!(s->regs[R_NWCTRL] & GEM_NWCTRL_TXENA)) {
 return;
 }
 print_gem_tx_desc(desc, q);
@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)
 }
 DB_PRINT("TX descriptor next: 0x%08x\n", s->tx_desc_addr[q]);

- s->regs[GEM_TXSTATUS] |= GEM_TXSTATUS_TXCMPL;
+ s->regs[R_TXSTATUS] |= GEM_TXSTATUS_TXCMPL;
 gem_set_isr(s, q, GEM_INT_TXCMPL);

 /* Handle interrupt consequences */
 gem_update_int_status(s);

 /* Is checksum offload enabled? */
- if (s->regs[GEM_DMACFG] & GEM_DMACFG_TXCSUM_OFFL) {
+ if (s->regs[R_DMACFG] & GEM_DMACFG_TXCSUM_OFFL) {
 net_checksum_calculate(s->tx_packet, total_bytes, CSUM_ALL);
 }
 }
@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)
 gem_transmit_updatestats(s, s->tx_packet, total_bytes);

 /* Send the packet somewhere */
- if (s->phy_loop || (s->regs[GEM_NWCTRL] &
+ if (s->phy_loop || (s->regs[R_NWCTRL] &
 GEM_NWCTRL_LOCALLOOP)) {
 qemu_receive_packet(qemu_get_queue(s->nic), s->tx_packet,
 total_bytes);
@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)

 /* read next descriptor */
 if (tx_desc_get_wrap(desc)) {
-
- if (s->regs[GEM_DMACFG] & GEM_DMACFG_ADDR_64B) {
- packet_desc_addr = s->regs[GEM_TBQPH];
+ if (s->regs[R_DMACFG] & GEM_DMACFG_ADDR_64B) {
+ packet_desc_addr = s->regs[R_TBQPH];
 packet_desc_addr <<= 32;
 } else {
 packet_desc_addr = 0;
@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)
 }

 if (tx_desc_get_used(desc)) {
- s->regs[GEM_TXSTATUS] |= GEM_TXSTATUS_USED;
+ s->regs[R_TXSTATUS] |= GEM_TXSTATUS_USED;
 /* IRQ TXUSED is defined only for queue 0 */
 if (q == 0) {
 gem_set_isr(s, 0, GEM_INT_TXUSED);
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)

 /* Set post reset register values */
 memset(&s->regs[0], 0, sizeof(s->regs));
- s->regs[GEM_NWCFG] = 0x00080000;
- s->regs[GEM_NWSTATUS] = 0x00000006;
- s->regs[GEM_DMACFG] = 0x00020784;
- s->regs[GEM_IMR] = 0x07ffffff;
- s->regs[GEM_TXPAUSE] = 0x0000ffff;
- s->regs[GEM_TXPARTIALSF] = 0x000003ff;
- s->regs[GEM_RXPARTIALSF] = 0x000003ff;
- s->regs[GEM_MODID] = s->revision;
- s->regs[GEM_DESCONF] = 0x02D00111;
- s->regs[GEM_DESCONF2] = 0x2ab10000 | s->jumbo_max_len;
- s->regs[GEM_DESCONF5] = 0x002f2045;
- s->regs[GEM_DESCONF6] = GEM_DESCONF6_64B_MASK;
- s->regs[GEM_INT_Q1_MASK] = 0x00000CE6;
- s->regs[GEM_JUMBO_MAX_LEN] = s->jumbo_max_len;
+ s->regs[R_NWCFG] = 0x00080000;
+ s->regs[R_NWSTATUS] = 0x00000006;
+ s->regs[R_DMACFG] = 0x00020784;
+ s->regs[R_IMR] = 0x07ffffff;
+ s->regs[R_TXPAUSE] = 0x0000ffff;
+ s->regs[R_TXPARTIALSF] = 0x000003ff;
+ s->regs[R_RXPARTIALSF] = 0x000003ff;
+ s->regs[R_MODID] = s->revision;
+ s->regs[R_DESCONF] = 0x02D00111;
+ s->regs[R_DESCONF2] = 0x2ab10000 | s->jumbo_max_len;
+ s->regs[R_DESCONF5] = 0x002f2045;
+ s->regs[R_DESCONF6] = GEM_DESCONF6_64B_MASK;
+ s->regs[R_INT_Q1_MASK] = 0x00000CE6;
+ s->regs[R_JUMBO_MAX_LEN] = s->jumbo_max_len;

 if (s->num_priority_queues > 1) {
 queues_mask = MAKE_64BIT_MASK(1, s->num_priority_queues - 1);
- s->regs[GEM_DESCONF6] |= queues_mask;
+ s->regs[R_DESCONF6] |= queues_mask;
 }

 /* Set MAC address */
 a = &s->conf.macaddr.a[0];
- s->regs[GEM_SPADDR1LO] = a[0] | (a[1] << 8) | (a[2] << 16) | (a[3] << 24);
- s->regs[GEM_SPADDR1HI] = a[4] | (a[5] << 8);
+ s->regs[R_SPADDR1LO] = a[0] | (a[1] << 8) | (a[2] << 16) | (a[3] << 24);
+ s->regs[R_SPADDR1HI] = a[4] | (a[5] << 8);

 for (i = 0; i < 4; i++) {
 s->sar_active[i] = false;
@@ -XXX,XX +XXX,XX @@ static uint64_t gem_read(void *opaque, hwaddr offset, unsigned size)
 DB_PRINT("offset: 0x%04x read: 0x%08x\n", (unsigned)offset*4, retval);

 switch (offset) {
- case GEM_ISR:
+ case R_ISR:
 DB_PRINT("lowering irqs on ISR read\n");
 /* The interrupts get updated at the end of the function. */
 break;
- case GEM_PHYMNTNC:
+ case R_PHYMNTNC:
 if (retval & GEM_PHYMNTNC_OP_R) {
 uint32_t phy_addr, reg_num;
@@ -XXX,XX +XXX,XX @@ static void gem_write(void *opaque, hwaddr offset, uint64_t val,

 /* Handle register write side effects */
 switch (offset) {
- case GEM_NWCTRL:
+ case R_NWCTRL:
 if (val & GEM_NWCTRL_RXENA) {
 for (i = 0; i < s->num_priority_queues; ++i) {
 gem_get_rx_desc(s, i);
@@ -XXX,XX +XXX,XX @@ static void gem_write(void *opaque, hwaddr offset, uint64_t val,
 }
 break;

- case GEM_TXSTATUS:
+ case R_TXSTATUS:
 gem_update_int_status(s);
 break;
- case GEM_RXQBASE:
+ case R_RXQBASE:
 s->rx_desc_addr[0] = val;
 break;
- case GEM_RECEIVE_Q1_PTR ... GEM_RECEIVE_Q7_PTR:
- s->rx_desc_addr[offset - GEM_RECEIVE_Q1_PTR + 1] = val;
+ case R_RECEIVE_Q1_PTR ... R_RECEIVE_Q7_PTR:
+ s->rx_desc_addr[offset - R_RECEIVE_Q1_PTR + 1] = val;
 break;
- case GEM_TXQBASE:
+ case R_TXQBASE:
 s->tx_desc_addr[0] = val;
 break;
- case GEM_TRANSMIT_Q1_PTR ... GEM_TRANSMIT_Q7_PTR:
- s->tx_desc_addr[offset - GEM_TRANSMIT_Q1_PTR + 1] = val;
+ case R_TRANSMIT_Q1_PTR ... R_TRANSMIT_Q7_PTR:
+ s->tx_desc_addr[offset - R_TRANSMIT_Q1_PTR + 1] = val;
 break;
- case GEM_RXSTATUS:
+ case R_RXSTATUS:
 gem_update_int_status(s);
 break;
- case GEM_IER:
- s->regs[GEM_IMR] &= ~val;
+ case R_IER:
+ s->regs[R_IMR] &= ~val;
 gem_update_int_status(s);
 break;
- case GEM_JUMBO_MAX_LEN:
- s->regs[GEM_JUMBO_MAX_LEN] = val & MAX_JUMBO_FRAME_SIZE_MASK;
+ case R_JUMBO_MAX_LEN:
+ s->regs[R_JUMBO_MAX_LEN] = val & MAX_JUMBO_FRAME_SIZE_MASK;
 break;
- case GEM_INT_Q1_ENABLE ... GEM_INT_Q7_ENABLE:
- s->regs[GEM_INT_Q1_MASK + offset - GEM_INT_Q1_ENABLE] &= ~val;
+ case R_INT_Q1_ENABLE ... R_INT_Q7_ENABLE:
+ s->regs[R_INT_Q1_MASK + offset - R_INT_Q1_ENABLE] &= ~val;
 gem_update_int_status(s);
 break;
- case GEM_IDR:
- s->regs[GEM_IMR] |= val;
+ case R_IDR:
+ s->regs[R_IMR] |= val;
 gem_update_int_status(s);
 break;
- case GEM_INT_Q1_DISABLE ... GEM_INT_Q7_DISABLE:
- s->regs[GEM_INT_Q1_MASK + offset - GEM_INT_Q1_DISABLE] |= val;
+ case R_INT_Q1_DISABLE ... R_INT_Q7_DISABLE:
+ s->regs[R_INT_Q1_MASK + offset - R_INT_Q1_DISABLE] |= val;
 gem_update_int_status(s);
 break;
- case GEM_SPADDR1LO:
- case GEM_SPADDR2LO:
- case GEM_SPADDR3LO:
- case GEM_SPADDR4LO:
- s->sar_active[(offset - GEM_SPADDR1LO) / 2] = false;
+ case R_SPADDR1LO:
+ case R_SPADDR2LO:
+ case R_SPADDR3LO:
+ case R_SPADDR4LO:
+ s->sar_active[(offset - R_SPADDR1LO) / 2] = false;
 break;
- case GEM_SPADDR1HI:
- case GEM_SPADDR2HI:
- case GEM_SPADDR3HI:
- case GEM_SPADDR4HI:
- s->sar_active[(offset - GEM_SPADDR1HI) / 2] = true;
+ case R_SPADDR1HI:
+ case R_SPADDR2HI:
+ case R_SPADDR3HI:
+ case R_SPADDR4HI:
+ s->sar_active[(offset - R_SPADDR1HI) / 2] = true;
 break;
- case GEM_PHYMNTNC:
+ case R_PHYMNTNC:
 if (val & GEM_PHYMNTNC_OP_W) {
 uint32_t phy_addr, reg_num;

--
2.34.1

 */
- if (!mte_checks_needed(toaddr, mtedesc)) {
+ if (unlikely(is_setg)) {
+ check_setg_alignment(env, toaddr, setsize, memidx, ra);
+ } else if (!mte_checks_needed(toaddr, mtedesc)) {
 mtedesc = 0;
 }

@@ -XXX,XX +XXX,XX @@ void HELPER(setm)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
 do_setm(env, syndrome, mtedesc, set_step, false, GETPC());
 }

+void HELPER(setgm)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
+{
+ do_setm(env, syndrome, mtedesc, set_step_tags, true, GETPC());
+}
+
 static void do_sete(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
 StepFn *stepfn, bool is_setg, uintptr_t ra)
 {
@@ -XXX,XX +XXX,XX @@ static void do_sete(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc,
 mops_mismatch_exception_target_el(env), ra);
 }

- if (!mte_checks_needed(toaddr, mtedesc)) {
+ if (unlikely(is_setg)) {
+ check_setg_alignment(env, toaddr, setsize, memidx, ra);
+ } else if (!mte_checks_needed(toaddr, mtedesc)) {
 mtedesc = 0;
 }

@@ -XXX,XX +XXX,XX @@ void HELPER(sete)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
 {
 do_sete(env, syndrome, mtedesc, set_step, false, GETPC());
 }
+
+void HELPER(setge)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc)
+{
+ do_sete(env, syndrome, mtedesc, set_step_tags, true, GETPC());
+}
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size,
 return n * TAG_GRANULE - (ptr - tag_first);
 }
 }
+
+void mte_mops_set_tags(CPUARMState *env, uint64_t ptr, uint64_t size,
+ uint32_t desc)
+{
+ int mmu_idx, tag_count;
+ uint64_t ptr_tag;
+ void *mem;
+
+ if (!desc) {
+ /* Tags not actually enabled */
+ return;
+ }
+
+ mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
+ /* True probe: this will never fault */
+ mem = allocation_tag_mem_probe(env, mmu_idx, ptr, MMU_DATA_STORE, size,
+ MMU_DATA_STORE, true, 0);
+ if (!mem) {
+ return;
+ }
+
+ /*
+ * We know that ptr and size are both TAG_GRANULE aligned; store
+ * the tag from the pointer value into the tag memory.
+ */
+ ptr_tag = allocation_tag_from_addr(ptr);
+ tag_count = size / TAG_GRANULE;
+ if (ptr & TAG_GRANULE) {
+ /* Not 2*TAG_GRANULE-aligned: store tag to first nibble */
+ store_tag1_parallel(TAG_GRANULE, mem, ptr_tag);
+ mem++;
+ tag_count--;
+ }
+ memset(mem, ptr_tag | (ptr_tag << 4), tag_count / 2);
+ if (tag_count & 1) {
+ /* Final trailing unaligned nibble */
+ mem += tag_count / 2;
+ store_tag1_parallel(0, mem, ptr_tag);
+ }
+}
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(STZ2G, aa64_mte_insn_reg, do_STG, a, true, true)

 typedef void SetFn(TCGv_env, TCGv_i32, TCGv_i32);

-static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, SetFn fn)
+static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue,
+ bool is_setg, SetFn fn)
 {
 int memidx;
 uint32_t syndrome, desc = 0;

+ if (is_setg && !dc_isar_feature(aa64_mte, s)) {
+ return false;
+ }
+
 /*
 * UNPREDICTABLE cases: we choose to UNDEF, which allows
 * us to pull this check before the CheckMOPSEnabled() test
@@ -XXX,XX +XXX,XX @@ static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, SetFn fn)
 * We pass option_a == true, matching our implementation;
 * we pass wrong_option == false: helper function may set that bit.
 */
- syndrome = syn_mop(true, false, (a->nontemp << 1) | a->unpriv,
+ syndrome = syn_mop(true, is_setg, (a->nontemp << 1) | a->unpriv,
 is_epilogue, false, true, a->rd, a->rs, a->rn);

- if (s->mte_active[a->unpriv]) {
+ if (is_setg ? s->ata[a->unpriv] : s->mte_active[a->unpriv]) {
 /* We may need to do MTE tag checking, so assemble the descriptor */
 desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
 desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
@@ -XXX,XX +XXX,XX @@ static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, SetFn fn)
 return true;
 }

-TRANS_FEAT(SETP, aa64_mops, do_SET, a, false, gen_helper_setp)
-TRANS_FEAT(SETM, aa64_mops, do_SET, a, false, gen_helper_setm)
-TRANS_FEAT(SETE, aa64_mops, do_SET, a, true, gen_helper_sete)
+TRANS_FEAT(SETP, aa64_mops, do_SET, a, false, false, gen_helper_setp)
+TRANS_FEAT(SETM, aa64_mops, do_SET, a, false, false, gen_helper_setm)
+TRANS_FEAT(SETE, aa64_mops, do_SET, a, true, false, gen_helper_sete)
+TRANS_FEAT(SETGP, aa64_mops, do_SET, a, false, true, gen_helper_setgp)
+TRANS_FEAT(SETGM, aa64_mops, do_SET, a, false, true, gen_helper_setgm)
+TRANS_FEAT(SETGE, aa64_mops, do_SET, a, true, true, gen_helper_setge)

 typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64);

--
2.34.1
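As background on the tag-storage layout the mte_mops_set_tags() change relies on: MTE allocation tags are 4 bits each, so tag memory packs two tags per byte, and a bulk fill has to handle an odd nibble at the head and tail around a memset of whole bytes. The following is a minimal standalone sketch of just that packing arithmetic; `store_nibble` and `set_tags` are illustrative stand-ins, not the QEMU helpers, and the nibble assignment (odd granule in the high nibble) is an assumption mirroring the patch's structure.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define TAG_GRANULE 16 /* one 4-bit tag covers 16 bytes of memory */

/* Store one 4-bit tag into the low (even granule) or high (odd) nibble. */
static void store_nibble(uint8_t *tag_mem, int odd, uint8_t tag)
{
    if (odd) {
        *tag_mem = (*tag_mem & 0x0f) | (uint8_t)(tag << 4);
    } else {
        *tag_mem = (*tag_mem & 0xf0) | tag;
    }
}

/*
 * Fill tag memory for [ptr, ptr + size) with 'tag': unaligned head
 * nibble, memset of whole bytes, then an unaligned tail nibble,
 * mirroring the head/middle/tail shape of mte_mops_set_tags().
 * ptr and size are TAG_GRANULE aligned, as in the patch.
 */
static void set_tags(uint8_t *tag_mem, uint64_t ptr, uint64_t size, uint8_t tag)
{
    uint64_t tag_count = size / TAG_GRANULE;

    if (ptr & TAG_GRANULE) {
        /* Start granule is odd: fill the high nibble of the first byte */
        store_nibble(tag_mem, 1, tag);
        tag_mem++;
        tag_count--;
    }
    memset(tag_mem, tag | (tag << 4), tag_count / 2);
    if (tag_count & 1) {
        /* Final trailing low nibble */
        store_nibble(tag_mem + tag_count / 2, 0, tag);
    }
}
```

With a 5-granule region starting at an odd granule (address 16), this writes one high nibble, two full bytes, and no tail nibble.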
Add the code to report the arm32 hwcaps we were previously missing:
ss, ssbs, fphp, asimdhp, asimddp, asimdfhm, asimdbf16, i8mm

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 linux-user/elfload.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ uint32_t get_elf_hwcap(void)
 }
 }
 GET_FEATURE_ID(aa32_simdfmac, ARM_HWCAP_ARM_VFPv4);
+ /*
+ * MVFR1.FPHP and .SIMDHP must be in sync, and QEMU uses the same
+ * isar_feature function for both. The kernel reports them as two hwcaps.
+ */
+ GET_FEATURE_ID(aa32_fp16_arith, ARM_HWCAP_ARM_FPHP);
+ GET_FEATURE_ID(aa32_fp16_arith, ARM_HWCAP_ARM_ASIMDHP);
+ GET_FEATURE_ID(aa32_dp, ARM_HWCAP_ARM_ASIMDDP);
+ GET_FEATURE_ID(aa32_fhm, ARM_HWCAP_ARM_ASIMDFHM);
+ GET_FEATURE_ID(aa32_bf16, ARM_HWCAP_ARM_ASIMDBF16);
+ GET_FEATURE_ID(aa32_i8mm, ARM_HWCAP_ARM_I8MM);

 return hwcaps;
 }
@@ -XXX,XX +XXX,XX @@ uint32_t get_elf_hwcap2(void)
 GET_FEATURE_ID(aa32_sha1, ARM_HWCAP2_ARM_SHA1);
 GET_FEATURE_ID(aa32_sha2, ARM_HWCAP2_ARM_SHA2);
 GET_FEATURE_ID(aa32_crc32, ARM_HWCAP2_ARM_CRC32);
+ GET_FEATURE_ID(aa32_sb, ARM_HWCAP2_ARM_SB);
+ GET_FEATURE_ID(aa32_ssbs, ARM_HWCAP2_ARM_SSBS);
 return hwcaps;
 }

--
2.34.1

From: Luc Michel <luc.michel@amd.com>

Describe screening registers fields using the FIELD macros.

Signed-off-by: Luc Michel <luc.michel@amd.com>
Reviewed-by: sai.pavan.boddu@amd.com
Message-id: 20231017194422.4124691-3-luc.michel@amd.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/net/cadence_gem.c | 94 ++++++++++++++++++++++----------------------
 1 file changed, 48 insertions(+), 46 deletions(-)

diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -XXX,XX +XXX,XX @@ REG32(INT_Q1_DISABLE, 0x620)
 REG32(INT_Q7_DISABLE, 0x638)

 REG32(SCREENING_TYPE1_REG0, 0x500)
-
-#define GEM_ST1R_UDP_PORT_MATCH_ENABLE (1 << 29)
-#define GEM_ST1R_DSTC_ENABLE (1 << 28)
-#define GEM_ST1R_UDP_PORT_MATCH_SHIFT (12)
-#define GEM_ST1R_UDP_PORT_MATCH_WIDTH (27 - GEM_ST1R_UDP_PORT_MATCH_SHIFT + 1)
-#define GEM_ST1R_DSTC_MATCH_SHIFT (4)
-#define GEM_ST1R_DSTC_MATCH_WIDTH (11 - GEM_ST1R_DSTC_MATCH_SHIFT + 1)
-#define GEM_ST1R_QUEUE_SHIFT (0)
-#define GEM_ST1R_QUEUE_WIDTH (3 - GEM_ST1R_QUEUE_SHIFT + 1)
+ FIELD(SCREENING_TYPE1_REG0, QUEUE_NUM, 0, 4)
+ FIELD(SCREENING_TYPE1_REG0, DSTC_MATCH, 4, 8)
+ FIELD(SCREENING_TYPE1_REG0, UDP_PORT_MATCH, 12, 16)
+ FIELD(SCREENING_TYPE1_REG0, DSTC_ENABLE, 28, 1)
+ FIELD(SCREENING_TYPE1_REG0, UDP_PORT_MATCH_EN, 29, 1)
+ FIELD(SCREENING_TYPE1_REG0, DROP_ON_MATCH, 30, 1)

 REG32(SCREENING_TYPE2_REG0, 0x540)
-
-#define GEM_ST2R_COMPARE_A_ENABLE (1 << 18)
-#define GEM_ST2R_COMPARE_A_SHIFT (13)
-#define GEM_ST2R_COMPARE_WIDTH (17 - GEM_ST2R_COMPARE_A_SHIFT + 1)
-#define GEM_ST2R_ETHERTYPE_ENABLE (1 << 12)
-#define GEM_ST2R_ETHERTYPE_INDEX_SHIFT (9)
-#define GEM_ST2R_ETHERTYPE_INDEX_WIDTH (11 - GEM_ST2R_ETHERTYPE_INDEX_SHIFT \
- + 1)
-#define GEM_ST2R_QUEUE_SHIFT (0)
-#define GEM_ST2R_QUEUE_WIDTH (3 - GEM_ST2R_QUEUE_SHIFT + 1)
+ FIELD(SCREENING_TYPE2_REG0, QUEUE_NUM, 0, 4)
+ FIELD(SCREENING_TYPE2_REG0, VLAN_PRIORITY, 4, 3)
+ FIELD(SCREENING_TYPE2_REG0, VLAN_ENABLE, 8, 1)
+ FIELD(SCREENING_TYPE2_REG0, ETHERTYPE_REG_INDEX, 9, 3)
+ FIELD(SCREENING_TYPE2_REG0, ETHERTYPE_ENABLE, 12, 1)
+ FIELD(SCREENING_TYPE2_REG0, COMPARE_A, 13, 5)
+ FIELD(SCREENING_TYPE2_REG0, COMPARE_A_ENABLE, 18, 1)
+ FIELD(SCREENING_TYPE2_REG0, COMPARE_B, 19, 5)
+ FIELD(SCREENING_TYPE2_REG0, COMPARE_B_ENABLE, 24, 1)
+ FIELD(SCREENING_TYPE2_REG0, COMPARE_C, 25, 5)
+ FIELD(SCREENING_TYPE2_REG0, COMPARE_C_ENABLE, 30, 1)
+ FIELD(SCREENING_TYPE2_REG0, DROP_ON_MATCH, 31, 1)

 REG32(SCREENING_TYPE2_ETHERTYPE_REG0, 0x6e0)
-REG32(TYPE2_COMPARE_0_WORD_0, 0x700)

-#define GEM_T2CW1_COMPARE_OFFSET_SHIFT (7)
-#define GEM_T2CW1_COMPARE_OFFSET_WIDTH (8 - GEM_T2CW1_COMPARE_OFFSET_SHIFT + 1)
-#define GEM_T2CW1_OFFSET_VALUE_SHIFT (0)
-#define GEM_T2CW1_OFFSET_VALUE_WIDTH (6 - GEM_T2CW1_OFFSET_VALUE_SHIFT + 1)
+REG32(TYPE2_COMPARE_0_WORD_0, 0x700)
+ FIELD(TYPE2_COMPARE_0_WORD_0, MASK_VALUE, 0, 16)
+ FIELD(TYPE2_COMPARE_0_WORD_0, COMPARE_VALUE, 16, 16)
+
+REG32(TYPE2_COMPARE_0_WORD_1, 0x704)
+ FIELD(TYPE2_COMPARE_0_WORD_1, OFFSET_VALUE, 0, 7)
+ FIELD(TYPE2_COMPARE_0_WORD_1, COMPARE_OFFSET, 7, 2)
+ FIELD(TYPE2_COMPARE_0_WORD_1, DISABLE_MASK, 9, 1)
+ FIELD(TYPE2_COMPARE_0_WORD_1, COMPARE_VLAN_ID, 10, 1)

 /*****************************************/
 #define GEM_NWCTRL_TXSTART 0x00000200 /* Transmit Enable */
@@ -XXX,XX +XXX,XX @@ static int get_queue_from_screen(CadenceGEMState *s, uint8_t *rxbuf_ptr,
 mismatched = false;

 /* Screening is based on UDP Port */
- if (reg & GEM_ST1R_UDP_PORT_MATCH_ENABLE) {
+ if (FIELD_EX32(reg, SCREENING_TYPE1_REG0, UDP_PORT_MATCH_EN)) {
 uint16_t udp_port = rxbuf_ptr[14 + 22] << 8 | rxbuf_ptr[14 + 23];
- if (udp_port == extract32(reg, GEM_ST1R_UDP_PORT_MATCH_SHIFT,
- GEM_ST1R_UDP_PORT_MATCH_WIDTH)) {
+ if (udp_port == FIELD_EX32(reg, SCREENING_TYPE1_REG0, UDP_PORT_MATCH)) {
 matched = true;
 } else {
 mismatched = true;
@@ -XXX,XX +XXX,XX @@ static int get_queue_from_screen(CadenceGEMState *s, uint8_t *rxbuf_ptr,
 }

 /* Screening is based on DS/TC */
- if (reg & GEM_ST1R_DSTC_ENABLE) {
+ if (FIELD_EX32(reg, SCREENING_TYPE1_REG0, DSTC_ENABLE)) {
 uint8_t dscp = rxbuf_ptr[14 + 1];
- if (dscp == extract32(reg, GEM_ST1R_DSTC_MATCH_SHIFT,
- GEM_ST1R_DSTC_MATCH_WIDTH)) {
+ if (dscp == FIELD_EX32(reg, SCREENING_TYPE1_REG0, DSTC_MATCH)) {
 matched = true;
 } else {
 mismatched = true;
@@ -XXX,XX +XXX,XX @@ static int get_queue_from_screen(CadenceGEMState *s, uint8_t *rxbuf_ptr,
 }

 if (matched && !mismatched) {
- return extract32(reg, GEM_ST1R_QUEUE_SHIFT, GEM_ST1R_QUEUE_WIDTH);
+ return FIELD_EX32(reg, SCREENING_TYPE1_REG0, QUEUE_NUM);
 }
 }

@@ -XXX,XX +XXX,XX @@ static int get_queue_from_screen(CadenceGEMState *s, uint8_t *rxbuf_ptr,
 matched = false;
 mismatched = false;

- if (reg & GEM_ST2R_ETHERTYPE_ENABLE) {
+ if (FIELD_EX32(reg, SCREENING_TYPE2_REG0, ETHERTYPE_ENABLE)) {
 uint16_t type = rxbuf_ptr[12] << 8 | rxbuf_ptr[13];
- int et_idx = extract32(reg, GEM_ST2R_ETHERTYPE_INDEX_SHIFT,
- GEM_ST2R_ETHERTYPE_INDEX_WIDTH);
+ int et_idx = FIELD_EX32(reg, SCREENING_TYPE2_REG0,
+ ETHERTYPE_REG_INDEX);

 if (et_idx > s->num_type2_screeners) {
 qemu_log_mask(LOG_GUEST_ERROR, "Out of range ethertype "
@@ -XXX,XX +XXX,XX @@ static int get_queue_from_screen(CadenceGEMState *s, uint8_t *rxbuf_ptr,

 /* Compare A, B, C */
 for (j = 0; j < 3; j++) {
- uint32_t cr0, cr1, mask;
+ uint32_t cr0, cr1, mask, compare;
 uint16_t rx_cmp;
 int offset;
- int cr_idx = extract32(reg, GEM_ST2R_COMPARE_A_SHIFT + j * 6,
- GEM_ST2R_COMPARE_WIDTH);
+ int cr_idx = extract32(reg, R_SCREENING_TYPE2_REG0_COMPARE_A_SHIFT + j * 6,
+ R_SCREENING_TYPE2_REG0_COMPARE_A_LENGTH);

- if (!(reg & (GEM_ST2R_COMPARE_A_ENABLE << (j * 6)))) {
+ if (!extract32(reg, R_SCREENING_TYPE2_REG0_COMPARE_A_ENABLE_SHIFT + j * 6,
+ R_SCREENING_TYPE2_REG0_COMPARE_A_ENABLE_LENGTH)) {
 continue;
 }
+
 if (cr_idx > s->num_type2_screeners) {
 qemu_log_mask(LOG_GUEST_ERROR, "Out of range compare "
 "register index: %d\n", cr_idx);
 }

 cr0 = s->regs[R_TYPE2_COMPARE_0_WORD_0 + cr_idx * 2];
- cr1 = s->regs[R_TYPE2_COMPARE_0_WORD_0 + cr_idx * 2 + 1];
- offset = extract32(cr1, GEM_T2CW1_OFFSET_VALUE_SHIFT,
- GEM_T2CW1_OFFSET_VALUE_WIDTH);
+ cr1 = s->regs[R_TYPE2_COMPARE_0_WORD_1 + cr_idx * 2];
+ offset = FIELD_EX32(cr1, TYPE2_COMPARE_0_WORD_1, OFFSET_VALUE);

- switch (extract32(cr1, GEM_T2CW1_COMPARE_OFFSET_SHIFT,
- GEM_T2CW1_COMPARE_OFFSET_WIDTH)) {
+ switch (FIELD_EX32(cr1, TYPE2_COMPARE_0_WORD_1, COMPARE_OFFSET)) {
 case 3: /* Skip UDP header */
 qemu_log_mask(LOG_UNIMP, "TCP compare offsets"
 "unimplemented - assuming UDP\n");
@@ -XXX,XX +XXX,XX @@ static int get_queue_from_screen(CadenceGEMState *s, uint8_t *rxbuf_ptr,
 }

 rx_cmp = rxbuf_ptr[offset] << 8 | rxbuf_ptr[offset];
- mask = extract32(cr0, 0, 16);
+ mask = FIELD_EX32(cr0, TYPE2_COMPARE_0_WORD_0, MASK_VALUE);
+ compare = FIELD_EX32(cr0, TYPE2_COMPARE_0_WORD_0, COMPARE_VALUE);

- if ((rx_cmp & mask) == (extract32(cr0, 16, 16) & mask)) {
+ if ((rx_cmp & mask) == (compare & mask)) {
 matched = true;
 } else {
 mismatched = true;
@@ -XXX,XX +XXX,XX @@ static int get_queue_from_screen(CadenceGEMState *s, uint8_t *rxbuf_ptr,
 }

 if (matched && !mismatched) {
- return extract32(reg, GEM_ST2R_QUEUE_SHIFT, GEM_ST2R_QUEUE_WIDTH);
+ return FIELD_EX32(reg, SCREENING_TYPE2_REG0, QUEUE_NUM);
 }
 }

--
2.34.1
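For readers unfamiliar with the REG32/FIELD notation the patch converts to: these macros generate named shift/length constants and typed accessors for register bit fields. The snippet below is a simplified stand-in for QEMU's `hw/registerfields.h` (the real macros also generate `_MASK` constants and `FIELD_DP32()` deposit helpers, which are omitted here), using the Type 1 screening register layout from the patch as the example.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Simplified stand-ins for QEMU's FIELD()/FIELD_EX32():
 * FIELD(reg, field, shift, length) defines R_<reg>_<field>_SHIFT and
 * R_<reg>_<field>_LENGTH, and FIELD_EX32() extracts that bit range
 * from a 32-bit register value.
 */
#define FIELD(reg, field, shift, length) \
    enum { R_ ## reg ## _ ## field ## _SHIFT = (shift), \
           R_ ## reg ## _ ## field ## _LENGTH = (length) };

static inline uint32_t extract32(uint32_t value, int start, int length)
{
    /* Extract 'length' bits of 'value' starting at bit 'start' */
    return (value >> start) & (~0U >> (32 - length));
}

#define FIELD_EX32(v, reg, field) \
    extract32((v), R_ ## reg ## _ ## field ## _SHIFT, \
              R_ ## reg ## _ ## field ## _LENGTH)

/* The Type 1 screening register layout from the patch */
FIELD(SCREENING_TYPE1_REG0, QUEUE_NUM, 0, 4)
FIELD(SCREENING_TYPE1_REG0, DSTC_MATCH, 4, 8)
FIELD(SCREENING_TYPE1_REG0, UDP_PORT_MATCH, 12, 16)
```

The token-pasted names (`R_SCREENING_TYPE2_REG0_COMPARE_A_SHIFT` etc.) are what the patch uses directly in the `j * 6` compare-register loop, where the field macros alone cannot express the stride.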
From: Viktor Prutyanov <viktor@daynix.com>

DMP supports 42 physical memory runs at most. So, merge adjacent
physical memory ranges from QEMU ELF when possible to minimize total
number of runs.

Signed-off-by: Viktor Prutyanov <viktor@daynix.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20230915170153.10959-4-viktor@daynix.com
[PMM: fixed format string for printing size_t values]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 contrib/elf2dmp/main.c | 56 ++++++++++++++++++++++++++++++++++++------
 1 file changed, 48 insertions(+), 8 deletions(-)

From: Luc Michel <luc.michel@amd.com>

Use the FIELD macro to describe the NWCTRL register fields.

Signed-off-by: Luc Michel <luc.michel@amd.com>
Reviewed-by: sai.pavan.boddu@amd.com
Message-id: 20231017194422.4124691-4-luc.michel@amd.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/net/cadence_gem.c | 53 +++++++++++++++++++++++++++++++++-----------
 1 file changed, 40 insertions(+), 13 deletions(-)
diff --git a/contrib/elf2dmp/main.c b/contrib/elf2dmp/main.c
index XXXXXXX..XXXXXXX 100644
--- a/contrib/elf2dmp/main.c
+++ b/contrib/elf2dmp/main.c
@@ -XXX,XX +XXX,XX @@
 #define PE_NAME "ntoskrnl.exe"

 #define INITIAL_MXCSR 0x1f80
+#define MAX_NUMBER_OF_RUNS 42

 typedef struct idt_desc {
 uint16_t offset1; /* offset bits 0..15 */
@@ -XXX,XX +XXX,XX @@ static int fix_dtb(struct va_space *vs, QEMU_Elf *qe)
 return 1;
 }

+static void try_merge_runs(struct pa_space *ps,
+ WinDumpPhyMemDesc64 *PhysicalMemoryBlock)
+{
+ unsigned int merge_cnt = 0, run_idx = 0;
+
+ PhysicalMemoryBlock->NumberOfRuns = 0;
+
+ for (size_t idx = 0; idx < ps->block_nr; idx++) {
+ struct pa_block *blk = ps->block + idx;
+ struct pa_block *next = blk + 1;
+
+ PhysicalMemoryBlock->NumberOfPages += blk->size / ELF2DMP_PAGE_SIZE;
+
+ if (idx + 1 != ps->block_nr && blk->paddr + blk->size == next->paddr) {
+ printf("Block #%zu 0x%"PRIx64"+:0x%"PRIx64" and %u previous will be"
+ " merged\n", idx, blk->paddr, blk->size, merge_cnt);
+ merge_cnt++;
+ } else {
+ struct pa_block *first_merged = blk - merge_cnt;
+
+ printf("Block #%zu 0x%"PRIx64"+:0x%"PRIx64" and %u previous will be"
+ " merged to 0x%"PRIx64"+:0x%"PRIx64" (run #%u)\n",
+ idx, blk->paddr, blk->size, merge_cnt, first_merged->paddr,
+ blk->paddr + blk->size - first_merged->paddr, run_idx);
+ PhysicalMemoryBlock->Run[run_idx] = (WinDumpPhyMemRun64) {
+ .BasePage = first_merged->paddr / ELF2DMP_PAGE_SIZE,
+ .PageCount = (blk->paddr + blk->size - first_merged->paddr) /
+ ELF2DMP_PAGE_SIZE,
+ };
+ PhysicalMemoryBlock->NumberOfRuns++;
+ run_idx++;

diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -XXX,XX +XXX,XX @@
 } while (0)

 REG32(NWCTRL, 0x0) /* Network Control reg */
+ FIELD(NWCTRL, LOOPBACK , 0, 1)
+ FIELD(NWCTRL, LOOPBACK_LOCAL , 1, 1)
+ FIELD(NWCTRL, ENABLE_RECEIVE, 2, 1)
+ FIELD(NWCTRL, ENABLE_TRANSMIT, 3, 1)
+ FIELD(NWCTRL, MAN_PORT_EN , 4, 1)
+ FIELD(NWCTRL, CLEAR_ALL_STATS_REGS , 5, 1)
+ FIELD(NWCTRL, INC_ALL_STATS_REGS, 6, 1)
+ FIELD(NWCTRL, STATS_WRITE_EN, 7, 1)
+ FIELD(NWCTRL, BACK_PRESSURE, 8, 1)
+ FIELD(NWCTRL, TRANSMIT_START , 9, 1)
+ FIELD(NWCTRL, TRANSMIT_HALT, 10, 1)
+ FIELD(NWCTRL, TX_PAUSE_FRAME_RE, 11, 1)
+ FIELD(NWCTRL, TX_PAUSE_FRAME_ZE, 12, 1)
+ FIELD(NWCTRL, STATS_TAKE_SNAP, 13, 1)
+ FIELD(NWCTRL, STATS_READ_SNAP, 14, 1)
+ FIELD(NWCTRL, STORE_RX_TS, 15, 1)
+ FIELD(NWCTRL, PFC_ENABLE, 16, 1)
+ FIELD(NWCTRL, PFC_PRIO_BASED, 17, 1)
+ FIELD(NWCTRL, FLUSH_RX_PKT_PCLK , 18, 1)
+ FIELD(NWCTRL, TX_LPI_EN, 19, 1)
+ FIELD(NWCTRL, PTP_UNICAST_ENA, 20, 1)
+ FIELD(NWCTRL, ALT_SGMII_MODE, 21, 1)
+ FIELD(NWCTRL, STORE_UDP_OFFSET, 22, 1)
+ FIELD(NWCTRL, EXT_TSU_PORT_EN, 23, 1)
+ FIELD(NWCTRL, ONE_STEP_SYNC_MO, 24, 1)
+ FIELD(NWCTRL, PFC_CTRL , 25, 1)
+ FIELD(NWCTRL, EXT_RXQ_SEL_EN , 26, 1)
+ FIELD(NWCTRL, OSS_CORRECTION_FIELD, 27, 1)
+ FIELD(NWCTRL, SEL_MII_ON_RGMII, 28, 1)
+ FIELD(NWCTRL, TWO_PT_FIVE_GIG, 29, 1)
+ FIELD(NWCTRL, IFG_EATS_QAV_CREDIT, 30, 1)
+
 REG32(NWCFG, 0x4) /* Network Config reg */
 REG32(NWSTATUS, 0x8) /* Network Status reg */
 REG32(USERIO, 0xc) /* User IO reg */
@@ -XXX,XX +XXX,XX @@ REG32(TYPE2_COMPARE_0_WORD_1, 0x704)
 FIELD(TYPE2_COMPARE_0_WORD_1, COMPARE_VLAN_ID, 10, 1)

 /*****************************************/
-#define GEM_NWCTRL_TXSTART 0x00000200 /* Transmit Enable */
-#define GEM_NWCTRL_TXENA 0x00000008 /* Transmit Enable */
-#define GEM_NWCTRL_RXENA 0x00000004 /* Receive Enable */
-#define GEM_NWCTRL_LOCALLOOP 0x00000002 /* Local Loopback */
-
 #define GEM_NWCFG_STRIP_FCS 0x00020000 /* Strip FCS field */
 #define GEM_NWCFG_LERR_DISC 0x00010000 /* Discard RX frames with len err */
 #define GEM_NWCFG_BUFF_OFST_M 0x0000C000 /* Receive buffer offset mask */
@@ -XXX,XX +XXX,XX @@ static bool gem_can_receive(NetClientState *nc)
 s = qemu_get_nic_opaque(nc);

 /* Do nothing if receive is not enabled. */
- if (!(s->regs[R_NWCTRL] & GEM_NWCTRL_RXENA)) {
+ if (!FIELD_EX32(s->regs[R_NWCTRL], NWCTRL, ENABLE_RECEIVE)) {
 if (s->can_rx_state != 1) {
 s->can_rx_state = 1;
 DB_PRINT("can't receive - no enable\n");
@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)
 int q = 0;
77
@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)
62
+ run_idx++;
78
int q = 0;
63
+ merge_cnt = 0;
79
64
+ }
80
/* Do nothing if transmit is not enabled. */
65
+ }
81
- if (!(s->regs[R_NWCTRL] & GEM_NWCTRL_TXENA)) {
66
+}
82
+ if (!FIELD_EX32(s->regs[R_NWCTRL], NWCTRL, ENABLE_TRANSMIT)) {
67
+
83
return;
68
static int fill_header(WinDumpHeader64 *hdr, struct pa_space *ps,
69
struct va_space *vs, uint64_t KdDebuggerDataBlock,
70
KDDEBUGGER_DATA64 *kdbg, uint64_t KdVersionBlock, int nr_cpus)
71
@@ -XXX,XX +XXX,XX @@ static int fill_header(WinDumpHeader64 *hdr, struct pa_space *ps,
72
KUSD_OFFSET_PRODUCT_TYPE);
73
DBGKD_GET_VERSION64 kvb;
74
WinDumpHeader64 h;
75
- size_t i;
76
77
QEMU_BUILD_BUG_ON(KUSD_OFFSET_SUITE_MASK >= ELF2DMP_PAGE_SIZE);
78
QEMU_BUILD_BUG_ON(KUSD_OFFSET_PRODUCT_TYPE >= ELF2DMP_PAGE_SIZE);
79
@@ -XXX,XX +XXX,XX @@ static int fill_header(WinDumpHeader64 *hdr, struct pa_space *ps,
80
.RequiredDumpSpace = sizeof(h),
81
};
82
83
- for (i = 0; i < ps->block_nr; i++) {
84
- h.PhysicalMemoryBlock.NumberOfPages +=
85
- ps->block[i].size / ELF2DMP_PAGE_SIZE;
86
- h.PhysicalMemoryBlock.Run[i] = (WinDumpPhyMemRun64) {
87
- .BasePage = ps->block[i].paddr / ELF2DMP_PAGE_SIZE,
88
- .PageCount = ps->block[i].size / ELF2DMP_PAGE_SIZE,
89
- };
90
+ if (h.PhysicalMemoryBlock.NumberOfRuns <= MAX_NUMBER_OF_RUNS) {
91
+ for (size_t idx = 0; idx < ps->block_nr; idx++) {
92
+ h.PhysicalMemoryBlock.NumberOfPages +=
93
+ ps->block[idx].size / ELF2DMP_PAGE_SIZE;
94
+ h.PhysicalMemoryBlock.Run[idx] = (WinDumpPhyMemRun64) {
95
+ .BasePage = ps->block[idx].paddr / ELF2DMP_PAGE_SIZE,
96
+ .PageCount = ps->block[idx].size / ELF2DMP_PAGE_SIZE,
97
+ };
98
+ }
99
+ } else {
100
+ try_merge_runs(ps, &h.PhysicalMemoryBlock);
101
}
84
}
102
85
103
h.RequiredDumpSpace +=
86
@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)
87
while (tx_desc_get_used(desc) == 0) {
88
89
/* Do nothing if transmit is not enabled. */
90
- if (!(s->regs[R_NWCTRL] & GEM_NWCTRL_TXENA)) {
91
+ if (!FIELD_EX32(s->regs[R_NWCTRL], NWCTRL, ENABLE_TRANSMIT)) {
92
return;
93
}
94
print_gem_tx_desc(desc, q);
95
@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)
96
gem_transmit_updatestats(s, s->tx_packet, total_bytes);
97
98
/* Send the packet somewhere */
99
- if (s->phy_loop || (s->regs[R_NWCTRL] &
100
- GEM_NWCTRL_LOCALLOOP)) {
101
+ if (s->phy_loop || FIELD_EX32(s->regs[R_NWCTRL], NWCTRL,
102
+ LOOPBACK_LOCAL)) {
103
qemu_receive_packet(qemu_get_queue(s->nic), s->tx_packet,
104
total_bytes);
105
} else {
106
@@ -XXX,XX +XXX,XX @@ static void gem_write(void *opaque, hwaddr offset, uint64_t val,
107
/* Handle register write side effects */
108
switch (offset) {
109
case R_NWCTRL:
110
- if (val & GEM_NWCTRL_RXENA) {
111
+ if (FIELD_EX32(val, NWCTRL, ENABLE_RECEIVE)) {
112
for (i = 0; i < s->num_priority_queues; ++i) {
113
gem_get_rx_desc(s, i);
114
}
115
}
116
- if (val & GEM_NWCTRL_TXSTART) {
117
+ if (FIELD_EX32(val, NWCTRL, TRANSMIT_START)) {
118
gem_transmit(s);
119
}
120
- if (!(val & GEM_NWCTRL_TXENA)) {
121
+ if (!(FIELD_EX32(val, NWCTRL, ENABLE_TRANSMIT))) {
122
/* Reset to start of Q when transmit disabled. */
123
for (i = 0; i < s->num_priority_queues; i++) {
124
s->tx_desc_addr[i] = gem_get_tx_queue_base_addr(s, i);
104
--
125
--
105
2.34.1
126
2.34.1
diff view generated by jsdifflib
From: Luc Michel <luc.michel@amd.com>

Use the FIELD macro to describe the NWCFG register fields.

Signed-off-by: Luc Michel <luc.michel@amd.com>
Reviewed-by: sai.pavan.boddu@amd.com
Message-id: 20231017194422.4124691-5-luc.michel@amd.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/net/cadence_gem.c | 60 ++++++++++++++++++++++++++++----------------
 1 file changed, 39 insertions(+), 21 deletions(-)

diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -XXX,XX +XXX,XX @@ REG32(NWCTRL, 0x0) /* Network Control reg */
     FIELD(NWCTRL, IFG_EATS_QAV_CREDIT, 30, 1)

 REG32(NWCFG, 0x4) /* Network Config reg */
+    FIELD(NWCFG, SPEED, 0, 1)
+    FIELD(NWCFG, FULL_DUPLEX, 1, 1)
+    FIELD(NWCFG, DISCARD_NON_VLAN_FRAMES, 2, 1)
+    FIELD(NWCFG, JUMBO_FRAMES, 3, 1)
+    FIELD(NWCFG, PROMISC, 4, 1)
+    FIELD(NWCFG, NO_BROADCAST, 5, 1)
+    FIELD(NWCFG, MULTICAST_HASH_EN, 6, 1)
+    FIELD(NWCFG, UNICAST_HASH_EN, 7, 1)
+    FIELD(NWCFG, RECV_1536_BYTE_FRAMES, 8, 1)
+    FIELD(NWCFG, EXTERNAL_ADDR_MATCH_EN, 9, 1)
+    FIELD(NWCFG, GIGABIT_MODE_ENABLE, 10, 1)
+    FIELD(NWCFG, PCS_SELECT, 11, 1)
+    FIELD(NWCFG, RETRY_TEST, 12, 1)
+    FIELD(NWCFG, PAUSE_ENABLE, 13, 1)
+    FIELD(NWCFG, RECV_BUF_OFFSET, 14, 2)
+    FIELD(NWCFG, LEN_ERR_DISCARD, 16, 1)
+    FIELD(NWCFG, FCS_REMOVE, 17, 1)
+    FIELD(NWCFG, MDC_CLOCK_DIV, 18, 3)
+    FIELD(NWCFG, DATA_BUS_WIDTH, 21, 2)
+    FIELD(NWCFG, DISABLE_COPY_PAUSE_FRAMES, 23, 1)
+    FIELD(NWCFG, RECV_CSUM_OFFLOAD_EN, 24, 1)
+    FIELD(NWCFG, EN_HALF_DUPLEX_RX, 25, 1)
+    FIELD(NWCFG, IGNORE_RX_FCS, 26, 1)
+    FIELD(NWCFG, SGMII_MODE_ENABLE, 27, 1)
+    FIELD(NWCFG, IPG_STRETCH_ENABLE, 28, 1)
+    FIELD(NWCFG, NSP_ACCEPT, 29, 1)
+    FIELD(NWCFG, IGNORE_IPG_RX_ER, 30, 1)
+    FIELD(NWCFG, UNI_DIRECTION_ENABLE, 31, 1)
+
 REG32(NWSTATUS, 0x8) /* Network Status reg */
 REG32(USERIO, 0xc) /* User IO reg */
 REG32(DMACFG, 0x10) /* DMA Control reg */
@@ -XXX,XX +XXX,XX @@ REG32(TYPE2_COMPARE_0_WORD_1, 0x704)
 FIELD(TYPE2_COMPARE_0_WORD_1, COMPARE_VLAN_ID, 10, 1)

 /*****************************************/
-#define GEM_NWCFG_STRIP_FCS 0x00020000 /* Strip FCS field */
-#define GEM_NWCFG_LERR_DISC 0x00010000 /* Discard RX frames with len err */
-#define GEM_NWCFG_BUFF_OFST_M 0x0000C000 /* Receive buffer offset mask */
-#define GEM_NWCFG_BUFF_OFST_S 14 /* Receive buffer offset shift */
-#define GEM_NWCFG_RCV_1538 0x00000100 /* Receive 1538 bytes frame */
-#define GEM_NWCFG_UCAST_HASH 0x00000080 /* accept unicast if hash match */
-#define GEM_NWCFG_MCAST_HASH 0x00000040 /* accept multicast if hash match */
-#define GEM_NWCFG_BCAST_REJ 0x00000020 /* Reject broadcast packets */
-#define GEM_NWCFG_PROMISC 0x00000010 /* Accept all packets */
-#define GEM_NWCFG_JUMBO_FRAME 0x00000008 /* Jumbo Frames enable */
-
 #define GEM_DMACFG_ADDR_64B (1U << 30)
 #define GEM_DMACFG_TX_BD_EXT (1U << 29)
 #define GEM_DMACFG_RX_BD_EXT (1U << 28)
@@ -XXX,XX +XXX,XX @@ static const uint8_t broadcast_addr[] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
 static uint32_t gem_get_max_buf_len(CadenceGEMState *s, bool tx)
 {
     uint32_t size;
-    if (s->regs[R_NWCFG] & GEM_NWCFG_JUMBO_FRAME) {
+    if (FIELD_EX32(s->regs[R_NWCFG], NWCFG, JUMBO_FRAMES)) {
         size = s->regs[R_JUMBO_MAX_LEN];
         if (size > s->jumbo_max_len) {
             size = s->jumbo_max_len;
@@ -XXX,XX +XXX,XX @@ static uint32_t gem_get_max_buf_len(CadenceGEMState *s, bool tx)
     } else if (tx) {
         size = 1518;
     } else {
-        size = s->regs[R_NWCFG] & GEM_NWCFG_RCV_1538 ? 1538 : 1518;
+        size = FIELD_EX32(s->regs[R_NWCFG],
+                          NWCFG, RECV_1536_BYTE_FRAMES) ? 1538 : 1518;
     }
     return size;
 }

@@ -XXX,XX +XXX,XX @@ static int gem_mac_address_filter(CadenceGEMState *s, const uint8_t *packet)
     int i, is_mc;

     /* Promiscuous mode? */
-    if (s->regs[R_NWCFG] & GEM_NWCFG_PROMISC) {
+    if (FIELD_EX32(s->regs[R_NWCFG], NWCFG, PROMISC)) {
         return GEM_RX_PROMISCUOUS_ACCEPT;
     }

     if (!memcmp(packet, broadcast_addr, 6)) {
         /* Reject broadcast packets? */
-        if (s->regs[R_NWCFG] & GEM_NWCFG_BCAST_REJ) {
+        if (FIELD_EX32(s->regs[R_NWCFG], NWCFG, NO_BROADCAST)) {
             return GEM_RX_REJECT;
         }
         return GEM_RX_BROADCAST_ACCEPT;
@@ -XXX,XX +XXX,XX @@ static int gem_mac_address_filter(CadenceGEMState *s, const uint8_t *packet)

     /* Accept packets -w- hash match? */
     is_mc = is_multicast_ether_addr(packet);
-    if ((is_mc && (s->regs[R_NWCFG] & GEM_NWCFG_MCAST_HASH)) ||
-        (!is_mc && (s->regs[R_NWCFG] & GEM_NWCFG_UCAST_HASH))) {
+    if ((is_mc && (FIELD_EX32(s->regs[R_NWCFG], NWCFG, MULTICAST_HASH_EN))) ||
+        (!is_mc && FIELD_EX32(s->regs[R_NWCFG], NWCFG, UNICAST_HASH_EN))) {
         uint64_t buckets;
         unsigned hash_index;

@@ -XXX,XX +XXX,XX @@ static ssize_t gem_receive(NetClientState *nc, const uint8_t *buf, size_t size)
     }

     /* Discard packets with receive length error enabled ? */
-    if (s->regs[R_NWCFG] & GEM_NWCFG_LERR_DISC) {
+    if (FIELD_EX32(s->regs[R_NWCFG], NWCFG, LEN_ERR_DISCARD)) {
         unsigned type_len;

         /* Fish the ethertype / length field out of the RX packet */
@@ -XXX,XX +XXX,XX @@ static ssize_t gem_receive(NetClientState *nc, const uint8_t *buf, size_t size)
     /*
      * Determine configured receive buffer offset (probably 0)
      */
-    rxbuf_offset = (s->regs[R_NWCFG] & GEM_NWCFG_BUFF_OFST_M) >>
-                   GEM_NWCFG_BUFF_OFST_S;
+    rxbuf_offset = FIELD_EX32(s->regs[R_NWCFG], NWCFG, RECV_BUF_OFFSET);

     /* The configure size of each receive buffer. Determines how many
      * buffers needed to hold this packet.
@@ -XXX,XX +XXX,XX @@ static ssize_t gem_receive(NetClientState *nc, const uint8_t *buf, size_t size)
     }

     /* Strip of FCS field ? (usually yes) */
-    if (s->regs[R_NWCFG] & GEM_NWCFG_STRIP_FCS) {
+    if (FIELD_EX32(s->regs[R_NWCFG], NWCFG, FCS_REMOVE)) {
         rxbuf_ptr = (void *)buf;
     } else {
         unsigned crc_val;
--
2.34.1
diff view generated by jsdifflib
1
Update our AArch64 ID register field definitions from the 2023-06
1
From: Luc Michel <luc.michel@amd.com>
2
system register XML release:
3
https://developer.arm.com/documentation/ddi0601/2023-06/
4
2
3
Use de FIELD macro to describe the DMACFG register fields.
4
5
Signed-off-by: Luc Michel <luc.michel@amd.com>
6
Reviewed-by: sai.pavan.boddu@amd.com
7
Message-id: 20231017194422.4124691-6-luc.michel@amd.com
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
---
9
---
8
target/arm/cpu.h | 23 +++++++++++++++++++++++
10
hw/net/cadence_gem.c | 48 ++++++++++++++++++++++++++++----------------
9
1 file changed, 23 insertions(+)
11
1 file changed, 31 insertions(+), 17 deletions(-)
10
12
11
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
13
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
12
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/cpu.h
15
--- a/hw/net/cadence_gem.c
14
+++ b/target/arm/cpu.h
16
+++ b/hw/net/cadence_gem.c
15
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ISAR0, SHA1, 8, 4)
17
@@ -XXX,XX +XXX,XX @@ REG32(NWCFG, 0x4) /* Network Config reg */
16
FIELD(ID_AA64ISAR0, SHA2, 12, 4)
18
17
FIELD(ID_AA64ISAR0, CRC32, 16, 4)
19
REG32(NWSTATUS, 0x8) /* Network Status reg */
18
FIELD(ID_AA64ISAR0, ATOMIC, 20, 4)
20
REG32(USERIO, 0xc) /* User IO reg */
19
+FIELD(ID_AA64ISAR0, TME, 24, 4)
21
+
20
FIELD(ID_AA64ISAR0, RDM, 28, 4)
22
REG32(DMACFG, 0x10) /* DMA Control reg */
21
FIELD(ID_AA64ISAR0, SHA3, 32, 4)
23
+ FIELD(DMACFG, SEND_BCAST_TO_ALL_QS, 31, 1)
22
FIELD(ID_AA64ISAR0, SM3, 36, 4)
24
+ FIELD(DMACFG, DMA_ADDR_BUS_WIDTH, 30, 1)
23
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ISAR2, APA3, 12, 4)
25
+ FIELD(DMACFG, TX_BD_EXT_MODE_EN , 29, 1)
24
FIELD(ID_AA64ISAR2, MOPS, 16, 4)
26
+ FIELD(DMACFG, RX_BD_EXT_MODE_EN , 28, 1)
25
FIELD(ID_AA64ISAR2, BC, 20, 4)
27
+ FIELD(DMACFG, FORCE_MAX_AMBA_BURST_TX, 26, 1)
26
FIELD(ID_AA64ISAR2, PAC_FRAC, 24, 4)
28
+ FIELD(DMACFG, FORCE_MAX_AMBA_BURST_RX, 25, 1)
27
+FIELD(ID_AA64ISAR2, CLRBHB, 28, 4)
29
+ FIELD(DMACFG, FORCE_DISCARD_ON_ERR, 24, 1)
28
+FIELD(ID_AA64ISAR2, SYSREG_128, 32, 4)
30
+ FIELD(DMACFG, RX_BUF_SIZE, 16, 8)
29
+FIELD(ID_AA64ISAR2, SYSINSTR_128, 36, 4)
31
+ FIELD(DMACFG, CRC_ERROR_REPORT, 13, 1)
30
+FIELD(ID_AA64ISAR2, PRFMSLC, 40, 4)
32
+ FIELD(DMACFG, INF_LAST_DBUF_SIZE_EN, 12, 1)
31
+FIELD(ID_AA64ISAR2, RPRFM, 48, 4)
33
+ FIELD(DMACFG, TX_PBUF_CSUM_OFFLOAD, 11, 1)
32
+FIELD(ID_AA64ISAR2, CSSC, 52, 4)
34
+ FIELD(DMACFG, TX_PBUF_SIZE, 10, 1)
33
+FIELD(ID_AA64ISAR2, ATS1A, 60, 4)
35
+ FIELD(DMACFG, RX_PBUF_SIZE, 8, 2)
34
36
+ FIELD(DMACFG, ENDIAN_SWAP_PACKET, 7, 1)
35
FIELD(ID_AA64PFR0, EL0, 0, 4)
37
+ FIELD(DMACFG, ENDIAN_SWAP_MGNT, 6, 1)
36
FIELD(ID_AA64PFR0, EL1, 4, 4)
38
+ FIELD(DMACFG, HDR_DATA_SPLIT_EN, 5, 1)
37
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64PFR1, SME, 24, 4)
39
+ FIELD(DMACFG, AMBA_BURST_LEN , 0, 5)
38
FIELD(ID_AA64PFR1, RNDR_TRAP, 28, 4)
40
+#define GEM_DMACFG_RBUFSZ_MUL 64 /* DMA RX Buffer Size multiplier */
39
FIELD(ID_AA64PFR1, CSV2_FRAC, 32, 4)
41
+
40
FIELD(ID_AA64PFR1, NMI, 36, 4)
42
REG32(TXSTATUS, 0x14) /* TX Status reg */
41
+FIELD(ID_AA64PFR1, MTE_FRAC, 40, 4)
43
REG32(RXQBASE, 0x18) /* RX Q Base address reg */
42
+FIELD(ID_AA64PFR1, GCS, 44, 4)
44
REG32(TXQBASE, 0x1c) /* TX Q Base address reg */
43
+FIELD(ID_AA64PFR1, THE, 48, 4)
45
@@ -XXX,XX +XXX,XX @@ REG32(TYPE2_COMPARE_0_WORD_1, 0x704)
44
+FIELD(ID_AA64PFR1, MTEX, 52, 4)
46
FIELD(TYPE2_COMPARE_0_WORD_1, COMPARE_VLAN_ID, 10, 1)
45
+FIELD(ID_AA64PFR1, DF2, 56, 4)
47
46
+FIELD(ID_AA64PFR1, PFAR, 60, 4)
48
/*****************************************/
47
49
-#define GEM_DMACFG_ADDR_64B (1U << 30)
48
FIELD(ID_AA64MMFR0, PARANGE, 0, 4)
50
-#define GEM_DMACFG_TX_BD_EXT (1U << 29)
49
FIELD(ID_AA64MMFR0, ASIDBITS, 4, 4)
51
-#define GEM_DMACFG_RX_BD_EXT (1U << 28)
50
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64MMFR1, AFP, 44, 4)
52
-#define GEM_DMACFG_RBUFSZ_M 0x00FF0000 /* DMA RX Buffer Size mask */
51
FIELD(ID_AA64MMFR1, NTLBPA, 48, 4)
53
-#define GEM_DMACFG_RBUFSZ_S 16 /* DMA RX Buffer Size shift */
52
FIELD(ID_AA64MMFR1, TIDCP1, 52, 4)
54
-#define GEM_DMACFG_RBUFSZ_MUL 64 /* DMA RX Buffer Size multiplier */
53
FIELD(ID_AA64MMFR1, CMOW, 56, 4)
55
-#define GEM_DMACFG_TXCSUM_OFFL 0x00000800 /* Transmit checksum offload */
54
+FIELD(ID_AA64MMFR1, ECBHB, 60, 4)
56
55
57
#define GEM_TXSTATUS_TXCMPL 0x00000020 /* Transmit Complete */
56
FIELD(ID_AA64MMFR2, CNP, 0, 4)
58
#define GEM_TXSTATUS_USED 0x00000001 /* sw owned descriptor encountered */
57
FIELD(ID_AA64MMFR2, UAO, 4, 4)
59
@@ -XXX,XX +XXX,XX @@ static inline uint64_t tx_desc_get_buffer(CadenceGEMState *s, uint32_t *desc)
58
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64DFR0, DEBUGVER, 0, 4)
60
{
59
FIELD(ID_AA64DFR0, TRACEVER, 4, 4)
61
uint64_t ret = desc[0];
60
FIELD(ID_AA64DFR0, PMUVER, 8, 4)
62
61
FIELD(ID_AA64DFR0, BRPS, 12, 4)
63
- if (s->regs[R_DMACFG] & GEM_DMACFG_ADDR_64B) {
62
+FIELD(ID_AA64DFR0, PMSS, 16, 4)
64
+ if (FIELD_EX32(s->regs[R_DMACFG], DMACFG, DMA_ADDR_BUS_WIDTH)) {
63
FIELD(ID_AA64DFR0, WRPS, 20, 4)
65
ret |= (uint64_t)desc[2] << 32;
64
+FIELD(ID_AA64DFR0, SEBEP, 24, 4)
66
}
65
FIELD(ID_AA64DFR0, CTX_CMPS, 28, 4)
67
return ret;
66
FIELD(ID_AA64DFR0, PMSVER, 32, 4)
68
@@ -XXX,XX +XXX,XX @@ static inline uint64_t rx_desc_get_buffer(CadenceGEMState *s, uint32_t *desc)
67
FIELD(ID_AA64DFR0, DOUBLELOCK, 36, 4)
69
{
68
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64DFR0, TRACEFILT, 40, 4)
70
uint64_t ret = desc[0] & ~0x3UL;
69
FIELD(ID_AA64DFR0, TRACEBUFFER, 44, 4)
71
70
FIELD(ID_AA64DFR0, MTPMU, 48, 4)
72
- if (s->regs[R_DMACFG] & GEM_DMACFG_ADDR_64B) {
71
FIELD(ID_AA64DFR0, BRBE, 52, 4)
73
+ if (FIELD_EX32(s->regs[R_DMACFG], DMACFG, DMA_ADDR_BUS_WIDTH)) {
72
+FIELD(ID_AA64DFR0, EXTTRCBUFF, 56, 4)
74
ret |= (uint64_t)desc[2] << 32;
73
FIELD(ID_AA64DFR0, HPMN0, 60, 4)
75
}
74
76
return ret;
75
FIELD(ID_AA64ZFR0, SVEVER, 0, 4)
77
@@ -XXX,XX +XXX,XX @@ static inline int gem_get_desc_len(CadenceGEMState *s, bool rx_n_tx)
76
FIELD(ID_AA64ZFR0, AES, 4, 4)
78
{
77
FIELD(ID_AA64ZFR0, BITPERM, 16, 4)
79
int ret = 2;
78
FIELD(ID_AA64ZFR0, BFLOAT16, 20, 4)
80
79
+FIELD(ID_AA64ZFR0, B16B16, 24, 4)
81
- if (s->regs[R_DMACFG] & GEM_DMACFG_ADDR_64B) {
80
FIELD(ID_AA64ZFR0, SHA3, 32, 4)
82
+ if (FIELD_EX32(s->regs[R_DMACFG], DMACFG, DMA_ADDR_BUS_WIDTH)) {
81
FIELD(ID_AA64ZFR0, SM4, 40, 4)
83
ret += 2;
82
FIELD(ID_AA64ZFR0, I8MM, 44, 4)
84
}
83
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ZFR0, F32MM, 52, 4)
85
- if (s->regs[R_DMACFG] & (rx_n_tx ? GEM_DMACFG_RX_BD_EXT
84
FIELD(ID_AA64ZFR0, F64MM, 56, 4)
86
- : GEM_DMACFG_TX_BD_EXT)) {
85
87
+ if (s->regs[R_DMACFG] & (rx_n_tx ? R_DMACFG_RX_BD_EXT_MODE_EN_MASK
86
FIELD(ID_AA64SMFR0, F32F32, 32, 1)
88
+ : R_DMACFG_TX_BD_EXT_MODE_EN_MASK)) {
87
+FIELD(ID_AA64SMFR0, BI32I32, 33, 1)
89
ret += 2;
88
FIELD(ID_AA64SMFR0, B16F32, 34, 1)
90
}
89
FIELD(ID_AA64SMFR0, F16F32, 35, 1)
91
90
FIELD(ID_AA64SMFR0, I8I32, 36, 4)
92
@@ -XXX,XX +XXX,XX @@ static hwaddr gem_get_desc_addr(CadenceGEMState *s, bool tx, int q)
91
+FIELD(ID_AA64SMFR0, F16F16, 42, 1)
93
{
92
+FIELD(ID_AA64SMFR0, B16B16, 43, 1)
94
hwaddr desc_addr = 0;
93
+FIELD(ID_AA64SMFR0, I16I32, 44, 4)
95
94
FIELD(ID_AA64SMFR0, F64F64, 48, 1)
96
- if (s->regs[R_DMACFG] & GEM_DMACFG_ADDR_64B) {
95
FIELD(ID_AA64SMFR0, I16I64, 52, 4)
97
+ if (FIELD_EX32(s->regs[R_DMACFG], DMACFG, DMA_ADDR_BUS_WIDTH)) {
96
FIELD(ID_AA64SMFR0, SMEVER, 56, 4)
98
desc_addr = s->regs[tx ? R_TBQPH : R_RBQPH];
99
}
100
desc_addr <<= 32;
101
@@ -XXX,XX +XXX,XX @@ static ssize_t gem_receive(NetClientState *nc, const uint8_t *buf, size_t size)
102
/* The configure size of each receive buffer. Determines how many
103
* buffers needed to hold this packet.
104
*/
105
- rxbufsize = ((s->regs[R_DMACFG] & GEM_DMACFG_RBUFSZ_M) >>
106
- GEM_DMACFG_RBUFSZ_S) * GEM_DMACFG_RBUFSZ_MUL;
107
+ rxbufsize = FIELD_EX32(s->regs[R_DMACFG], DMACFG, RX_BUF_SIZE);
108
+ rxbufsize *= GEM_DMACFG_RBUFSZ_MUL;
109
+
110
bytes_to_copy = size;
111
112
/* Hardware allows a zero value here but warns against it. To avoid QEMU
113
@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)
114
gem_update_int_status(s);
115
116
/* Is checksum offload enabled? */
117
- if (s->regs[R_DMACFG] & GEM_DMACFG_TXCSUM_OFFL) {
118
+ if (FIELD_EX32(s->regs[R_DMACFG], DMACFG, TX_PBUF_CSUM_OFFLOAD)) {
119
net_checksum_calculate(s->tx_packet, total_bytes, CSUM_ALL);
120
}
121
122
@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)
123
124
/* read next descriptor */
125
if (tx_desc_get_wrap(desc)) {
126
- if (s->regs[R_DMACFG] & GEM_DMACFG_ADDR_64B) {
127
+ if (FIELD_EX32(s->regs[R_DMACFG], DMACFG, DMA_ADDR_BUS_WIDTH)) {
128
packet_desc_addr = s->regs[R_TBQPH];
129
packet_desc_addr <<= 32;
130
} else {
97
--
131
--
98
2.34.1
132
2.34.1
diff view generated by jsdifflib
1
From: Fabian Vogt <fvogt@suse.de>
1
From: Luc Michel <luc.michel@amd.com>
2
2
3
Just like d7ef5e16a17c sets SCR_EL3.HXEn for FEAT_HCX, this commit
3
Use de FIELD macro to describe the TXSTATUS and RXSTATUS register
4
handles SCR_EL3.FGTEn for FEAT_FGT:
4
fields.
5
5
6
When we direct boot a kernel on a CPU which emulates EL3, we need to
6
Signed-off-by: Luc Michel <luc.michel@amd.com>
7
set up the EL3 system registers as the Linux kernel documentation
7
Reviewed-by: sai.pavan.boddu@amd.com
8
specifies:
8
Message-id: 20231017194422.4124691-7-luc.michel@amd.com
9
https://www.kernel.org/doc/Documentation/arm64/booting.rst
10
11
> For CPUs with the Fine Grained Traps (FEAT_FGT) extension present:
12
> - If EL3 is present and the kernel is entered at EL2:
13
> - SCR_EL3.FGTEn (bit 27) must be initialised to 0b1.
14
15
Cc: qemu-stable@nongnu.org
16
Signed-off-by: Fabian Vogt <fvogt@suse.de>
17
Message-id: 4831384.GXAFRqVoOG@linux-e202.suse.de
18
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
---
10
---
21
hw/arm/boot.c | 4 ++++
11
hw/net/cadence_gem.c | 34 +++++++++++++++++++++++++---------
22
1 file changed, 4 insertions(+)
12
1 file changed, 25 insertions(+), 9 deletions(-)
23
13
24
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
14
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
25
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/arm/boot.c
16
--- a/hw/net/cadence_gem.c
27
+++ b/hw/arm/boot.c
17
+++ b/hw/net/cadence_gem.c
28
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
18
@@ -XXX,XX +XXX,XX @@ REG32(DMACFG, 0x10) /* DMA Control reg */
29
if (cpu_isar_feature(aa64_hcx, cpu)) {
19
#define GEM_DMACFG_RBUFSZ_MUL 64 /* DMA RX Buffer Size multiplier */
30
env->cp15.scr_el3 |= SCR_HXEN;
20
31
}
21
REG32(TXSTATUS, 0x14) /* TX Status reg */
32
+ if (cpu_isar_feature(aa64_fgt, cpu)) {
22
+ FIELD(TXSTATUS, TX_USED_BIT_READ_MIDFRAME, 12, 1)
33
+ env->cp15.scr_el3 |= SCR_FGTEN;
23
+ FIELD(TXSTATUS, TX_FRAME_TOO_LARGE, 11, 1)
34
+ }
24
+ FIELD(TXSTATUS, TX_DMA_LOCKUP, 10, 1)
25
+ FIELD(TXSTATUS, TX_MAC_LOCKUP, 9, 1)
26
+ FIELD(TXSTATUS, RESP_NOT_OK, 8, 1)
27
+ FIELD(TXSTATUS, LATE_COLLISION, 7, 1)
28
+ FIELD(TXSTATUS, TRANSMIT_UNDER_RUN, 6, 1)
29
+ FIELD(TXSTATUS, TRANSMIT_COMPLETE, 5, 1)
30
+ FIELD(TXSTATUS, AMBA_ERROR, 4, 1)
31
+ FIELD(TXSTATUS, TRANSMIT_GO, 3, 1)
32
+ FIELD(TXSTATUS, RETRY_LIMIT, 2, 1)
33
+ FIELD(TXSTATUS, COLLISION, 1, 1)
34
+ FIELD(TXSTATUS, USED_BIT_READ, 0, 1)
35
+
35
+
36
/* AArch64 kernels never boot in secure mode */
36
REG32(RXQBASE, 0x18) /* RX Q Base address reg */
37
assert(!info->secure_boot);
37
REG32(TXQBASE, 0x1c) /* TX Q Base address reg */
38
/* This hook is only supported for AArch32 currently:
38
REG32(RXSTATUS, 0x20) /* RX Status reg */
39
+ FIELD(RXSTATUS, RX_DMA_LOCKUP, 5, 1)
40
+ FIELD(RXSTATUS, RX_MAC_LOCKUP, 4, 1)
41
+ FIELD(RXSTATUS, RESP_NOT_OK, 3, 1)
42
+ FIELD(RXSTATUS, RECEIVE_OVERRUN, 2, 1)
43
+ FIELD(RXSTATUS, FRAME_RECEIVED, 1, 1)
44
+ FIELD(RXSTATUS, BUF_NOT_AVAILABLE, 0, 1)
45
+
46
REG32(ISR, 0x24) /* Interrupt Status reg */
47
REG32(IER, 0x28) /* Interrupt Enable reg */
48
REG32(IDR, 0x2c) /* Interrupt Disable reg */
49
@@ -XXX,XX +XXX,XX @@ REG32(TYPE2_COMPARE_0_WORD_1, 0x704)
50
51
/*****************************************/
52
53
-#define GEM_TXSTATUS_TXCMPL 0x00000020 /* Transmit Complete */
54
-#define GEM_TXSTATUS_USED 0x00000001 /* sw owned descriptor encountered */
55
-
56
-#define GEM_RXSTATUS_FRMRCVD 0x00000002 /* Frame received */
57
-#define GEM_RXSTATUS_NOBUF 0x00000001 /* Buffer unavailable */
58
59
/* GEM_ISR GEM_IER GEM_IDR GEM_IMR */
60
#define GEM_INT_TXCMPL 0x00000080 /* Transmit Complete */
61
@@ -XXX,XX +XXX,XX @@ static void gem_get_rx_desc(CadenceGEMState *s, int q)
62
/* Descriptor owned by software ? */
63
if (rx_desc_get_ownership(s->rx_desc[q]) == 1) {
64
DB_PRINT("descriptor 0x%" HWADDR_PRIx " owned by sw.\n", desc_addr);
65
- s->regs[R_RXSTATUS] |= GEM_RXSTATUS_NOBUF;
66
+ s->regs[R_RXSTATUS] |= R_RXSTATUS_BUF_NOT_AVAILABLE_MASK;
67
gem_set_isr(s, q, GEM_INT_RXUSED);
68
/* Handle interrupt consequences */
69
gem_update_int_status(s);
70
@@ -XXX,XX +XXX,XX @@ static ssize_t gem_receive(NetClientState *nc, const uint8_t *buf, size_t size)
71
/* Count it */
72
gem_receive_updatestats(s, buf, size);
73
74
- s->regs[R_RXSTATUS] |= GEM_RXSTATUS_FRMRCVD;
75
+ s->regs[R_RXSTATUS] |= R_RXSTATUS_FRAME_RECEIVED_MASK;
76
gem_set_isr(s, q, GEM_INT_RXCMPL);
77
78
/* Handle interrupt consequences */
79
@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)
80
}
81
DB_PRINT("TX descriptor next: 0x%08x\n", s->tx_desc_addr[q]);
82
83
- s->regs[R_TXSTATUS] |= GEM_TXSTATUS_TXCMPL;
84
+ s->regs[R_TXSTATUS] |= R_TXSTATUS_TRANSMIT_COMPLETE_MASK;
85
gem_set_isr(s, q, GEM_INT_TXCMPL);
86
87
/* Handle interrupt consequences */
88
@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)
89
}
90
91
if (tx_desc_get_used(desc)) {
92
- s->regs[R_TXSTATUS] |= GEM_TXSTATUS_USED;
93
+ s->regs[R_TXSTATUS] |= R_TXSTATUS_USED_BIT_READ_MASK;
94
/* IRQ TXUSED is defined only for queue 0 */
95
if (q == 0) {
96
gem_set_isr(s, 0, GEM_INT_TXUSED);
39
--
97
--
40
2.34.1
98
2.34.1
diff view generated by jsdifflib
From: Luc Michel <luc.michel@amd.com>

Use the FIELD macro to describe the IRQ related register fields.

Signed-off-by: Luc Michel <luc.michel@amd.com>
Reviewed-by: sai.pavan.boddu@amd.com
Message-id: 20231017194422.4124691-8-luc.michel@amd.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/net/cadence_gem.c | 51 +++++++++++++++++++++++++++++++++-----------
1 file changed, 39 insertions(+), 12 deletions(-)

diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -XXX,XX +XXX,XX @@ REG32(RXSTATUS, 0x20) /* RX Status reg */
FIELD(RXSTATUS, BUF_NOT_AVAILABLE, 0, 1)

REG32(ISR, 0x24) /* Interrupt Status reg */
+ FIELD(ISR, TX_LOCKUP, 31, 1)
+ FIELD(ISR, RX_LOCKUP, 30, 1)
+ FIELD(ISR, TSU_TIMER, 29, 1)
+ FIELD(ISR, WOL, 28, 1)
+ FIELD(ISR, RECV_LPI, 27, 1)
+ FIELD(ISR, TSU_SEC_INCR, 26, 1)
+ FIELD(ISR, PTP_PDELAY_RESP_XMIT, 25, 1)
+ FIELD(ISR, PTP_PDELAY_REQ_XMIT, 24, 1)
+ FIELD(ISR, PTP_PDELAY_RESP_RECV, 23, 1)
+ FIELD(ISR, PTP_PDELAY_REQ_RECV, 22, 1)
+ FIELD(ISR, PTP_SYNC_XMIT, 21, 1)
+ FIELD(ISR, PTP_DELAY_REQ_XMIT, 20, 1)
+ FIELD(ISR, PTP_SYNC_RECV, 19, 1)
+ FIELD(ISR, PTP_DELAY_REQ_RECV, 18, 1)
+ FIELD(ISR, PCS_LP_PAGE_RECV, 17, 1)
+ FIELD(ISR, PCS_AN_COMPLETE, 16, 1)
+ FIELD(ISR, EXT_IRQ, 15, 1)
+ FIELD(ISR, PAUSE_FRAME_XMIT, 14, 1)
+ FIELD(ISR, PAUSE_TIME_ELAPSED, 13, 1)
+ FIELD(ISR, PAUSE_FRAME_RECV, 12, 1)
+ FIELD(ISR, RESP_NOT_OK, 11, 1)
+ FIELD(ISR, RECV_OVERRUN, 10, 1)
+ FIELD(ISR, LINK_CHANGE, 9, 1)
+ FIELD(ISR, USXGMII_INT, 8, 1)
+ FIELD(ISR, XMIT_COMPLETE, 7, 1)
+ FIELD(ISR, AMBA_ERROR, 6, 1)
+ FIELD(ISR, RETRY_EXCEEDED, 5, 1)
+ FIELD(ISR, XMIT_UNDER_RUN, 4, 1)
+ FIELD(ISR, TX_USED, 3, 1)
+ FIELD(ISR, RX_USED, 2, 1)
+ FIELD(ISR, RECV_COMPLETE, 1, 1)
+ FIELD(ISR, MGNT_FRAME_SENT, 0, 1)
REG32(IER, 0x28) /* Interrupt Enable reg */
REG32(IDR, 0x2c) /* Interrupt Disable reg */
REG32(IMR, 0x30) /* Interrupt Mask reg */
+
REG32(PHYMNTNC, 0x34) /* Phy Maintenance reg */
REG32(RXPAUSE, 0x38) /* RX Pause Time reg */
REG32(TXPAUSE, 0x3c) /* TX Pause Time reg */
@@ -XXX,XX +XXX,XX @@ REG32(TYPE2_COMPARE_0_WORD_1, 0x704)
/*****************************************/


-/* GEM_ISR GEM_IER GEM_IDR GEM_IMR */
-#define GEM_INT_TXCMPL 0x00000080 /* Transmit Complete */
-#define GEM_INT_AMBA_ERR 0x00000040
-#define GEM_INT_TXUSED 0x00000008
-#define GEM_INT_RXUSED 0x00000004
-#define GEM_INT_RXCMPL 0x00000002

#define GEM_PHYMNTNC_OP_R 0x20000000 /* read operation */
#define GEM_PHYMNTNC_OP_W 0x10000000 /* write operation */
@@ -XXX,XX +XXX,XX @@ static void gem_get_rx_desc(CadenceGEMState *s, int q)
if (rx_desc_get_ownership(s->rx_desc[q]) == 1) {
DB_PRINT("descriptor 0x%" HWADDR_PRIx " owned by sw.\n", desc_addr);
s->regs[R_RXSTATUS] |= R_RXSTATUS_BUF_NOT_AVAILABLE_MASK;
- gem_set_isr(s, q, GEM_INT_RXUSED);
+ gem_set_isr(s, q, R_ISR_RX_USED_MASK);
/* Handle interrupt consequences */
gem_update_int_status(s);
}
@@ -XXX,XX +XXX,XX @@ static ssize_t gem_receive(NetClientState *nc, const uint8_t *buf, size_t size)

if (size > gem_get_max_buf_len(s, false)) {
qemu_log_mask(LOG_GUEST_ERROR, "rx frame too long\n");
- gem_set_isr(s, q, GEM_INT_AMBA_ERR);
+ gem_set_isr(s, q, R_ISR_AMBA_ERROR_MASK);
return -1;
}

@@ -XXX,XX +XXX,XX @@ static ssize_t gem_receive(NetClientState *nc, const uint8_t *buf, size_t size)
gem_receive_updatestats(s, buf, size);

s->regs[R_RXSTATUS] |= R_RXSTATUS_FRAME_RECEIVED_MASK;
- gem_set_isr(s, q, GEM_INT_RXCMPL);
+ gem_set_isr(s, q, R_ISR_RECV_COMPLETE_MASK);

/* Handle interrupt consequences */
gem_update_int_status(s);
@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)
HWADDR_PRIx " too large: size 0x%x space 0x%zx\n",
packet_desc_addr, tx_desc_get_length(desc),
gem_get_max_buf_len(s, true) - (p - s->tx_packet));
- gem_set_isr(s, q, GEM_INT_AMBA_ERR);
+ gem_set_isr(s, q, R_ISR_AMBA_ERROR_MASK);
break;
}

@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)
DB_PRINT("TX descriptor next: 0x%08x\n", s->tx_desc_addr[q]);

s->regs[R_TXSTATUS] |= R_TXSTATUS_TRANSMIT_COMPLETE_MASK;
- gem_set_isr(s, q, GEM_INT_TXCMPL);
+ gem_set_isr(s, q, R_ISR_XMIT_COMPLETE_MASK);

/* Handle interrupt consequences */
gem_update_int_status(s);
@@ -XXX,XX +XXX,XX @@ static void gem_transmit(CadenceGEMState *s)
s->regs[R_TXSTATUS] |= R_TXSTATUS_USED_BIT_READ_MASK;
/* IRQ TXUSED is defined only for queue 0 */
if (q == 0) {
- gem_set_isr(s, 0, GEM_INT_TXUSED);
+ gem_set_isr(s, 0, R_ISR_TX_USED_MASK);
}
gem_update_int_status(s);
}
--
2.34.1
diff view generated by jsdifflib
1
The loads-and-stores documentation includes git grep regexes to find
1
From: Luc Michel <luc.michel@amd.com>
2
occurrences of the various functions. Some of these regexes have
3
errors, typically failing to escape the '?', '(' and ')' when they
4
should be metacharacters (since these are POSIX basic REs). We also
5
weren't consistent about whether to have a ':' on the end of the
6
line introducing the list of regexes in each section.
7
2
8
Fix the errors.
3
Use the FIELD macro to describe the DESCONF6 register fields.
9
4
10
The following shell rune will complain about any REs in the
5
Signed-off-by: Luc Michel <luc.michel@amd.com>
11
file which don't have any matches in the codebase:
6
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
12
for re in $(sed -ne 's/ - ``\(\\<.*\)``/\1/p' docs/devel/loads-stores.rst); do git grep -q "$re" || echo "no matches for re $re"; done
7
Message-id: 20231017194422.4124691-9-luc.michel@amd.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
hw/net/cadence_gem.c | 4 ++--
11
1 file changed, 2 insertions(+), 2 deletions(-)
13
12
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
15
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
16
Message-id: 20230904161703.3996734-1-peter.maydell@linaro.org
17
---
18
docs/devel/loads-stores.rst | 40 ++++++++++++++++++-------------------
19
1 file changed, 20 insertions(+), 20 deletions(-)
20
21
diff --git a/docs/devel/loads-stores.rst b/docs/devel/loads-stores.rst
22
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
23
--- a/docs/devel/loads-stores.rst
15
--- a/hw/net/cadence_gem.c
24
+++ b/docs/devel/loads-stores.rst
16
+++ b/hw/net/cadence_gem.c
25
@@ -XXX,XX +XXX,XX @@ which stores ``val`` to ``ptr`` as an ``{endian}`` order value
17
@@ -XXX,XX +XXX,XX @@ REG32(DESCONF3, 0x288)
26
of size ``sz`` bytes.
18
REG32(DESCONF4, 0x28c)
27
19
REG32(DESCONF5, 0x290)
28
20
REG32(DESCONF6, 0x294)
29
-Regexes for git grep
21
-#define GEM_DESCONF6_64B_MASK (1U << 23)
30
+Regexes for git grep:
22
+ FIELD(DESCONF6, DMA_ADDR_64B, 23, 1)
31
- ``\<ld[us]\?[bwlq]\(_[hbl]e\)\?_p\>``
23
REG32(DESCONF7, 0x298)
32
- ``\<st[bwlq]\(_[hbl]e\)\?_p\>``
24
33
- ``\<st24\(_[hbl]e\)\?_p\>``
25
REG32(INT_Q1_STATUS, 0x400)
34
- - ``\<ldn_\([hbl]e\)?_p\>``
26
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
35
- - ``\<stn_\([hbl]e\)?_p\>``
27
s->regs[R_DESCONF] = 0x02D00111;
36
+ - ``\<ldn_\([hbl]e\)\?_p\>``
28
s->regs[R_DESCONF2] = 0x2ab10000 | s->jumbo_max_len;
37
+ - ``\<stn_\([hbl]e\)\?_p\>``
29
s->regs[R_DESCONF5] = 0x002f2045;
38
30
- s->regs[R_DESCONF6] = GEM_DESCONF6_64B_MASK;
39
``cpu_{ld,st}*_mmu``
31
+ s->regs[R_DESCONF6] = R_DESCONF6_DMA_ADDR_64B_MASK;
40
~~~~~~~~~~~~~~~~~~~~
32
s->regs[R_INT_Q1_MASK] = 0x00000CE6;
41
@@ -XXX,XX +XXX,XX @@ store: ``cpu_st{size}{end}_mmu(env, ptr, val, oi, retaddr)``
33
s->regs[R_JUMBO_MAX_LEN] = s->jumbo_max_len;
42
- ``_le`` : little endian
34
43
44
Regexes for git grep:
45
- - ``\<cpu_ld[bwlq](_[bl]e)\?_mmu\>``
46
- - ``\<cpu_st[bwlq](_[bl]e)\?_mmu\>``
47
+ - ``\<cpu_ld[bwlq]\(_[bl]e\)\?_mmu\>``
48
+ - ``\<cpu_st[bwlq]\(_[bl]e\)\?_mmu\>``
49
50
51
``cpu_{ld,st}*_mmuidx_ra``
52
@@ -XXX,XX +XXX,XX @@ store: ``cpu_st{size}{end}_mmuidx_ra(env, ptr, val, mmuidx, retaddr)``
53
- ``_le`` : little endian
54
55
Regexes for git grep:
56
- - ``\<cpu_ld[us]\?[bwlq](_[bl]e)\?_mmuidx_ra\>``
57
- - ``\<cpu_st[bwlq](_[bl]e)\?_mmuidx_ra\>``
58
+ - ``\<cpu_ld[us]\?[bwlq]\(_[bl]e\)\?_mmuidx_ra\>``
59
+ - ``\<cpu_st[bwlq]\(_[bl]e\)\?_mmuidx_ra\>``
60
61
``cpu_{ld,st}*_data_ra``
62
~~~~~~~~~~~~~~~~~~~~~~~~
63
@@ -XXX,XX +XXX,XX @@ store: ``cpu_st{size}{end}_data_ra(env, ptr, val, ra)``
64
- ``_le`` : little endian
65
66
Regexes for git grep:
67
- - ``\<cpu_ld[us]\?[bwlq](_[bl]e)\?_data_ra\>``
68
- - ``\<cpu_st[bwlq](_[bl]e)\?_data_ra\>``
69
+ - ``\<cpu_ld[us]\?[bwlq]\(_[bl]e\)\?_data_ra\>``
70
+ - ``\<cpu_st[bwlq]\(_[bl]e\)\?_data_ra\>``
71
72
``cpu_{ld,st}*_data``
73
~~~~~~~~~~~~~~~~~~~~~
74
@@ -XXX,XX +XXX,XX @@ store: ``cpu_st{size}{end}_data(env, ptr, val)``
75
- ``_be`` : big endian
76
- ``_le`` : little endian
77
78
-Regexes for git grep
79
- - ``\<cpu_ld[us]\?[bwlq](_[bl]e)\?_data\>``
80
- - ``\<cpu_st[bwlq](_[bl]e)\?_data\+\>``
81
+Regexes for git grep:
82
+ - ``\<cpu_ld[us]\?[bwlq]\(_[bl]e\)\?_data\>``
83
+ - ``\<cpu_st[bwlq]\(_[bl]e\)\?_data\+\>``
84
85
``cpu_ld*_code``
86
~~~~~~~~~~~~~~~~
87
@@ -XXX,XX +XXX,XX @@ swap: ``translator_ld{sign}{size}_swap(env, ptr, swap)``
88
- ``l`` : 32 bits
89
- ``q`` : 64 bits
90
91
-Regexes for git grep
92
+Regexes for git grep:
93
- ``\<translator_ld[us]\?[bwlq]\(_swap\)\?\>``
94
95
``helper_{ld,st}*_mmu``
96
@@ -XXX,XX +XXX,XX @@ store: ``helper_{size}_mmu(env, addr, val, opindex, retaddr)``
97
- ``l`` : 32 bits
98
- ``q`` : 64 bits
99
100
-Regexes for git grep
101
+Regexes for git grep:
102
- ``\<helper_ld[us]\?[bwlq]_mmu\>``
103
- ``\<helper_st[bwlq]_mmu\>``
104
105
@@ -XXX,XX +XXX,XX @@ succeeded using a MemTxResult return code.
106
107
The ``_{endian}`` suffix is omitted for byte accesses.
108
109
-Regexes for git grep
110
+Regexes for git grep:
111
- ``\<address_space_\(read\|write\|rw\)\>``
112
- ``\<address_space_ldu\?[bwql]\(_[lb]e\)\?\>``
113
- ``\<address_space_st[bwql]\(_[lb]e\)\?\>``
114
@@ -XXX,XX +XXX,XX @@ Note that portions of the write which attempt to write data to a
115
device will be silently ignored -- only real RAM and ROM will
116
be written to.
117
118
-Regexes for git grep
119
+Regexes for git grep:
120
- ``address_space_write_rom``
121
122
``{ld,st}*_phys``
123
@@ -XXX,XX +XXX,XX @@ device doing the access has no way to report such an error.
124
125
The ``_{endian}_`` infix is omitted for byte accesses.
126
127
-Regexes for git grep
128
+Regexes for git grep:
129
- ``\<ldu\?[bwlq]\(_[bl]e\)\?_phys\>``
130
- ``\<st[bwlq]\(_[bl]e\)\?_phys\>``
131
132
@@ -XXX,XX +XXX,XX @@ For new code they are better avoided:
133
134
``cpu_physical_memory_rw``
135
136
-Regexes for git grep
137
+Regexes for git grep:
138
- ``\<cpu_physical_memory_\(read\|write\|rw\)\>``
139
140
``cpu_memory_rw_debug``
141
@@ -XXX,XX +XXX,XX @@ make sure our existing code is doing things correctly.
142
143
``dma_memory_rw``
144
145
-Regexes for git grep
146
+Regexes for git grep:
147
- ``\<dma_memory_\(read\|write\|rw\)\>``
148
- ``\<ldu\?[bwlq]\(_[bl]e\)\?_dma\>``
149
- ``\<st[bwlq]\(_[bl]e\)\?_dma\>``
150
@@ -XXX,XX +XXX,XX @@ correct address space for that device.
151
152
The ``_{endian}_`` infix is omitted for byte accesses.
153
154
-Regexes for git grep
155
+Regexes for git grep:
156
- ``\<pci_dma_\(read\|write\|rw\)\>``
157
- ``\<ldu\?[bwlq]\(_[bl]e\)\?_pci_dma\>``
158
- ``\<st[bwlq]\(_[bl]e\)\?_pci_dma\>``
159
--
35
--
160
2.34.1
36
2.34.1
161
37
162
38
diff view generated by jsdifflib
From: Luc Michel <luc.michel@amd.com>

Use the FIELD macro to describe the PHYMNTNC register fields.

Signed-off-by: Luc Michel <luc.michel@amd.com>
Reviewed-by: sai.pavan.boddu@amd.com
Message-id: 20231017194422.4124691-10-luc.michel@amd.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/net/cadence_gem.c | 27 ++++++++++++++-------------
1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -XXX,XX +XXX,XX @@ REG32(IDR, 0x2c) /* Interrupt Disable reg */
REG32(IMR, 0x30) /* Interrupt Mask reg */

REG32(PHYMNTNC, 0x34) /* Phy Maintenance reg */
+ FIELD(PHYMNTNC, DATA, 0, 16)
+ FIELD(PHYMNTNC, REG_ADDR, 18, 5)
+ FIELD(PHYMNTNC, PHY_ADDR, 23, 5)
+ FIELD(PHYMNTNC, OP, 28, 2)
+ FIELD(PHYMNTNC, ST, 30, 2)
+#define MDIO_OP_READ 0x3
+#define MDIO_OP_WRITE 0x2
+
REG32(RXPAUSE, 0x38) /* RX Pause Time reg */
REG32(TXPAUSE, 0x3c) /* TX Pause Time reg */
REG32(TXPARTIALSF, 0x40) /* TX Partial Store and Forward */
@@ -XXX,XX +XXX,XX @@ REG32(TYPE2_COMPARE_0_WORD_1, 0x704)


-#define GEM_PHYMNTNC_OP_R 0x20000000 /* read operation */
-#define GEM_PHYMNTNC_OP_W 0x10000000 /* write operation */
-#define GEM_PHYMNTNC_ADDR 0x0F800000 /* Address bits */
-#define GEM_PHYMNTNC_ADDR_SHFT 23
-#define GEM_PHYMNTNC_REG 0x007C0000 /* register bits */
-#define GEM_PHYMNTNC_REG_SHIFT 18
-
/* Marvell PHY definitions */
#define BOARD_PHY_ADDRESS 0 /* PHY address we will emulate a device at */

@@ -XXX,XX +XXX,XX @@ static uint64_t gem_read(void *opaque, hwaddr offset, unsigned size)
/* The interrupts get updated at the end of the function. */
break;
case R_PHYMNTNC:
- if (retval & GEM_PHYMNTNC_OP_R) {
+ if (FIELD_EX32(retval, PHYMNTNC, OP) == MDIO_OP_READ) {
uint32_t phy_addr, reg_num;

- phy_addr = (retval & GEM_PHYMNTNC_ADDR) >> GEM_PHYMNTNC_ADDR_SHFT;
+ phy_addr = FIELD_EX32(retval, PHYMNTNC, PHY_ADDR);
if (phy_addr == s->phy_addr) {
- reg_num = (retval & GEM_PHYMNTNC_REG) >> GEM_PHYMNTNC_REG_SHIFT;
+ reg_num = FIELD_EX32(retval, PHYMNTNC, REG_ADDR);
retval &= 0xFFFF0000;
retval |= gem_phy_read(s, reg_num);
} else {
@@ -XXX,XX +XXX,XX @@ static void gem_write(void *opaque, hwaddr offset, uint64_t val,
s->sar_active[(offset - R_SPADDR1HI) / 2] = true;
break;
case R_PHYMNTNC:
- if (val & GEM_PHYMNTNC_OP_W) {
+ if (FIELD_EX32(val, PHYMNTNC, OP) == MDIO_OP_WRITE) {
uint32_t phy_addr, reg_num;

- phy_addr = (val & GEM_PHYMNTNC_ADDR) >> GEM_PHYMNTNC_ADDR_SHFT;
+ phy_addr = FIELD_EX32(val, PHYMNTNC, PHY_ADDR);
if (phy_addr == s->phy_addr) {
- reg_num = (val & GEM_PHYMNTNC_REG) >> GEM_PHYMNTNC_REG_SHIFT;
+ reg_num = FIELD_EX32(val, PHYMNTNC, REG_ADDR);
gem_phy_write(s, reg_num, val);
}
}
--
2.34.1
From: Luc Michel <luc.michel@amd.com>

The MDIO access is done only on a write to the PHYMNTNC register. A
subsequent read is used to retrieve the result but does not trigger an
MDIO access by itself.

Refactor the PHY access logic to perform all accesses (MDIO reads and
writes) at PHYMNTNC write time.

Signed-off-by: Luc Michel <luc.michel@amd.com>
Reviewed-by: sai.pavan.boddu@amd.com
Message-id: 20231017194422.4124691-11-luc.michel@amd.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/net/cadence_gem.c | 56 ++++++++++++++++++++++++++------------------
1 file changed, 33 insertions(+), 23 deletions(-)

diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -XXX,XX +XXX,XX @@ static void gem_phy_write(CadenceGEMState *s, unsigned reg_num, uint16_t val)
s->phy_regs[reg_num] = val;
}

+static void gem_handle_phy_access(CadenceGEMState *s)
+{
+ uint32_t val = s->regs[R_PHYMNTNC];
+ uint32_t phy_addr, reg_num;
+
+ phy_addr = FIELD_EX32(val, PHYMNTNC, PHY_ADDR);
+
+ if (phy_addr != s->phy_addr) {
+ /* no phy at this address */
+ if (FIELD_EX32(val, PHYMNTNC, OP) == MDIO_OP_READ) {
+ s->regs[R_PHYMNTNC] = FIELD_DP32(val, PHYMNTNC, DATA, 0xffff);
+ }
+ return;
+ }
+
+ reg_num = FIELD_EX32(val, PHYMNTNC, REG_ADDR);
+
+ switch (FIELD_EX32(val, PHYMNTNC, OP)) {
+ case MDIO_OP_READ:
+ s->regs[R_PHYMNTNC] = FIELD_DP32(val, PHYMNTNC, DATA,
+ gem_phy_read(s, reg_num));
+ break;
+
+ case MDIO_OP_WRITE:
+ gem_phy_write(s, reg_num, val);
+ break;
+
+ default:
+ break; /* only clause 22 operations are supported */
+ }
+}
+
/*
* gem_read32:
* Read a GEM register.
@@ -XXX,XX +XXX,XX @@ static uint64_t gem_read(void *opaque, hwaddr offset, unsigned size)
DB_PRINT("lowering irqs on ISR read\n");
/* The interrupts get updated at the end of the function. */
break;
- case R_PHYMNTNC:
- if (FIELD_EX32(retval, PHYMNTNC, OP) == MDIO_OP_READ) {
- uint32_t phy_addr, reg_num;
-
- phy_addr = FIELD_EX32(retval, PHYMNTNC, PHY_ADDR);
- if (phy_addr == s->phy_addr) {
- reg_num = FIELD_EX32(retval, PHYMNTNC, REG_ADDR);
- retval &= 0xFFFF0000;
- retval |= gem_phy_read(s, reg_num);
- } else {
- retval |= 0xFFFF; /* No device at this address */
- }
- }
- break;
}

/* Squash read to clear bits */
@@ -XXX,XX +XXX,XX @@ static void gem_write(void *opaque, hwaddr offset, uint64_t val,
s->sar_active[(offset - R_SPADDR1HI) / 2] = true;
break;
case R_PHYMNTNC:
- if (FIELD_EX32(val, PHYMNTNC, OP) == MDIO_OP_WRITE) {
- uint32_t phy_addr, reg_num;
-
- phy_addr = FIELD_EX32(val, PHYMNTNC, PHY_ADDR);
- if (phy_addr == s->phy_addr) {
- reg_num = FIELD_EX32(val, PHYMNTNC, REG_ADDR);
- gem_phy_write(s, reg_num, val);
- }
- }
+ gem_handle_phy_access(s);
break;
}

--
2.34.1
From: Luc Michel <luc.michel@amd.com>

The CRC was stored in an unsigned variable in gem_receive. Change it to
a uint32_t to ensure we have the correct variable size here.

Signed-off-by: Luc Michel <luc.michel@amd.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: sai.pavan.boddu@amd.com
Message-id: 20231017194422.4124691-12-luc.michel@amd.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/net/cadence_gem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -XXX,XX +XXX,XX @@ static ssize_t gem_receive(NetClientState *nc, const uint8_t *buf, size_t size)
if (FIELD_EX32(s->regs[R_NWCFG], NWCFG, FCS_REMOVE)) {
rxbuf_ptr = (void *)buf;
} else {
- unsigned crc_val;
+ uint32_t crc_val;

if (size > MAX_FRAME_SIZE - sizeof(crc_val)) {
size = MAX_FRAME_SIZE - sizeof(crc_val);

--
2.34.1